DevOps Engineer

at Telecom IT Company

Agency job
4 - 10 yrs
₹5L - ₹15L / yr
Gurugram, Bengaluru (Bangalore)
Skills
Docker
Kubernetes
DevOps
Ansible
Linux/Unix
Amazon Web Services (AWS)

Job Description:

 

  • Hands-on experience with Ansible and Terraform.
  • Proficiency in a scripting language such as Python, Bash, or PowerShell, with a willingness to learn and master others.
  • Troubleshooting and resolving automation, build, and CI/CD issues in cloud environments such as AWS or Azure.
  • Experience with Kubernetes is mandatory.
  • Develop and maintain tooling and environments for test and production.
  • Assist team members in developing and maintaining tooling for integration testing, performance testing, security testing, and source control systems (including CI systems like Azure DevOps and TeamCity, and orchestration tools like Octopus).
  • Comfortable working in a Linux environment.
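
The list above centres on automation and Kubernetes troubleshooting. As a rough illustration of that kind of task, here is a minimal Python sketch that flags unhealthy pods from the JSON that `kubectl get pods -o json` prints (the inline sample document is invented for demonstration):

```python
import json

def failing_pods(kubectl_json: str) -> list[str]:
    """Return names of pods whose phase is neither Running nor Succeeded.

    Expects the PodList JSON document printed by `kubectl get pods -o json`.
    """
    doc = json.loads(kubectl_json)
    bad = []
    for pod in doc.get("items", []):
        phase = pod.get("status", {}).get("phase", "Unknown")
        if phase not in ("Running", "Succeeded"):
            bad.append(pod["metadata"]["name"])
    return bad

# Invented sample standing in for real `kubectl` output.
sample = json.dumps({
    "items": [
        {"metadata": {"name": "web-1"}, "status": {"phase": "Running"}},
        {"metadata": {"name": "worker-2"}, "status": {"phase": "Pending"}},
    ]
})
print(failing_pods(sample))  # ['worker-2']
```

In practice a script like this would shell out to `kubectl` (or use a Kubernetes client library) and feed the result into an alerting or CI step.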
Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Shubham Vishwakarma

Full Stack Developer - Averlon
I had an amazing experience. It was a delight getting interviewed via Cutshort. The entire end to end process was amazing. I would like to mention Reshika, she was just amazing wrt guiding me through the process. Thank you team.

Similar jobs

HaystackAnalytics
Posted by Careers Hr
Mumbai, Navi Mumbai
2 - 3 yrs
₹7L - ₹13L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+4 more

Job Description


Position - SRE developer / DevOps Engineer

Location - Mumbai

Experience - 3-10 years


 

About HaystackAnalytics:

HaystackAnalytics is a company working in the deep technology of genomics, computing, and data science, creating a first-of-its-kind clinical reporting engine in healthcare. We are a new but well-funded company with a tremendous amount of pedigree in the team (IIT founders, IIT & IIM core team). Some of the technologies we have created are a global first in infectious disease and chronic diagnostics. As a product company creating a huge amount of IP, our technology and R&D teams are our crown jewels. With the early success of our products in India, we are now expanding to take our products to international shores.


Inviting Passionate Engineers to join a new age enterprise:  

At HaystackAnalytics, we rely on our dynamic team of engineers to solve the many challenges and puzzles that come with our rapidly evolving stack that deals with Healthcare and Genomics.

We’re looking for full stack engineers who are passionate problem solvers, ready to work with new technologies and architectures in a forward-thinking organization that’s always pushing boundaries. Here, you will take complete, end-to-end ownership of projects across the entire stack. 

Our ideal candidate has experience building enterprise products, and an understanding of and experience with new-age front-end technologies, web frameworks, APIs, databases, distributed computing, back-end languages, caching, security, and message-based architectures.

You’ll be joining a small team working at the forefront of new technology, solving the challenges that impact both the front end and back end architecture, and ultimately, delivering amazing global user experiences.



Objectives of this Role:

  • Work across the full stack, building highly scalable distributed solutions that enable positive user experiences and measurable business growth
  • Ideate and develop new product features in collaboration with domain experts in healthcare and genomics 
  • Develop state of the art enterprise standard front-end and backend services
  • Develop cloud platform services based on container orchestration platform 
  • Continuously embrace automation for repetitive tasks
  • Ensure application performance, uptime, and scale, maintaining high standards of code quality by using clean coding principles and solid design patterns 
  • Build robust tech modules that are unit-testable, automating recurring tasks and processes
  • Engage effectively with team members and collaborate to upskill and unblock each other

Frontend Skills 

  • HTML5
  • CSS preprocessors (LESS/SASS)
  • ES6 / TypeScript
  • Electron app / Tauri
  • Component libraries (Web Components / Radix / Material)
  • CSS (Tailwind)
  • State management (Redux / Zustand / Recoil)
  • Build tools (Webpack / Vite / Parcel / Turborepo)
  • Frameworks (Next.js)
  • Design patterns
  • Test automation frameworks (Cypress, Playwright, etc.)
  • Functional programming concepts
  • Scripting (Bash, Python)


Backend Skills 

  1. Node / Deno / Bun - Express / NestJS
  2. Languages: TypeScript / Python / Rust
  3. REST / GraphQL
  4. SOLID design principles
  5. Storage (MongoDB / object storage / PostgreSQL)
  6. Caching (Redis / in-memory data grid)
  7. Pub/sub (Kafka / SQS / SNS / EventBridge / RabbitMQ)
  8. Container technology (Docker / Kubernetes)
  9. Cloud (Azure, AWS, OpenShift)
  10. GitOps
  11. Automation (Terraform, serverless)
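
The caching item above (Redis / in-memory data grid) can be illustrated with a tiny in-process sketch; `TTLCache` here is a hypothetical helper, not any product's API, and a real deployment would use Redis with its built-in key expiry instead:

```python
import time

class TTLCache:
    """Minimal in-process cache with per-entry expiry (illustrative only)."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict expired entries on read
            return default
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("user:42", {"name": "Ada"})
print(cache.get("user:42"))          # {'name': 'Ada'}
time.sleep(0.06)
print(cache.get("user:42", "miss"))  # miss
```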

Other Skills 

  • Innovation and thought leadership
  • UI - UX design skills  
  • Interest in learning new tools, languages, workflows, and philosophies to grow
  • Communication 
Thinqor
Posted by Ravikanth Dangeti
Bengaluru (Bangalore)
5 - 20 yrs
₹20L - ₹22L / yr
Amazon Web Services (AWS)
EKS
Terraform
DataDog
+3 more

General Description:


Owns all technical aspects of software development for assigned applications.

Participates in the design and development of systems and application programs.

Functions as a senior member of an agile team and helps drive consistent development practices - tools, common components, and documentation.


Required Skills:


In-depth experience configuring and administering EKS clusters in AWS.

In-depth experience configuring DataDog in AWS environments, especially in EKS.

In-depth understanding of OpenTelemetry and configuration of OpenTelemetry Collectors.

In-depth knowledge of observability concepts and strong troubleshooting experience.

Experience implementing comprehensive monitoring and logging solutions in AWS using CloudWatch.

Experience with Terraform and infrastructure as code.

Experience with Helm.

Strong scripting skills in shell and/or Python.

Experience with large-scale distributed systems and architecture knowledge (Linux/UNIX and Windows operating systems, networking, storage) in a cloud computing or traditional IT infrastructure environment.

Must have a good understanding of cloud concepts (storage/compute/network).

Experience collaborating with several cross-functional teams to architect observability pipelines for various GCP services such as GKE, Cloud Run, BigQuery, etc.

Experience with Git and GitHub.

Proficient in developing and maintaining technical documentation, ADRs, and runbooks.
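
The observability requirements above ultimately reduce to turning raw measurements into percentile summaries that alerts can fire on. A tool-agnostic sketch (the sample latencies are invented; DataDog and OpenTelemetry compute such summaries for you):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest sample with at least p% of values <= it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank, 1) - 1]

latencies_ms = [12, 15, 11, 240, 14, 13, 16, 18, 500, 17]  # invented request latencies
print(percentile(latencies_ms, 50))  # 15
print(percentile(latencies_ms, 95))  # 500

# A trivial SLO check of the kind a monitor would encode:
SLO_P95_MS = 300
breached = percentile(latencies_ms, 95) > SLO_P95_MS
print("alert!" if breached else "ok")  # alert!
```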


Infra360 Solutions Pvt Ltd
Posted by HR Infra360
Gurugram
3 - 7 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Please Apply - https://zrec.in/L51Qf?source=CareerSite


About Us

Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture in their organization by focusing on a long-term DevOps roadmap. We focus on identifying technical and cultural issues in the journey of successfully implementing DevOps practices, and work with the respective teams to fix those issues and increase overall productivity. We also run training sessions for developers to help them realize the importance of DevOps.

We provide these services: DevOps, DevSecOps, FinOps, Cost Optimization, CI/CD, Observability, Cloud Security, Containerization, Cloud Migration, Site Reliability, Performance Optimization, SIEM and SecOps, Serverless Automation, Well-Architected Review, MLOps, and Governance, Risk & Compliance.

We assess technology architecture, security, governance, compliance, and the DevOps maturity model for any technology company and help them optimize their cloud cost, streamline their technology architecture, and set up processes to improve the availability and reliability of their website and applications. We set up tools for monitoring, logging, and observability, and focus on bringing the DevOps culture to the organization to improve its efficiency and delivery.


Job Description

Job Title: Senior DevOps Engineer (Infrastructure/SRE)

Department: Technology

Location: Gurgaon

Work Mode: On-site

Working Hours: 10 AM - 7 PM

Terms: Permanent

Experience: 4-6 years

Education: B.Tech/MCA

Notice Period: Immediately



About Us

At Infra360.io, we are a next-generation cloud consulting and services company committed to delivering comprehensive, 360-degree solutions for cloud, infrastructure, DevOps, and security. We partner with clients to transform and optimize their technology landscape, ensuring resilience, scalability, cost efficiency and innovation.

Our core services include Cloud Strategy, Site Reliability Engineering (SRE), DevOps, Cloud Security Posture Management (CSPM), and related Managed Services. We specialize in driving operational excellence across multi-cloud environments, helping businesses achieve their goals with agility and reliability.

We thrive on ownership, collaboration, problem-solving, and excellence, fostering an environment where innovation and continuous learning are at the forefront. Join us as we expand and redefine what’s possible in cloud technology and infrastructure.


Role Summary

We are looking for a Senior DevOps Engineer (Infrastructure) to design, automate, and manage cloud-based and datacentre infrastructure for diverse projects. The ideal candidate will have deep expertise in a public cloud platform (AWS, GCP, or Azure), with a strong focus on cost optimization, security best practices, and infrastructure automation using tools like Terraform and CI/CD pipelines.

This role involves designing scalable architectures (containers, serverless, and VMs), managing databases, and ensuring system observability with tools like Prometheus and Grafana. Strong leadership, client communication, and team mentoring skills are essential. Experience with VPN technologies and configuration management tools (Ansible, Helm) is also critical. Multi-cloud experience and familiarity with APM tools are a plus.


Ideal Candidate Profile


  • Solid 4-6 years of experience as a DevOps engineer, with a proven track record of architecting and automating solutions on the cloud
  • Experience troubleshooting production incidents and handling high-pressure situations
  • Strong leadership skills and the ability to mentor team members and provide guidance on best practices
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field
  • Extensive experience with Kubernetes, Terraform, ArgoCD, and Helm
  • Strong with at least one public cloud (AWS/GCP/Azure)
  • Strong with cost optimization and security best practices
  • Strong with infrastructure automation using Terraform and CI/CD automation
  • Strong with configuration management using Ansible, Helm, etc.
  • Good at designing architectures (containers, serverless, VMs, etc.)
  • Hands-on experience working on multiple projects
  • Strong client communication and requirements-gathering skills
  • Database management experience
  • Good experience with Prometheus, Grafana, and Alertmanager
  • Able to manage multiple clients and take ownership of client issues
  • Experience with Git and coding best practices
  • Proficiency in cloud networking, including VPCs, DNS, VPNs (OpenVPN, Openswan, Pritunl, site-to-site VPNs), load balancers, and firewalls, ensuring secure and efficient connectivity
  • Strong understanding of cloud security best practices, identity and access management (IAM), and compliance requirements for modern infrastructure


Good to have

  • Multi-cloud experience with AWS, GCP & Azure
  • Experience with APM and observability tools such as New Relic, Datadog, and OpenTelemetry
  • Proficiency in scripting languages (Python, Go) for automation and tooling to improve infrastructure and application reliability.
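
One concrete shape the scripting bullet above often takes is a retry-with-exponential-backoff helper for flaky cloud API calls. Everything below is an illustrative sketch (names invented), and it assumes the wrapped operation is idempotent and safe to retry:

```python
import time

def retry(attempts=3, base_delay=0.01, backoff=2.0, sleep=time.sleep):
    """Retry a flaky callable, doubling the delay after each failure."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts:
                        raise  # out of attempts: surface the error
                    sleep(delay)
                    delay *= backoff
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=4, base_delay=0.001)
def flaky_provision():
    """Stand-in for an idempotent cloud API call that fails transiently."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient API error")
    return "ok"

print(flaky_provision())  # ok, after two transient failures
```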


Key Responsibilities


  1. Design and Development:
    • Architect, design, and develop high-quality, scalable, and secure cloud-based software solutions.
    • Collaborate with product and engineering teams to translate business requirements into technical specifications.
    • Write clean, maintainable, and efficient code, following best practices and coding standards.
  2. Cloud Infrastructure:
    • Develop and optimise cloud-native applications, leveraging cloud services like AWS, Azure, or Google Cloud Platform (GCP).
    • Implement and manage CI/CD pipelines for automated deployment and testing.
    • Ensure the security, reliability, and performance of cloud infrastructure.
  3. Technical Leadership:
    • Mentor and guide junior engineers, providing technical leadership and fostering a collaborative team environment.
    • Participate in code reviews, ensuring adherence to best practices and high-quality code delivery.
    • Lead technical discussions and contribute to architectural decisions.
  4. Problem Solving and Troubleshooting:
    • Identify, diagnose, and resolve complex software and infrastructure issues.
    • Perform root cause analysis for production incidents and implement preventative measures.
  5. Continuous Improvement:
    • Stay up-to-date with the latest industry trends, tools, and technologies in cloud computing and software engineering.
    • Contribute to the continuous improvement of development processes, tools, and methodologies.
    • Drive innovation by experimenting with new technologies and solutions to enhance the platform.
  6. Collaboration:
    • Work closely with DevOps, QA, and other teams to ensure smooth integration and delivery of software releases.
    • Communicate effectively with stakeholders, including technical and non-technical team members.
  7. Client Interaction & Management:
    • Serve as a direct point of contact for multiple clients.
    • Handle the unique technical needs and challenges of two or more clients concurrently.
    • Involves both direct interaction with clients and internal team coordination.
  8. Production Systems Management:
    • Extensive experience managing, monitoring, and debugging production environments.
    • Troubleshoot complex issues and ensure production systems run smoothly with minimal downtime.
Egnyte
Posted by John Vivek
Remote only
8 - 15 yrs
Best in industry
Windows Azure
Python
Kubernetes
DevOps

Staff DevOps Engineer with Azure

 

EGNYTE YOUR CAREER. SPARK YOUR PASSION.

Egnyte is a place where we spark opportunities for amazing people. We believe that every role has meaning, and every Egnyter should be respected. With 22,000+ customers worldwide and growing, you can make an impact by protecting their valuable data. When joining Egnyte, you’re not just landing a new career, you become part of a team of Egnyters that are doers, thinkers, and collaborators who embrace and live by our values:


Invested Relationships

Fiscal Prudence

Candid Conversations

 


ABOUT EGNYTE


Egnyte is the secure multi-cloud platform for content security and governance that enables organizations to better protect and collaborate on their most valuable content. Established in 2008, Egnyte has democratized cloud content security for more than 22,000 organizations, helping customers improve data security, maintain compliance, prevent and detect ransomware threats, and boost employee productivity on any app, any cloud, anywhere. For more information, visit www.egnyte.com.

 

Our Production Engineering team enables Egnyte to provide customers access to their data 24/7 by providing best-in-class infrastructure.

 

ABOUT THE ROLE

We store billions of files and multiple petabytes of data, and we observe more than 11K API requests per second on average. To make that possible and to provide the best possible experience, we rely on great engineers. For us, people who own their work, from start to finish, are integral. Our engineers are part of the process from design, to code, to test, to deployment, and back again for further iterations. You can, and will, touch every level of the infrastructure depending on the day and what project you are working on. The ideal candidate should be able to take a complex problem and execute it end to end, and mentor and set higher standards for the rest of the team and for new hires.

 

WHAT YOU’LL DO:

  • Design, build, and maintain self-hosted and cloud environments to serve our own applications and services.
  • Collaborate with software developers to build stable, scalable, and high-performance solutions.
  • Take part in big projects like migrating solutions from self-hosted environments to the cloud, from virtual machines to Kubernetes, and from monolith to microservices.
  • Proactively make our organization and technology better!
  • Advise others on how DevOps can make a positive impact on their work.
  • Share knowledge and mentor more junior team members while also still learning and gaining new skills.
  • Maintain consistently high standards of communication, productivity, and teamwork across all teams.

 

 

YOUR QUALIFICATIONS:

  • 5+ years of proven experience in a DevOps Engineer, System Administrator, or Developer role, working on infrastructure or build processes.
  • Expert knowledge of Microsoft Azure.
  • Programming prowess (Python, Golang).
  • Knowledge and experience of deploying and maintaining Java and Python apps using application and web servers (Tomcat, Nginx, etc.).
  • Ability to solve complex problems with simple, elegant, and clean code.
  • Practical knowledge of CI/CD solutions, GitLab CI or similar.
  • Practical knowledge of Docker as a tool for testing and building an environment.
  • Knowledge of Kubernetes and related technologies.
  • Experience with metric-based monitoring solutions.
  • Solid English skills to effectively communicate with other team members.
  • Good understanding of the Linux operating system at the administration level.
  • Drive to grow as a DevOps Engineer (we value open-mindedness and a can-do attitude).
  • Strong sense of ownership and ability to drive big projects.

 

BONUS SKILLS:

  • Work experience as a Microsoft Azure architect.
  • Experience in cloud migration projects.
  • Leadership skills and experience.

 

 

COMMITMENT TO DIVERSITY, EQUITY, AND INCLUSION:

At Egnyte, we celebrate our differences and thrive on our diversity for our employees, our products, our customers, our investors, and our communities. Egnyters are encouraged to bring their whole selves to work and to appreciate the many differences that collectively make Egnyte a higher-performing company and a great place to be.

Concentric AI
Posted by Gopal Agarwal
Pune
4 - 10 yrs
₹10L - ₹45L / yr
Python
Shell Scripting
DevOps
Amazon Web Services (AWS)
Infrastructure architecture
+7 more
About us:

Ask any CIO about corporate data and they’ll happily share all the work they’ve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you’ll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.

There was a time when that didn’t matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It’s a target for both cybercriminals and regulators but securing it is incredibly difficult. It’s the data challenge of our generation.

Existing approaches aren’t doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What’s needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.

That’s our mission. Concentric’s semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.

Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/

Title: Cloud DevOps Engineer 

Role: Individual Contributor (4-8 yrs)  

      

Requirements: 

  • Energetic self-starter and fast learner, with a desire to work in a startup environment
  • Experience working with public clouds like AWS
  • Operating and monitoring cloud infrastructure on AWS
  • Primary focus on building, implementing, and managing operational support
  • Design, develop, and troubleshoot automation scripts (configuration/infrastructure as code or others) for managing infrastructure
  • Expert in at least one scripting language - Python, shell, etc.
  • Experience with Nginx/HAProxy, the ELK stack, Ansible, Terraform, the Prometheus-Grafana stack, etc.
  • Handling load monitoring, capacity planning, and services monitoring
  • Proven experience with CI/CD pipelines and handling database upgrade-related issues
  • Good understanding of and experience working with containerized environments like Kubernetes, and datastores like Cassandra, Elasticsearch, MongoDB, etc.
RaRa Now
Posted by N SHUBHANGINI
Remote only
2 - 8 yrs
₹7L - ₹15L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+1 more

About RaRa Delivery

Not just a delivery company…

RaRa Delivery is revolutionising instant delivery for e-commerce in Indonesia through data-driven logistics.

RaRa Delivery is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimisation technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on 'one-to-one' deliveries, the company has developed proprietary, real-time batching tech to do 'many-to-many' deliveries within a few hours. RaRa is already in partnership with some of the top eCommerce players in Indonesia like Blibli, Sayurbox, Kopi Kenangan, and many more.

We are a distributed team with the company headquartered in Singapore 🇸🇬 , core operations in Indonesia 🇮🇩 and technology team based out of India 🇮🇳

Future of eCommerce Logistics.

  • Data-driven logistics company that is bringing a same-day delivery revolution to Indonesia 🇮🇩
  • Revolutionising delivery as an experience
  • Empowering D2C Sellers with logistics as the core technology

About the Role

  • Build and maintain CI/CD tools and pipelines.
  • Designing and managing highly scalable, reliable, and fault-tolerant infrastructure & networking that forms the backbone of distributed systems at RaRa Delivery.
  • Continuously improve code quality, product execution, and customer delight.
  • Communicate, collaborate and work effectively across distributed teams in a global environment.
  • Operate to strengthen teams across their product with their knowledge base
  • Contribute to improving team relatedness, and help build a culture of camaraderie.
  • Continuously refactor applications to ensure high-quality design
  • Pair with team members on functional and non-functional requirements and spread design philosophy and goals across the team
  • Excellent Bash and scripting fundamentals, and hands-on scripting experience in programming languages such as Python, Ruby, Golang, etc.
  • Good understanding of distributed system fundamentals and ability to troubleshoot issues in a larger distributed infrastructure
  • Working knowledge of the TCP/IP stack, internet routing, and load balancing
  • Basic understanding of cluster orchestrators and schedulers (Kubernetes)
  • Deep knowledge of Linux as a production environment, container technologies (e.g. Docker), infrastructure as code such as Terraform, and K8s administration at large scale.
  • Have worked on production distributed systems and have an understanding of microservices architecture, RESTful services, CI/CD.
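
On the load-balancing point above, here is a toy round-robin backend selector. It is purely illustrative (addresses invented); production traffic would go through an L4/L7 balancer such as Nginx, HAProxy, or a cloud load balancer:

```python
import itertools

class RoundRobinBalancer:
    """Hand out backends in rotation - a toy model of round-robin balancing."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
print([lb.pick() for _ in range(4)])
# ['10.0.0.1:8080', '10.0.0.2:8080', '10.0.0.3:8080', '10.0.0.1:8080']
```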
Calibo
Posted by Ganesh B
Pune
4 - 8 yrs
₹10L - ₹30L / yr
DevOps
CI/CD
Ansible
Docker
Jenkins
+1 more
CI/CD tools (Jenkins/Bamboo/TeamCity/CircleCI), DevSecOps pipelines, cloud services (AWS/Azure/GCP), Ansible, Terraform, Docker, Helm, CloudFormation templates, web server deployment & configuration, database (SQL/NoSQL) deployment & configuration, Git, Artifactory, monitoring tools (Nagios, Grafana, Prometheus, etc.), application logs (ELK/EFK, Splunk, etc.), API gateways, security tools, Vault.
Antino Labs Private Limited
Posted by Aditi Gupta
Gurugram
1 - 4 yrs
₹2L - ₹6L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+3 more
Design and develop automated deployment arrangements by leveraging configuration management technology.
Implement various development, testing, and automation tools, and IT infrastructure.
Select and deploy appropriate CI/CD tools.

Required Candidate profile

Linux
Programming languages: Ruby, Python, Perl, and Java
Cloud platforms (AWS, Azure, GCP)
Working knowledge of a web server, e.g., NGINX or Apache
Working knowledge of open-source DevOps tools
US Healthcare IT Product company
Agency job
via Confidenthire by Aprajeeta Sinha
Pune
8 - 10 yrs
₹20L - ₹30L / yr
DevOps
Docker
Kubernetes
Jenkins
Amazon Web Services (AWS)

Responsibilities

  • Designing and building infrastructure to support AWS, Azure, and GCP-based Cloud services and infrastructure.
  • Creating and utilizing tools to monitor our applications and services in the cloud including system health indicators, trend identification, and anomaly detection.
  • Working with development teams to help engineer scalable, reliable, and resilient software running in the cloud.
  • Participating in on-call escalation to troubleshoot customer-facing issues
  • Analyzing and monitoring performance bottlenecks and key metrics to optimize software and system performance.
  • Providing analytics and forecasts for cloud capacity, troubleshooting analysis, and uptime.

Skills

  • Should have strong experience of a couple of years in leading a DevOps team and planning, defining, and executing a DevOps roadmap along with the team
  • Familiarity with the AWS cloud and JSON templates, Python, and AWS CloudFormation templates
  • Designing solutions using one or more AWS features, tools, and technologies such as EC2, EBS, Glacier, S3, ELB, CloudFormation, Lambda, CloudWatch, VPC, RDS, Direct Connect, AWS CLI, and REST APIs
  • Design and implement system architecture with the AWS cloud; develop automation scripts with ARM templates, Ansible, Chef, Python, and PowerShell; knowledge of AWS services, cloud design patterns, and cloud fundamentals like autoscaling and serverless
  • Experience with DevOps and Infrastructure as Code: AWS environment and application automation utilizing CloudFormation and third-party tools, and CI/CD pipeline setup
  • CI experience with the following is a must: Jenkins, Bitbucket/Git, Nexus or Artifactory, SonarQube, and WireMock or another mocking solution
  • Expert knowledge of Windows/Linux/macOS with at least 5-6 years of system administration experience
  • Should have strong skills in using JIRA
  • Should have knowledge of managing CI/CD pipelines on public cloud deployments using AWS
  • Should have strong skills in using tools like Jenkins, Docker, Kubernetes (AWS EKS, Azure AKS), and CloudFormation
  • Experience with monitoring tools like Pingdom, Nagios, etc.
  • Experience with reverse proxy services like Nginx and Apache
  • Desirable: experience with Bitbucket and version control tools like Git/SVN
  • Experience with manual/automated testing of desired application deployments
  • Experience with database technologies such as PostgreSQL and MySQL
  • Knowledge of Helm and Terraform
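
The infrastructure-as-code items above all share one idea: declare desired state, observe actual state, and compute the changes to apply. A tool-agnostic sketch of that diffing step (resource names invented; Terraform and CloudFormation implement this far more thoroughly):

```python
def plan(desired: dict, actual: dict) -> dict:
    """Diff desired vs actual resource maps, like a tiny `terraform plan`."""
    return {
        "create": sorted(set(desired) - set(actual)),
        "destroy": sorted(set(actual) - set(desired)),
        "update": sorted(k for k in set(desired) & set(actual)
                         if desired[k] != actual[k]),
    }

desired = {"vpc-main": {"cidr": "10.0.0.0/16"}, "sg-web": {"port": 443}}
actual = {"vpc-main": {"cidr": "10.0.0.0/16"}, "sg-old": {"port": 80}}
print(plan(desired, actual))
# {'create': ['sg-web'], 'destroy': ['sg-old'], 'update': []}
```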
US based product engineering company
Agency job
via Sagar Enterprises by Sagar Khamkar
Remote only
5 - 13 yrs
₹30L - ₹35L / yr
DevOps
Python
CI/CD
Amazon Web Services (AWS)
Docker
+10 more

Required Skills and Experience 

 

  • 4+ years of relevant experience with DevOps tools such as Jenkins, Ansible, Chef, etc.
  • 4+ years of experience in continuous integration/deployment and software tools development with Python, shell scripts, etc.
  • Building and running Docker images and deployment on Amazon ECS
  • Working with AWS services (EC2, S3, ELB, VPC, RDS, Cloudwatch, ECS, ECR, EKS)
  • Knowledge and experience working with container technologies such as Docker and Amazon ECS, EKS, Kubernetes
  • Experience with source code and configuration management tools such as Git, Bitbucket, and Maven
  • Ability to work with and support Linux environments (Ubuntu, Amazon Linux, CentOS)
  • Knowledge and experience in cloud orchestration tools such as AWS Cloudformation/Terraform etc
  • Experience with implementing "infrastructure as code", “pipeline as code” and "security as code" to enable continuous integration and delivery
  • Understanding of IAM, RBAC, NACLs, and KMS
  • Good communication skills

 

Good to have:

 

  • Strong understanding of security concepts and methodologies and the ability to apply them: SSH, public-key encryption, access credentials, certificates, etc.
  • Knowledge of database administration such as MongoDB.
  • Knowledge of maintaining and using tools such as Jira, Bitbucket, Confluence.
  • Work with Leads and Architects in designing and implementation of technical infrastructure, platform, and tools to support modern best practices and facilitate the efficiency of our development teams through automation, CI/CD pipelines, and ease of access and performance.
  • Establish and promote DevOps thinking, guidelines, best practices, and standards.
  • Contribute to architectural discussions, Agile software development process improvement, and DevOps best practices.

 
