
DevSecOps Engineer

Posted by sanjay CK
5 - 7 yrs
₹15L - ₹30L / yr
Bengaluru (Bangalore)
Skills
Amazon Web Services (AWS)
Kubernetes
DevSecOps

Hi,


We are looking for a candidate with experience in DevSecOps.

Please find the JD below for your reference.


Responsibilities:


Execute shell scripts for seamless automation and system management.

Implement infrastructure as code using Terraform for AWS, Kubernetes, Helm, kustomize, and kubectl.

Oversee AWS security groups, VPC configurations, and utilize Aviatrix for efficient network orchestration.

Contribute to the OpenTelemetry Collector for enhanced observability.

Implement microsegmentation using AWS native resources and Aviatrix for commercial routes.

Enforce policies through Open Policy Agent (OPA) integration.

Develop and maintain comprehensive runbooks for standard operating procedures.

Utilize packet tracing for network analysis and security optimization.

Apply OWASP tools and practices for robust web application security.

Integrate container vulnerability scanning tools seamlessly within CI/CD pipelines.

Define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.

Collaborate with software and platform engineers to infuse security principles into DevOps teams.

Regularly monitor and report project status to the management team.
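As an illustration of the container-vulnerability-scanning gate mentioned above, here is a minimal sketch of a CI step that fails the build when a scan report contains blocking findings. It assumes a Trivy-style JSON report layout (`Results` / `Vulnerabilities` / `Severity`); the report path and the blocking-severity policy are assumptions for the sketch, not part of this JD.

```python
import json
import sys

# Severities that fail the pipeline -- an assumed policy, not from the JD.
BLOCKING = {"CRITICAL", "HIGH"}

def count_blocking(report: dict) -> int:
    """Count findings at blocking severity in a Trivy-style JSON report."""
    total = 0
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            if vuln.get("Severity") in BLOCKING:
                total += 1
    return total

def gate(report: dict) -> int:
    """Return a CI exit code: 0 to pass the stage, 1 to fail it."""
    n = count_blocking(report)
    print(f"blocking vulnerabilities found: {n}")
    return 1 if n else 0

if __name__ == "__main__" and len(sys.argv) > 1:
    # e.g. produced by: trivy image --format json -o report.json myimage:tag
    with open(sys.argv[1]) as f:
        sys.exit(gate(json.load(f)))
```

In a pipeline this would run as one stage after the image build, so a non-zero exit blocks promotion of the image.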


Qualifications:


Proficient in shell scripting and automation.

Strong command of Terraform, AWS, Kubernetes, Helm, kustomize, and kubectl.

Deep understanding of AWS security practices, VPC configurations, and Aviatrix.

Familiarity with OpenTelemetry for observability and OPA for policy enforcement.

Experience in packet tracing for network analysis.

Practical application of OWASP tools and web application security.

Integration of container vulnerability scanning tools within CI/CD pipelines.

Proven ability to define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.

Collaboration expertise with DevOps teams for security integration.

Regular monitoring and reporting capabilities.

Site Reliability Engineering experience.

Hands-on proficiency with source code management tools, especially Git.

Cloud platform expertise (AWS, Azure, or GCP) with hands-on experience in deploying and managing applications.


Please send across your updated profile.


About Dhruv Compusoft Consultancy

Founded: 2005
Type: Services
Size: 100-1000
Stage: Profitable

About

Dhruv focuses mainly on Manufacturing, Lifecycle, Business Intelligence, Application Development, Web and Mobile, and Manufacturing Process.

Connect with the team

Susmitha Murthy
HR Dhruv

Company social profiles

Blog · LinkedIn · Facebook

Similar jobs

Variyas Labs Pvt. Ltd.
Rajan Agarwal
Posted by Rajan Agarwal
Delhi, Noida, Greater Noida
1 - 3 yrs
₹4L - ₹7L / yr
Kubernetes
OpenShift
ArgoCD
Jenkins
Linux administration

We are seeking a skilled and proactive Kubernetes Administrator with strong hands-on experience in managing Red Hat OpenShift environments. The ideal candidate will have a solid background in Kubernetes administration, ArgoCD, and Jenkins.

This role demands a self-motivated, quick learner who can confidently manage OpenShift-based infrastructure in production environments, communicate effectively with stakeholders, and escalate issues promptly when needed.


Key Skills & Qualifications

  • Strong experience with Red Hat OpenShift and Kubernetes administration (OpenShift or Kubernetes certification a plus).
  • Proven expertise in managing containerized workloads on OpenShift platforms.
  • Experience with ArgoCD, GitLab CI/CD, and Helm for deployment automation.
  • Ability to troubleshoot issues in high-pressure production environments.
  • Strong communication and customer-facing skills.
  • Quick learner with a positive attitude toward problem-solving.


Quantalent AI is hiring for a fast-growing fin-tech firm
Agency job
via Quantalent AI by Mubashira Sultana
Bengaluru (Bangalore)
6 - 8 yrs
₹30L - ₹41L / yr
DevOps
DNS
Kubernetes
SRE
Terraform
+5 more

Job Title: DevOps - 3


Roles and Responsibilities:


  • Develop a deep understanding of the end-to-end configurations, dependencies, customer requirements, and overall characteristics of the production services as the accountable owner for overall service operations
  • Implement best practices, challenge the status quo, and keep tabs on industry and technical trends, changes, and developments to ensure the team is always striving for best-in-class work
  • Lead incident response efforts, working closely with cross-functional teams to resolve issues quickly and minimize downtime. Implement effective incident management processes and post-incident reviews
  • Participate in the on-call rotation, ensuring timely identification and resolution of infrastructure issues
  • Design and implement capacity plans, accurately estimating costs and efforts for infrastructure needs
  • Maintain and own systems and infrastructure for production environments, with a continued focus on improving efficiency, availability, and supportability through automation and well-defined runbooks
  • Provide mentorship and guidance to a team of DevOps engineers, fostering a collaborative and high-performing work environment. Mentor team members in best practices, technologies, and methodologies
  • Design for reliability: architect and implement solutions that keep the infrastructure running with always-on availability and ensure a high uptime SLA
  • Manage individual project priorities, deadlines, and deliverables related to your technical expertise and assigned domains
  • Collaborate with Product and Information Security teams to ensure the integrity and security of infrastructure and applications. Implement security best practices and compliance standards
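The high-uptime-SLA ownership described above is commonly quantified with an error budget. A minimal sketch of that arithmetic (the 30-day window and the SLO values below are illustrative assumptions):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Downtime allowed (in minutes) over the window for a given availability SLO."""
    return window_days * 24 * 60 * (1 - slo)

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent; negative means the SLO is breached."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget
```

For example, a 99.9% SLO over a 30-day window allows roughly 43.2 minutes of downtime; spending 21.6 of them leaves half the budget.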


Must Haves

  • 5-8 years of experience as a DevOps / SRE / Platform Engineer.
  • Strong expertise in automating Infrastructure provisioning and configuration using tools like Ansible, Packer, Terraform, Docker, Helm Charts etc.
  • Strong skills in network services such as DNS, TLS/SSL, HTTP, etc
  • Expertise in managing large-scale cloud infrastructure (preferably AWS and Oracle)
  • Expertise in managing production grade Kubernetes clusters
  • Experience in scripting using programming languages like Bash, Python, etc.
  • Expertise with centralized logging, metrics, and tooling frameworks such as ELK, Prometheus/VictoriaMetrics, and Grafana.
  • Experience building and managing high-scale API gateways, service meshes, etc.
  • Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive
  • Have a working knowledge of a backend programming language
  • Deep knowledge of and experience with Unix/Linux operating system internals (e.g., filesystems, user management, etc.)
  • A working knowledge and deep understanding of cloud security concepts
  • Proven track record of driving results and delivering high-quality solutions in a fast-paced environment
  • Demonstrated ability to communicate clearly with both technical and non-technical project stakeholders, with the ability to work effectively in a cross-functional team environment. 


Building video-calling as a service
Agency job
via Qrata by Blessy Fernandes
Remote only
3 - 5 yrs
₹30L - ₹40L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more
Location - Bangalore (Remote)
Experience - 2+ Years
Requirements:

● Should have at least 2 years of DevOps experience
● Should have experience with Kubernetes
● Should have experience with Terraform/Helm
● Should have experience in building scalable server-side systems
● Should have experience in cloud infrastructure and designing databases
● Having experience with NodeJS/TypeScript/AWS is a bonus
● Having experience with WebRTC is a bonus
Information Technology Services
Agency job
via Jobdost by Sathish Kumar
Pune
5 - 9 yrs
₹10L - ₹30L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+8 more
Preferred Education & Experience:
• Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience in lieu of the above if from a different stream of education.
• Well-versed in DevOps principles & practices, with hands-on DevOps tool-chain integration experience: Release Orchestration & Automation, Source Code & Build Management, Code Quality & Security Management, Behavior Driven Development, Test Driven Development, Continuous Integration, Continuous Delivery, Continuous Deployment, and Operational Monitoring & Management; extra points if you can demonstrate your knowledge with working examples.
• Hands-on, demonstrable working experience with DevOps tools and platforms, viz., Slack, Jira, Git, Jenkins, Code Quality & Security Plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
• Well-versed in Virtualization & Containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
• Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS and/or Azure and/or Google Cloud in any of these categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, or Security, Identity, & Compliance (or equivalent demonstrable cloud platform experience).
• Well-versed, with demonstrable working experience, in API Management, API Gateway, Service Mesh, Identity & Access Management, and Data Protection & Encryption tools & platforms.
• Hands-on programming experience in core Java and/or Python and/or JavaScript and/or Scala; freshers passing out of college or lateral movers into IT must be able to code in languages they have studied.
• Well-versed in Storage, Networks, and Storage Networking basics, which will enable you to work in a Cloud environment.
• Well-versed in Network, Data, and Application Security basics, which will enable you to work in a Cloud as well as a Business Applications / API services environment.
• Extra points if you are certified in AWS and/or Azure and/or Google Cloud.
Thinqor
Ravikanth Dangeti
Posted by Ravikanth Dangeti
Bengaluru (Bangalore)
5 - 20 yrs
₹20L - ₹22L / yr
Amazon Web Services (AWS)
EKS
Terraform
DataDog
+3 more

General Description:


Owns all technical aspects of software development for assigned applications.

Participates in the design and development of systems & application programs.

Functions as a senior member of an agile team and helps drive consistent development practices: tools, common components, and documentation.


Required Skills:


In-depth experience configuring and administering EKS clusters in AWS.

In-depth experience configuring DataDog in AWS environments, especially in EKS.

In-depth understanding of OpenTelemetry and configuration of OpenTelemetry Collectors.

In-depth knowledge of observability concepts and strong troubleshooting experience.

Experience implementing comprehensive monitoring and logging solutions in AWS using CloudWatch.

Experience with Terraform and infrastructure as code.

Experience with Helm.

Strong scripting skills in Shell and/or Python.

Experience with large-scale distributed systems and architecture knowledge (Linux/UNIX and Windows operating systems, networking, storage) in a cloud computing or traditional IT infrastructure environment.

A good understanding of cloud concepts (storage/compute/network).

Experience collaborating with several cross-functional teams to architect observability pipelines for various GCP services like GKE, Cloud Run, BigQuery, etc.

Experience with Git and GitHub.

Proficient in developing and maintaining technical documentation, ADRs, and runbooks.
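As a small illustration of the observability concepts this role calls for, here is a nearest-rank percentile over latency samples, the kind of aggregation monitoring tools such as DataDog or CloudWatch compute; the nearest-rank method, the p95 default, and millisecond units are assumptions for the sketch.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile (p in (0, 100]) of a list of latency samples."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based nearest rank
    return ordered[max(rank - 1, 0)]

def breaches_slo(samples, p=95, threshold_ms=250):
    """True when the pth-percentile latency exceeds the threshold."""
    return percentile(samples, p) > threshold_ms
```

Percentiles are preferred over averages here because a mean hides the tail latency that users actually experience.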


Molecular Connections
Posted by Molecular Connections
Bengaluru (Bangalore)
3 - 5 yrs
₹5L - ₹10L / yr
DevOps
Kubernetes
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Windows Azure
+3 more

About the Role:

We are seeking a talented and passionate DevOps Engineer to join our dynamic team. You will be responsible for designing, implementing, and managing scalable and secure infrastructure across multiple cloud platforms. The ideal candidate will have a deep understanding of DevOps best practices and a proven track record in automating and optimizing complex workflows.


Key Responsibilities:


Cloud Management:

  • Design, implement, and manage cloud infrastructure on AWS, Azure, and GCP.
  • Ensure high availability, scalability, and security of cloud resources.

Containerization & Orchestration:

  • Develop and manage containerized applications using Docker.
  • Deploy, scale, and manage Kubernetes clusters.

CI/CD Pipelines:

  • Build and maintain robust CI/CD pipelines to automate the software delivery process.
  • Implement monitoring and alerting to ensure pipeline efficiency.

Version Control & Collaboration:

  • Manage code repositories and workflows using Git.
  • Collaborate with development teams to optimize branching strategies and code reviews.

Automation & Scripting:

  • Automate infrastructure provisioning and configuration using tools like Terraform, Ansible, or similar.
  • Write scripts to optimize and maintain workflows.

Monitoring & Logging:

  • Implement and maintain monitoring solutions to ensure system health and performance.
  • Analyze logs and metrics to troubleshoot and resolve issues.
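One common way to implement the monitoring and alerting responsibilities above is to debounce transient failures: page only after several consecutive failed health checks, so a single blip does not wake anyone up. A minimal sketch (the threshold of 3 is an illustrative assumption):

```python
def should_page(checks, threshold=3):
    """Alert only after `threshold` consecutive failed health checks.

    `checks` is an ordered sequence of booleans (True = check passed);
    a single transient failure therefore never triggers a page.
    """
    streak = 0
    for ok in checks:
        streak = 0 if ok else streak + 1  # reset the streak on any success
        if streak >= threshold:
            return True
    return False
```

Real alerting stacks (Prometheus `for:` clauses, CloudWatch alarm periods) express the same idea as "condition must hold for N evaluation periods".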


Required Skills & Qualifications:

  • 3-5 years of experience with AWS, Azure, and Google Cloud Platform (GCP).
  • Proficiency in containerization tools like Docker and orchestration tools like Kubernetes.
  • Hands-on experience building and managing CI/CD pipelines.
  • Proficient in using Git for version control.
  • Experience with scripting languages such as Bash, Python, or PowerShell.
  • Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
  • Solid understanding of networking, security, and system administration.
  • Excellent problem-solving and troubleshooting skills.
  • Strong communication and teamwork skills.


Preferred Qualifications:

  • Certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, or Google Professional DevOps Engineer.
  • Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
  • Familiarity with serverless architectures and microservices.


Product-Based Company in Logistics
Agency job
via Qrata by Rayal Rajan
Mumbai, Navi Mumbai
8 - 10 yrs
₹20L - ₹30L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Linux/Unix
+14 more

Role: Principal DevOps Engineer


About the Client


It is a product-based company building a platform using AI and ML technology for transportation and logistics. It also has a presence in the global market.


Responsibilities and Requirements


• Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure

• Knowledge in Linux/Unix Administration and Python/Shell Scripting

• Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure

• Knowledge of deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab), and monitoring tools like Zabbix, CloudWatch, and Nagios

• Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms

• Experience in enterprise application development, maintenance and operations

• Knowledge of best practices and IT operations in an always-up, always-available service

• Excellent written and oral communication skills, judgment, and decision-making skills
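Of the topics listed above, caching mechanisms lend themselves to a compact sketch. Here is a minimal least-recently-used (LRU) cache; the capacity and eviction policy shown are illustrative assumptions, not a specific product's implementation.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: evicts the oldest entry at capacity."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the least recently used entry
```

The same eviction idea underlies production caches (e.g. memcached's slab LRU or Redis's approximated-LRU policies), just with concurrency and memory accounting added.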

Acceldata
Richa Kukar
Posted by Richa Kukar
Bengaluru (Bangalore)
5 - 9 yrs
₹10L - ₹30L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+2 more

Acceldata is creating the data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate data platforms at petabyte scale. Our customers are Fortune 500 companies, including Asia's largest telecom company, a unicorn fintech startup in India, and many more. We are lean, hungry, customer-obsessed, and growing fast. Our Solutions team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.

 

We are building software that can provide insights into companies' data operations and allows them to focus on delivering data reliably with speed and effectiveness. Join us in building an industry-leading data observability platform that focuses on ensuring data reliability from every spectrum (compute, data and pipeline) of a cloud or on-premise data platform.

 

Position Summary-

 

This role will support customer implementations of a data quality and reliability product. The candidate is expected to install the product in the client environment, manage proofs of concept with prospects, become a product expert, and troubleshoot post-installation production issues. The role involves significant interaction with the client's data engineering team, and the candidate is expected to have good communication skills.

 

Required experience

  1. 6-7 years of experience providing engineering support to data domains/pipelines/data engineers.
  2. Experience troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues.
  3. Experience setting up enterprise security solutions, including active directories, firewalls, SSL certificates, Kerberos KDC servers, etc.
  4. Basic understanding of SQL.
  5. Experience working with technologies like S3; Kubernetes experience preferred.
  6. Databricks/Hadoop/Kafka experience preferred but not required.

upraisal
Bengaluru (Bangalore)
5 - 15 yrs
₹5L - ₹50L / yr
DevOps
Kubernetes
Docker
Shell Scripting
Perl
+5 more
  • Work towards improving four verticals for the company's workflows and products: scalability, availability, security, and cost.
  • Help provision, manage, and optimize cloud infrastructure in AWS (IAM, EC2, RDS, CloudFront, S3, ECS, Lambda, ELK, etc.)
  • Work with the development teams to design scalable, robust systems using cloud architecture for both 0-to-1 and 1-to-100 products.
  • Drive technical initiatives and architectural service improvements.
  • Predict problems and implement solutions that detect and prevent outages.
  • Mentor/manage a team of engineers.
  • Design solutions with failure scenarios in mind to ensure reliability.
  • Document rigorously to keep track of all changes/upgrades to the infrastructure, and share knowledge with the rest of the team.
  • Identify vulnerabilities during development, with actionable information to empower developers to remediate them.
  • Automate the build and testing processes to integrate code consistently.
  • Manage changes to documents, software, images, large websites, and other collections of code, configuration, and metadata among disparate teams.
Neurosensum
Tanuj Diwan
Posted by Tanuj Diwan
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 3 yrs
₹4L - ₹10L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+1 more

At Neurosensum we are committed to making customer feedback more actionable. We have developed a platform called SurveySensum, which beats the conventional market research turnaround time.

SurveySensum is becoming a great tool not only to capture feedback but also to extract useful insights through quick workflow setups and dashboards. We have more than 7 channels through which we can collect feedback. This pushes us to challenge conventional software design principles. The team likes to grind, and members help each other through tough situations.

Day-to-day responsibilities include:

  1. Work on the deployment of code via Bitbucket, AWS CodeDeploy, and manual methods
  2. Work on Linux/Unix OS and multi-tech application patching
  3. Manage, coordinate, and implement software upgrades, patches, and hotfixes on servers
  4. Create and modify scripts or applications to perform tasks
  5. Provide input on ways to improve the stability, security, efficiency, and scalability of the environment
  6. Ease developers' lives so that they can focus on business logic rather than deploying and maintaining it
  7. Manage the release of each sprint
  8. Educate the team on best practices
  9. Find ways to avoid human error and save time by automating processes using Terraform, CloudFormation, Bitbucket Pipelines, CodeDeploy, and scripting
  10. Implement cost-effective measures on the cloud and minimize existing costs
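The patching duties above usually start with deciding which hosts are below a required version. A minimal sketch of that comparison (the hostnames, versions, and three-part numeric version scheme are hypothetical):

```python
def parse_version(v: str):
    """'1.4.10' -> (1, 4, 10), so comparisons are numeric, not lexicographic."""
    return tuple(int(part) for part in v.split("."))

def hosts_needing_patch(installed: dict, minimum: str):
    """Return hostnames whose installed version is below the required minimum."""
    target = parse_version(minimum)
    return sorted(host for host, ver in installed.items()
                  if parse_version(ver) < target)
```

Parsing into integer tuples matters: comparing the raw strings would wrongly rank "1.10.0" below "1.9.0".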

Skills and prerequisites

  1. OOPS knowledge
  2. Problem-solving nature
  3. Willingness to do R&D
  4. Works with the team and supports their queries patiently
  5. Brings new things to the table; stays updated
  6. Prioritizes solutions over problems
  7. Willingness to learn and experiment
  8. Techie at heart
  9. Git basics
  10. Basic AWS or any cloud platform: creating and managing EC2, Lambda, IAM, S3, etc.
  11. Basic Linux handling
  12. Docker and orchestration (great to have)
  13. Scripting: Python (preferably) or Bash