1. Should have worked with AWS, Docker, and Kubernetes.
2. Should have worked with a scripting language.
3. Should know how to monitor system performance, including CPU and memory usage (see the sketch after this list).
4. Should be able to troubleshoot issues independently.
5. Should have knowledge of automated deployment.
6. Should be proficient in one programming language - Python preferred.
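As an illustration of points 3 and 6, here is a minimal Python sketch (assuming the third-party psutil library is installed) that reports CPU and memory usage:

import psutil  # third-party library for reading system metrics

def report_usage():
    # Sample CPU utilisation over one second and read current memory usage
    cpu_percent = psutil.cpu_percent(interval=1)
    memory = psutil.virtual_memory()
    print(f"CPU: {cpu_percent:.1f}%  Memory: {memory.percent:.1f}% of {memory.total // (1024 ** 2)} MiB")

if __name__ == "__main__":
    report_usage()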

Senior DevSecOps Engineer (Cybersecurity & VAPT) - Arcitech AI
Arcitech AI, located in Mumbai's bustling Lower Parel, is a trailblazer in software and IT, specializing in software development, AI, mobile apps, and integrative solutions. Committed to excellence and innovation, Arcitech AI offers incredible growth opportunities for team members. Enjoy unique perks like weekends off and a provident fund. Our vibrant culture is friendly and cooperative, fostering a dynamic work environment that inspires creativity and forward-thinking. Join us to shape the future of technology.
Full-time
Navi Mumbai, Maharashtra, India
5+ Years Experience
₹12,00,000 - 14,00,000
Job Title: Senior DevSecOps Engineer (Cybersecurity & VAPT)
Location: Vashi, Navi Mumbai (On-site)
Shift: 10:00 AM - 7:00 PM
Experience: 5+ years
Salary: INR 12,00,000 - 14,00,000
Job Summary
We are hiring a Senior DevSecOps Engineer with strong cloud, CI/CD, and automation skills and hands-on experience in cybersecurity and VAPT to manage deployments, secure infrastructure, and support DevSecOps initiatives.
Key Responsibilities
Cloud & Infrastructure
- Manage deployments on AWS/Azure
- Maintain Linux servers & cloud environments
- Ensure uptime, performance, and scalability
CI/CD & Automation
- Build and optimize pipelines (Jenkins, GitHub Actions, GitLab CI/CD)
- Automate tasks using Bash/Python
- Implement IaC (Terraform/CloudFormation)
Containerization
- Build and run Docker containers
- Work with basic Kubernetes concepts
Cybersecurity & VAPT
- Perform Vulnerability Assessment & Penetration Testing
- Identify, track, and mitigate security vulnerabilities
- Implement hardening and support DevSecOps practices
- Assist with firewall/security policy management
Monitoring & Troubleshooting
- Use ELK, Prometheus, Grafana, CloudWatch
- Resolve cloud, deployment, and infra issues
Cross-Team Collaboration
- Work with Dev, QA, and Security for secure releases
- Maintain documentation and best practices
Required Skills
- AWS/Azure, Linux, Docker
- CI/CD tools: Jenkins, GitHub Actions, GitLab
- Terraform / IaC
- VAPT experience plus an understanding of OWASP and cloud security (see the sketch after this list)
- Bash/Python scripting
- Monitoring tools (ELK, Prometheus, Grafana)
- Strong troubleshooting & communication
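As a hedged illustration of the hardening and VAPT items above, a minimal Python sketch (assuming boto3 is installed and AWS credentials are configured; the region is an assumption) that flags security groups exposing SSH to the internet:

import boto3  # AWS SDK for Python

def find_open_ssh_groups(region="ap-south-1"):  # region chosen for illustration only
    ec2 = boto3.client("ec2", region_name=region)
    open_groups = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            # Flag inbound rules that allow port 22 from anywhere (0.0.0.0/0)
            if rule.get("FromPort") == 22 and any(
                ip_range.get("CidrIp") == "0.0.0.0/0" for ip_range in rule.get("IpRanges", [])
            ):
                open_groups.append(sg["GroupId"])
    return open_groups

if __name__ == "__main__":
    print("Security groups with SSH open to the world:", find_open_ssh_groups())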
Job Description
Role: Sr. DevOps – Architect
Location: Bangalore
Who are we looking for?
A senior-level DevOps consultant with deep DevOps expertise. The individual should be passionate about technology and demonstrate depth and breadth of expertise from similar roles, including enterprise systems and enterprise architecture frameworks.
Technical Skills:
• 8+ years of relevant DevOps/Operations/Development experience working in an Agile DevOps culture on large-scale distributed systems.
• Experience in building a DevOps platform by integrating the DevOps tool chain using REST/SOAP/ESB technologies (a minimal REST sketch follows this skills list).
• Required: hands-on programming skills in developing automation modules using one of these scripting languages: Python/Perl/Ruby/Bash.
• Required: hands-on experience with a public cloud such as AWS, Azure, OpenStack, or Pivotal Cloud Foundry; Azure experience is a must.
• Experience working with more than one configuration management tool (Chef/Puppet/Ansible) and building your own cookbooks/manifests is required.
• Experience with Docker and Kubernetes.
• Experience in Building CI/CD pipelines using any of the continuous integration tools like Jenkins, Bamboo etc.
• Experience with planning tools like Jira, Rally etc.
• Hands-on experience with continuous integration and build tools (Jenkins, Bamboo, CruiseControl, etc.), version control systems (Git, SVN, GitHub, TFS, etc.), build automation tools (Maven/Gradle/Ant), and dependency management tools (Artifactory/Nexus).
• Experience with more than one deployment automation tool, such as IBM UrbanCode, CA Automic, or XL Deploy.
• Experience setting up and managing DevOps tools for repositories, monitoring, and log analysis (New Relic, Splunk, AppDynamics, etc.).
• Understanding of applications, networking, and open-source tools.
• Experience with the security side of DevOps, i.e. DevSecOps.
• Good to have an understanding of microservices architecture.
• Experience working with remote/offshore teams is a huge plus.
• Experience in building dashboards based on the latest JS technologies like Node.js.
• Experience with NoSQL databases like MongoDB.
• Experience in working with REST APIs.
• Experience with tools like npm and Gulp.
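As a rough sketch of the kind of REST-based tool-chain integration described above (the Jenkins URL, job name, and credentials are hypothetical; Jenkins exposes a build-trigger endpoint of this shape):

import requests  # HTTP client for calling DevOps tool APIs

JENKINS_URL = "https://jenkins.example.com"  # hypothetical server
JOB_NAME = "sample-pipeline"                 # hypothetical job
AUTH = ("ci-user", "api-token")              # hypothetical credentials

def trigger_build():
    # A parameterless Jenkins job is triggered via POST /job/<name>/build
    response = requests.post(f"{JENKINS_URL}/job/{JOB_NAME}/build", auth=AUTH, timeout=30)
    response.raise_for_status()
    # Jenkins returns the queue location of the new build in the Location header
    return response.headers.get("Location")

if __name__ == "__main__":
    print("Build queued at:", trigger_build())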
Process Skills:
• Ability to perform rapid assessments of clients’ internal technology landscapes and identify use cases and deployment targets
• Develop program blueprints, case studies, and supporting technical documentation so that DevOps offerings can be commercialized and replicated across different business customers
• Compile, deliver, and evangelize roadmaps that guide the evolution of services
• Grasp and communicate big-picture, enterprise-wide issues to the team
• Experience working in an Agile / Scrum / SAFe environment preferred
Behavioral Skills :
• Should have directly worked on creating enterprise-level operating models and architecture options
• Model as-is and to-be architectures based on business requirements
• Good communication & presentation skills
• Self-driven + Disciplined + Organized + Result Oriented + Focused & Passionate about work
• Flexible for short term travel
Primary Duties / Responsibilities:
• Build Automations and modules for DevOps platform
• Build integrations between various DevOps tools
• Interface with other teams to provide support and understand the overall vision of the transformation platform.
• Understand customer deployment scenarios, and continuously improve and update the platform based on agile requirements.
• Preparing high-level designs (HLDs) and low-level designs (LLDs).
• Presenting status to leadership and key stakeholders at regular intervals
Qualification:
• 12+ years of work experience in software development.
• 5+ years industry experience in DevOps architecture related to Continuous Integration/Delivery solutions, Platform Automation including technology consulting experience
• Education qualification: B.Tech, BE, BCA, MCA, M. Tech or equivalent technical degree from a reputed college
Please Apply - https://zrec.in/GzLLD?source=CareerSite
About Us
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture in their organization by focusing on a long-term DevOps roadmap. We focus on identifying technical and cultural issues in the journey of successfully implementing DevOps practices in the organization and work with the respective teams to fix those issues and increase overall productivity. We also run training sessions for developers to help them understand the importance of DevOps.
Our services include DevOps, DevSecOps, FinOps, Cost Optimization, CI/CD, Observability, Cloud Security, Containerization, Cloud Migration, Site Reliability, Performance Optimization, SIEM and SecOps, Serverless Automation, Well-Architected Reviews, MLOps, and Governance, Risk & Compliance.
We assess a technology company's architecture, security, governance, compliance, and DevOps maturity model, and help them optimize their cloud cost, streamline their technology architecture, and set up processes to improve the availability and reliability of their website and applications. We set up tools for monitoring, logging, and observability, and focus on bringing the DevOps culture to the organization to improve its efficiency and delivery.
Job Description
Job Title: DevOps Engineer AWS
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 2-4 years
Education: B.Tech/MCA/BCA
Notice Period: Immediately
Infra360.io is searching for a DevOps Engineer to lead our group of IT specialists in maintaining and improving our software infrastructure. You'll collaborate with software engineers, QA engineers, and other IT professionals in deploying, automating, and managing the software infrastructure. As a DevOps Engineer, you will also be responsible for setting up CI/CD pipelines, monitoring programs, and cloud infrastructure.
Below is a detailed description of the roles, responsibilities, and expectations for the role.
Tech Stack :
- Kubernetes: Deep understanding of Kubernetes clusters, container orchestration, and its architecture.
- Terraform: Extensive hands-on experience with Infrastructure as Code (IaC) using Terraform for managing cloud resources.
- ArgoCD: Experience in continuous deployment and using ArgoCD to maintain GitOps workflows.
- Helm: Expertise in Helm for managing Kubernetes applications.
- Cloud Platforms: Expertise in AWS, GCP or Azure will be an added advantage.
- Debugging and Troubleshooting: The DevOps Engineer must be proficient in identifying and resolving complex issues in a distributed environment, ranging from networking issues to misconfigurations in infrastructure or application components (see the sketch after this list).
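As a small troubleshooting sketch (assuming the official kubernetes Python client and a reachable kubeconfig), listing pods that are not in the Running phase:

from kubernetes import client, config  # official Kubernetes Python client

def list_unhealthy_pods():
    # Assumes a local kubeconfig; use config.load_incluster_config() when running inside a pod
    config.load_kube_config()
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase != "Running":
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")

if __name__ == "__main__":
    list_unhealthy_pods()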
Key Responsibilities:
- CI/CD and configuration management
- Doing RCA of production issues and providing resolution
- Setting up failover, DR, backups, logging, monitoring, and alerting
- Containerizing different applications on the Kubernetes platform
- Capacity planning for different environments' infrastructure
- Ensuring zero outages of critical services
- Database administration of SQL and NoSQL databases
- Infrastructure as Code (IaC)
- Keeping the cost of the infrastructure to a minimum
- Setting up the right set of security measures
Ideal Candidate Profile:
- A graduate/postgraduate degree in Computer Science or related fields
- 2-4 years of strong DevOps experience in a Linux environment.
- Strong interest in working in our tech stack
- Excellent communication skills
- Able to work with minimal supervision; a self-starter
- Hands-on experience with at least one of the scripting languages - Bash, Python, Go etc
- Experience with version control systems like Git
- Strong experience with Amazon Web Services (EC2, RDS, VPC, S3, Route53, IAM, etc.)
- Strong experience managing production systems day in and day out
- Experience finding and fixing issues across different layers of the architecture in a production environment
- Knowledge of SQL and NoSQL databases, ElasticSearch, Solr etc.
- Knowledge of Networking, Firewalls, load balancers, Nginx, Apache etc.
- Experience in automation tools like Ansible/SaltStack and Jenkins
- Experience with the Docker/Kubernetes platform and managing OpenStack (desirable)
- Experience with HashiCorp tools, e.g. Vault, Vagrant, Terraform, Consul, VirtualBox, etc. (desirable)
- Experience managing/mentoring a small team of 2-3 people (desirable)
- Experience with monitoring tools like Prometheus/Grafana/Elastic APM (a minimal exporter sketch follows this list).
- Experience with logging tools like ELK/Loki.
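As a hedged illustration of the monitoring item above, a minimal Python exporter (assuming the prometheus_client library; the metric name, value, and port are placeholders) that a Prometheus server could scrape:

import time
from prometheus_client import Gauge, start_http_server  # Prometheus instrumentation library

# Placeholder metric; a real exporter would expose application-specific values
QUEUE_DEPTH = Gauge("example_queue_depth", "Items waiting in a hypothetical queue")

if __name__ == "__main__":
    start_http_server(8000)  # metrics are served at http://localhost:8000/metrics
    while True:
        QUEUE_DEPTH.set(42)  # placeholder value; replace with a real measurement
        time.sleep(15)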
Required Skills
• Automation is a part of your daily functions, so thorough familiarity with Unix Bourne shell scripting and Python is a critical survival skill.
• Integration and maintenance of automated tools
• Strong analytical and problem-solving skills
• Working experience with source control tools such as Git/GitHub/GitLab/TFS
• Have experience with modern virtualization technologies (Docker, KVM, AWS, OpenStack, or any orchestration platforms)
• Automation of deployment, customization, upgrades, and monitoring through modern DevOps tools (Ansible, Kubernetes, OpenShift, etc.)
• Advanced Linux admin experience
• Using Jenkins or similar tools
• Deep understanding of container orchestration (preferably Kubernetes)
• Strong knowledge of object storage (preferably Ceph on Rook)
• Experience in installing, managing, and tuning microservices environments using Kubernetes and Docker, both on-premises and in the cloud.
• Experience in deploying and managing Spring Boot applications.
• Experience in deploying and managing Python applications using Django, FastAPI, or Flask (see the sketch after this list).
• Experience in deploying machine learning pipelines/data pipelines using Airflow/Kubeflow/MLflow.
• Experience with web servers and reverse proxies like Nginx, Apache HTTP Server, and HAProxy
• Experience in monitoring tools like Prometheus, Grafana.
• Experience in provisioning & maintaining SQL/NoSQL databases.
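A minimal sketch of the kind of Python service mentioned above: a FastAPI app exposing a health endpoint, typically run with uvicorn behind an Nginx reverse proxy (module and route names are illustrative):

from fastapi import FastAPI  # lightweight ASGI web framework

app = FastAPI()

@app.get("/health")
def health():
    # Probed by the reverse proxy, load balancer, or Kubernetes liveness checks
    return {"status": "ok"}

# Run locally with:  uvicorn app:app --host 0.0.0.0 --port 8000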
Desired Skills
• Configuration software: Ansible
• Excellent communication and collaboration skills
• Good experience with networking technologies like load balancers, ACLs, firewalls, VIPs, and DNS
• Programmatic experience with AWS, DO, or GCP storage & machine images
• Experience with various Linux distributions
• Knowledge of Azure DevOps Server
• Docker management and troubleshooting
• Familiarity with micro-services and RESTful systems
• AWS / GCP / Azure certification
• Interact with Engineering to support, maintain, and design backend infrastructure for product support
• Create fully automated global cloud infrastructure that spans multiple regions.
• A great learning attitude toward the newest technologies and a team player
As an MLOps Engineer at QuantumBlack you will:
- Develop and deploy technology that enables data scientists and data engineers to build, productionize, and deploy machine learning models following best practices. Work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams.
- Choose and use the right cloud services, DevOps tooling, and ML tooling so the team can produce high-quality code and release to production.
- Build modern, scalable, and secure CI/CD pipelines to automate the development and deployment workflows used by data scientists (ML pipelines) and data engineers (data pipelines).
- Shape and support next-generation technology that enables scaling ML products and platforms. Bring expertise in cloud to enable ML use case development, including MLOps.
Our Tech Stack
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React, and TypeScript, amongst others, in our projects.
Key Skills:
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store); a minimal experiment-tracking sketch follows this list
• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or more preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
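A minimal experiment-tracking sketch using MLflow, one of the MLOps tools named above (the experiment name, parameter, and metric values are placeholders):

import mlflow  # experiment tracking from the MLflow suite

mlflow.set_experiment("example-experiment")  # placeholder experiment name

with mlflow.start_run():
    # Record hyperparameters and a resulting metric for later comparison in the MLflow UI
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("validation_accuracy", 0.93)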
We are front runners of the technological revolution with an inexhaustible passion for technology! DevOn is the technical organization that originated from Prowareness. We are the company at the forefront of leading DevOps transformations and setting up High Performance Distributed DevOps teams with leading companies worldwide. DevOn helps market leaders to take the next step in software delivery. We consist of a dynamic team, in which personal growth is central!
About You
You have 6+ years of experience in AWS infrastructure automation. This is a fantastic opportunity to work in a fast-paced operations environment and to develop your career in cloud technologies, particularly Amazon Web Services.
You will build and monitor CI/CD pipelines in the AWS cloud for a highly scalable backend application built on the Java platform. We need someone who can troubleshoot, diagnose, and rectify system and service issues.
You're cloud-native and use Terraform as the key orchestration tool for Infrastructure as Code.
You're comfortable driving. You prefer to own your work streams and enjoy working autonomously to progress towards your goals.
You provide incredible support to the team. You sweat the small stuff but keep the big picture in mind. You know that pair programming can give better results.
The ideal candidate:
This is a key role within our DevOps team and will involve working as part of a collaborative agile team in a shared-services DevOps organization to support and deliver innovative technology solutions that directly align with the delivery of business value and an enhanced customer experience. The primary objective is to support the Amazon Web Services hosted environment, ensure continuous availability, and work closely with development teams to ensure best value for money and effective estate management.
- Set up CI/CD pipelines from scratch, along with integration of appropriate quality gates.
- Expert-level knowledge of the AWS cloud; provision and configure infrastructure as code using Terraform.
- Build and configure Kubernetes-based infrastructure, networking policies, LBs, and cluster security. Define autoscaling and cost strategies.
- Automate the build of containerized systems with CI/CD tooling, Helm charts, and more
- Manage deployments and rollbacks of applications
- Implement monitoring and metrics with CloudWatch and New Relic (see the sketch after this list)
- Troubleshoot and optimize containerized workload deployments for clients
- Automate operational tasks, and assist in the transition to service ownership models.
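As a hedged sketch of the monitoring item above (assuming boto3 is installed and AWS credentials are configured; the namespace and metric name are hypothetical), publishing a custom metric to CloudWatch:

import boto3  # AWS SDK for Python

cloudwatch = boto3.client("cloudwatch")

def publish_queue_depth(value: float):
    # Custom metrics appear under the chosen namespace in the CloudWatch console
    cloudwatch.put_metric_data(
        Namespace="Example/App",         # hypothetical namespace
        MetricData=[{
            "MetricName": "QueueDepth",  # hypothetical metric
            "Value": value,
            "Unit": "Count",
        }],
    )

if __name__ == "__main__":
    publish_queue_depth(3)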
Job Description:
○ Develop best practices for the team and be responsible for architecture solutions and documentation operations, in order to meet the engineering department's quality and standards
○ Participate in production outages, handle complex issues, and work towards resolution
○ Develop custom tools and integrations with existing tools to increase engineering productivity
Required Experience and Expertise
○ Good knowledge of Terraform and experience working on large TF code bases.
○ Deep understanding of Terraform best practices and writing TF modules.
○ Hands-on experience with GCP and AWS, and knowledge of AWS services like VPC and VPC-related services (route tables, VPC endpoints, PrivateLink), plus EKS, S3, and IAM. A cost-aware mindset towards cloud services.
○ Deep understanding of Kernel, Networking and OS fundamentals
Notice Period: Max 30 days
Experience: 5+ years
Skills Required:
- Experience in Azure administration, configuration, and deployment of Windows/Linux VM/container-based infrastructure
- Scripting/programming in Python, JavaScript/TypeScript, and C
- Scripting with PowerShell, Azure CLI, and shell scripts
- Identity, Access Management, and the RBAC model
- Virtual networking, storage, and compute resources
- Azure database technologies; monitoring and analytics tools in Azure
- Azure DevOps based CI/CD build pipelines integrated with GitHub – Java and Node.js
- Test automation and other CI/CD tools
- Azure infrastructure using ARM templates / Terraform
● Develop and deliver automation software required for building & improving the functionality, reliability, availability, and manageability of applications and cloud platforms
● Champion and drive the adoption of Infrastructure as Code (IaC) practices and mindset
● Design, architect, and build self-service, self-healing, synthetic monitoring and alerting platforms and tools
● Automate the development and test automation processes through CI/CD pipeline (Git, Jenkins, SonarQube, Artifactory, Docker containers)
● Build container hosting-platform using Kubernetes
● Introduce new cloud technologies, tools, and processes to keep innovating in the commerce area and drive greater business value.
Skills Required:
● Excellent written and verbal communication skills and a good listener.
● Proficiency in deploying and maintaining Cloud based infrastructure services (AWS, GCP, Azure – good hands-on experience in at least one of them)
● Well versed with service-oriented architecture, cloud-based web services architecture, design patterns and frameworks.
● Good knowledge of cloud-related services like compute, storage, network, messaging (e.g., SNS, SQS), and automation (e.g., CFT/Terraform); a small SQS sketch follows this list.
● Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
● Experience in systems management/automation tools (Puppet/Chef/Ansible, Terraform)
● Strong Linux System Admin Experience with excellent troubleshooting and problem solving skills
● Hands-on experience with languages (Bash/Python/Core Java/Scala)
● Experience with CI/CD pipeline (Jenkins, Git, Maven etc)
● Experience integrating solutions in a multi-region environment
● Self-motivated; learns quickly and delivers results with minimal supervision
● Experience with Agile/Scrum/DevOps software development methodologies.
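As a rough illustration of the messaging item above (assuming boto3 is installed and AWS credentials are configured; the queue URL is hypothetical):

import boto3  # AWS SDK for Python

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # hypothetical queue
sqs = boto3.client("sqs")

def send_and_receive():
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody="deployment-finished")
    # Long-poll for up to 5 seconds for a single message
    response = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=5)
    for message in response.get("Messages", []):
        print("Received:", message["Body"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])

if __name__ == "__main__":
    send_and_receive()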
Nice to Have:
● Experience in setting up the Elasticsearch, Logstash, Kibana (ELK) stack.
● Experience working with large-scale data.
● Experience with monitoring tools such as Splunk, Nagios, Grafana, DataDog, etc.
● Previous experience working with distributed architectures like Hadoop, MapReduce, etc.