SCA Jobs in Delhi, NCR and Gurgaon

Apply to 11+ SCA Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest SCA Job opportunities across top companies like Google, Amazon & Adobe.

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
Amazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD
+30 more

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1. Cloud Security (AWS)

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.
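
To make one of the items above concrete, here is a minimal, illustrative Python (boto3) sketch of an encryption and public-access audit across S3 buckets. It assumes AWS credentials are already configured in the environment and is not presented as this team's actual tooling:

```python
"""Minimal sketch: flag S3 buckets without default encryption or a full public-access block."""
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    # Default encryption: the call raises a ClientError if no SSE rule is configured.
    try:
        s3.get_bucket_encryption(Bucket=name)
        encrypted = True
    except ClientError:
        encrypted = False

    # Public access block: flag buckets where any of the four settings is off or missing.
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        blocked = all(cfg.values())
    except ClientError:
        blocked = False

    if not (encrypted and blocked):
        print(f"REVIEW {name}: encryption={encrypted}, public_access_blocked={blocked}")
```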

 

2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.
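
As an illustration of the misconfiguration-scanning item above, a minimal sketch of a CI gate that runs Checkov against a Terraform directory (tfsec or OPA/Conftest could be substituted). The directory path is a placeholder assumption:

```python
"""Minimal sketch: fail the pipeline when Checkov finds IaC misconfigurations."""
import subprocess
import sys

TERRAFORM_DIR = "infra/terraform"  # hypothetical path to the Terraform code

result = subprocess.run(
    ["checkov", "--directory", TERRAFORM_DIR, "--quiet", "--compact"],
    capture_output=True,
    text=True,
)

print(result.stdout)

# Checkov exits non-zero when any check fails; propagate that to block the stage.
if result.returncode != 0:
    print("IaC misconfigurations found; blocking the deployment stage.", file=sys.stderr)
    sys.exit(1)
```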

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.
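
By way of example for the secure build and deployment item above, a minimal sketch of a pipeline gate that scans the candidate container image with Trivy before any signing or deployment step. The image tag is a placeholder, and the specific scanner is an assumption rather than a prescribed tool:

```python
"""Minimal sketch: scan a built image and refuse to promote it on HIGH/CRITICAL findings."""
import subprocess
import sys

IMAGE = "registry.example.com/app:candidate"  # hypothetical image tag from the build step

# Trivy returns exit code 1 when findings at the requested severities exist.
scan = subprocess.run(
    ["trivy", "image", "--exit-code", "1", "--severity", "HIGH,CRITICAL", IMAGE]
)

if scan.returncode != 0:
    print("Image scan failed; refusing to sign or deploy this build.", file=sys.stderr)
    sys.exit(1)

print("Image scan clean; proceed to artifact signing and deployment.")
```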

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.
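
In the spirit of the pod-security and CIS items above, a minimal spot-check sketch using the official Kubernetes Python client against the current kubeconfig context (assumed to point at the EKS cluster). It is a rough starting point, not a full CIS audit:

```python
"""Minimal sketch: flag containers that run privileged or without runAsNonRoot."""
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    for container in pod.spec.containers:
        sc = container.security_context
        privileged = bool(sc and sc.privileged)
        non_root = bool(sc and sc.run_as_non_root)
        # Pod-level securityContext is ignored here, so treat hits as review candidates.
        if privileged or not non_root:
            print(
                f"REVIEW {pod.metadata.namespace}/{pod.metadata.name}/{container.name}: "
                f"privileged={privileged}, runAsNonRoot={non_root}"
            )
```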

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.
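
To illustrate the EMR hardening item above, a minimal boto3 sketch that flags active EMR clusters running without an attached security configuration (the place where EMR's at-rest and in-transit encryption is defined). Read-only AWS credentials are assumed:

```python
"""Minimal sketch: report EMR clusters that lack a security configuration."""
import boto3

emr = boto3.client("emr")

clusters = emr.list_clusters(ClusterStates=["STARTING", "RUNNING", "WAITING"])["Clusters"]
for summary in clusters:
    detail = emr.describe_cluster(ClusterId=summary["Id"])["Cluster"]
    # SecurityConfiguration is absent when no encryption/auth config is attached.
    if not detail.get("SecurityConfiguration"):
        print(f"REVIEW EMR cluster {summary['Name']} ({summary['Id']}): no security configuration attached")
```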

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.
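
As one small, illustrative example of PII protection in a data flow, a sketch that masks obvious identifiers before records are logged or written to intermediate datasets. The regex patterns are simplistic placeholders; a production pipeline would use a dedicated detection service:

```python
"""Minimal sketch: regex-based masking of emails and 10-digit phone numbers."""
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{10}\b")

def redact(text: str) -> str:
    """Mask e-mail addresses and 10-digit phone numbers in free text."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact("Contact priya@example.com or 9876543210 for the report."))
# -> Contact [EMAIL] or [PHONE] for the report.
```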

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
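
A minimal sketch of the drift-detection and monitoring-integration items above: compare a live feature sample against a reference sample with a two-sample KS test and publish the statistic to CloudWatch, where Grafana dashboards and alerts can consume it. The feature values and metric namespace are placeholders:

```python
"""Minimal sketch: KS-test drift check published as a CloudWatch custom metric."""
import boto3
import numpy as np
from scipy.stats import ks_2samp

reference = np.random.normal(0.0, 1.0, 5000)   # training-time distribution (placeholder)
live = np.random.normal(0.3, 1.1, 1000)        # recent production sample (placeholder)

stat, p_value = ks_2samp(reference, live)

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_data(
    Namespace="MLObservability",               # hypothetical namespace
    MetricData=[{
        "MetricName": "FeatureDriftKS",
        "Dimensions": [{"Name": "Feature", "Value": "example_feature"}],
        "Value": float(stat),
    }],
)
print(f"KS statistic={stat:.3f}, p-value={p_value:.4f}")
```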


4. Network & Endpoint Security

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5. Threat Detection, Incident Response & Compliance

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, RRPs (Runbooks), and audits.
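
To illustrate the centralized log management and automated detection items in this list, a minimal boto3 sketch that runs a CloudWatch Logs Insights query over a CloudTrail log group for console logins without MFA. The log group name and query are assumptions for illustration, not a prescribed detection rule:

```python
"""Minimal sketch: Logs Insights query for non-MFA console logins in the last hour."""
import time
import boto3

LOG_GROUP = "/aws/cloudtrail/organization"  # hypothetical log group

logs = boto3.client("logs")
now = int(time.time())

query = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=now - 3600,
    endTime=now,
    queryString=(
        "fields @timestamp, userIdentity.arn "
        "| filter eventName = 'ConsoleLogin' and additionalEventData.MFAUsed = 'No'"
    ),
)

# Poll until the query completes, then hand matches to the alerting workflow.
while True:
    response = logs.get_query_results(queryId=query["queryId"])
    if response["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(2)

for row in response.get("results", []):
    print("ALERT non-MFA console login:", {f["field"]: f["value"] for f in row})
```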


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
7 - 10 yrs
₹21L - ₹30L / yr
Perforce
DevOps
Git
GitHub
Python
+7 more

JOB DETAILS:

* Job Title: Specialist I - DevOps Engineering

* Industry: Global Digital Transformation Solutions Provider

* Salary: Best in Industry

* Experience: 7-10 years

* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram

 

Job Description

Job Summary:

As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.

The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.

 

Key Responsibilities:

  • Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
  • Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
  • Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
  • Define migration scope — determine how much history to migrate and plan the repository structure.
  • Manage branch renaming and repository organization for optimized post-migration workflows.
  • Collaborate with development teams to determine migration points and finalize migration strategies.
  • Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
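
As a concrete example of the large-file and file-size troubleshooting items above, a minimal Python sketch that walks a synced copy of the Perforce workspace and reports files over GitHub's 100 MB per-file limit, so they can be routed to Git LFS (for example via `git lfs track`) before history is imported. The workspace path is a placeholder:

```python
"""Minimal sketch: list files above GitHub's 100 MB limit ahead of the migration."""
import os

WORKSPACE = "/data/p4/workspace"      # hypothetical local sync of the depot
LIMIT = 100 * 1024 * 1024             # GitHub's hard per-file limit (100 MB)

for root, _dirs, files in os.walk(WORKSPACE):
    for name in files:
        path = os.path.join(root, name)
        try:
            size = os.path.getsize(path)
        except OSError:
            continue  # skip unreadable files
        if size > LIMIT:
            print(f"{size / (1024 * 1024):.1f} MB  {os.path.relpath(path, WORKSPACE)}")
```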

 

Required Qualifications:

  • Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
  • Hands-on experience with P4-Fusion.
  • Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
  • Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
  • Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
  • Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
  • Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
  • Familiarity with CI/CD pipeline integration to validate workflows post-migration.
  • Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
  • Excellent communication and collaboration skills for cross-team coordination and migration planning.
  • Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.

 

Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps tools

 

Must-Haves

Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)

GradRight

Posted by Patrali M
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 4 yrs
₹8L - ₹15L / yr
Amazon Web Services (AWS)
CI/CD
Computer Networking

About GradRight

Our vision is to be the world’s leading Ed-Fin Tech company dedicated to making higher education accessible and affordable to all. Our mission is to drive transparency and accountability in the global higher education sector and create significant impact using the power of technology, data science and collaboration.

GradRight is the world’s first SaaS ecosystem that brings together students, universities and financial institutions in an integrated manner. It enables students to find and fund high return college education, universities to engage and select the best-fit students and banks to lend in an effective and efficient manner.

In the last three years, we have enabled students to get the best deals on over $2.8 billion in loan requests and facilitated disbursements of more than $350 million in loans. GradRight won the HSBC Fintech Innovation Challenge supported by the Ministry of Electronics & IT, Government of India, and was among the top 7 global finalists in The PIEoneer Awards, UK.

GradRight’s team possesses extensive domestic and international experience in the launch and scale-up of premier higher education institutions. It is led by alumni of IIT Delhi, BITS Pilani, IIT Roorkee, ISB Hyderabad and University of Pennsylvania. GradRight is a Delaware, USA registered company with a wholly owned subsidiary in India. 


About the Role

We are looking for a passionate DevOps Engineer with hands-on experience in AWS cloud infrastructure, containerization, and orchestration. The ideal candidate will be responsible for building, automating, and maintaining scalable cloud solutions, ensuring smooth CI/CD pipelines, and supporting development and operations teams.


Core Responsibilities

Design, implement, and manage scalable, secure, and highly available infrastructure on AWS.

Build and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or GitHub Actions.

Containerize applications using Docker and manage deployments with Kubernetes (EKS, self-managed, or other distributions).

Monitor system performance, availability, and security using tools like CloudWatch, Prometheus, Grafana, ELK/EFK stack.
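
As an illustration of the monitoring responsibility above, a minimal Python (boto3) sketch that creates a CloudWatch alarm on EC2 CPU utilization and notifies an SNS topic. The instance ID, topic ARN, and threshold are placeholder assumptions; Prometheus/Grafana alerting would typically sit alongside this:

```python
"""Minimal sketch: CloudWatch alarm for sustained high CPU on one EC2 instance."""
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="app-server-high-cpu",                                      # hypothetical name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,                       # 5-minute datapoints
    EvaluationPeriods=2,              # roughly 10 minutes above the threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:devops-alerts"],   # placeholder ARN
    AlarmDescription="Sustained high CPU on the application server.",
)
print("Alarm created/updated.")
```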

Collaborate with development teams to optimize application performance and deployment processes.


Required Skills & Experience

3–4 years of professional experience as a DevOps Engineer or similar role.

Strong expertise in AWS services (EC2, S3, RDS, Lambda, VPC, IAM, CloudWatch, EKS, etc.).

Hands-on experience with Docker and Kubernetes (EKS or self-hosted clusters).

Proficiency in CI/CD pipeline design and automation.

Experience with Infrastructure as Code (Terraform / AWS CloudFormation).

Solid understanding of Linux/Unix systems and shell scripting.

Knowledge of monitoring, logging, and alerting tools.

Familiarity with networking concepts (DNS, Load Balancing, Security Groups, Firewalls).

Basic programming/scripting experience in Python, Bash, or Go.


Nice to Have

Exposure to microservices architecture and service mesh (Istio/Linkerd).

Knowledge of serverless (AWS Lambda, API Gateway).



Variyas Labs Pvt. Ltd.

Posted by Rajan Agarwal
Delhi, Noida, Greater Noida
1 - 3 yrs
₹4L - ₹7L / yr
Kubernetes
OpenShift
ArgoCD
Jenkins
Linux administration

We are seeking a skilled and proactive Kubernetes Administrator with strong hands-on experience in managing Red Hat OpenShift environments. The ideal candidate will have a solid background in Kubernetes administration, ArgoCD, and Jenkins.

This role demands a self-motivated, quick learner who can confidently manage OpenShift-based infrastructure in production environments, communicate effectively with stakeholders, and escalate issues promptly when needed.


Key Skills & Qualifications

  • Strong experience with Red Hat OpenShift and Kubernetes administration (OpenShift or Kubernetes certification a plus).
  • Proven expertise in managing containerized workloads on OpenShift platforms.
  • Experience with ArgoCD, GitLab CI/CD, and Helm for deployment automation.
  • Ability to troubleshoot issues in high-pressure production environments.
  • Strong communication and customer-facing skills.
  • Quick learner with a positive attitude toward problem-solving.


K2India

Posted by Nidhi Kohli
Noida
3 - 7 yrs
₹4L - ₹6L / yr
Amazon Web Services (AWS)
DevOps
Cloud Engineer
AWS CloudFormation
Kubernetes
+4 more

  Job Description:

 

We are looking to recruit engineers with the zeal to learn cloud solutions using Amazon Web Services (AWS). We'll prefer an engineer who is passionate about AWS Cloud technology, passionate about helping customers succeed, passionate about quality, and who truly enjoys what they do. The qualified candidate for the AWS Cloud Engineer position is someone who has a can-do attitude and is an innovative thinker.

 

  • Be hands-on, with responsibility for the installation, configuration, and ongoing management of Linux-based solutions on AWS for our clients.
  • Responsible for creating and managing Auto Scaling EC2 instances using VPCs, Elastic Load Balancers, and other services across multiple Availability Zones to build resilient, scalable, and failsafe cloud solutions (see the sketch after this list).
  • Familiarity with other AWS services such as CloudFront, ALB, EC2, RDS, Route 53, etc. is desirable.
  • Working knowledge of RDS, DynamoDB, GuardDuty, WAF, and multi-tier architecture.
  • Proficient in working with Git, CI/CD pipelines, AWS DevOps tooling, Bitbucket, and Ansible.
  • Proficient in working with Docker Engine, containers, and Kubernetes.
  • Expertise in migrating workloads to AWS from different cloud providers.
  • Should be versatile in problem solving and able to resolve complex issues, ranging from OS and application faults to creatively improving solution design.
  • Should be ready to work in rotation on a 24x7 schedule, and be available on call at other times due to the critical nature of the role.
  • Fault finding, analysis, and logging of information for reporting on performance exceptions.
  • Deployment, automation, management, and maintenance of AWS cloud-based production systems.
  • Ensuring availability, performance, security, and scalability of AWS production systems.
  • Management of the creation, release, and configuration of production systems.
  • Evaluation of new technology alternatives and vendor products.
  • System troubleshooting and problem resolution across various application domains and platforms.
  • Pre-production acceptance testing for quality assurance.
  • Provision of critical system security by leveraging best practices and proven cloud security solutions.
  • Providing recommendations for architecture and process improvements.
  • Definition and deployment of systems for metrics, logging, and monitoring on the AWS platform.
  • Design, maintenance, and management of tools for automation of different operational processes.
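
To make the resilience expectation in the Auto Scaling item above concrete, here is a minimal, illustrative Python (boto3) sketch that flags Auto Scaling groups which do not span multiple Availability Zones or cannot tolerate losing one. Read-only AWS credentials are assumed, and this is not production tooling:

```python
"""Minimal sketch: flag single-AZ or under-sized Auto Scaling groups."""
import boto3

autoscaling = boto3.client("autoscaling")

for asg in autoscaling.describe_auto_scaling_groups()["AutoScalingGroups"]:
    zones = asg["AvailabilityZones"]
    # Fewer than two zones, or fewer than two instances, means a zone loss is an outage.
    if len(zones) < 2 or asg["MinSize"] < 2:
        print(
            f"REVIEW {asg['AutoScalingGroupName']}: "
            f"zones={zones}, min_size={asg['MinSize']}, desired={asg['DesiredCapacity']}"
        )
```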

 

Desired Candidate Profile

 

o    Customer-oriented personality with good communication skills, able to articulate and communicate very effectively, both verbally and in writing.

o    Be a team player who collaborates and shares experience and expertise with the rest of the team.

o    Understands database systems such as MSSQL, MongoDB, MySQL, MariaDB, DynamoDB, RDS.

o    Understands web servers such as Apache, Nginx.

o    Must be RHEL certified.

o    In-depth knowledge of Linux commands and services.

o    Able to manage all internet applications, including FTP, SFTP, Nginx, Apache, MySQL, PHP.

o    Good communication skills.

o    At least 3-7 years of experience in AWS and DevOps.

 

 

 

 

 

  Company Profile:

 

i2k2 Networks is a trusted name in the IT cloud hosting services industry. We help enterprises with cloud migration, cost optimization, support, and fully managed services, which helps them move faster and scale with lower IT costs. i2k2 Networks offers a complete range of cutting-edge solutions that drive Internet-powered business models. We excel in:

 

 

 

  - Managed IT Services

  - Dedicated Web Servers Hosting

  - Cloud Solutions

  - Email Solutions

  - Enterprise Services

  - Round the clock Technical Support

 

 

 

https://www.i2k2.com/

 

   Regards

   Nidhi Kohli

   i2k2 Networks Pvt Ltd.

   AM - Talent Acquisition

 

 

A leading Edtech company

Agency job
via Jobdost by Sathish Kumar
Noida
5 - 8 yrs
₹12L - ₹17L / yr
DevOps
Amazon Web Services (AWS)
CI/CD
Docker
Kubernetes
+4 more
  • Minimum 3+ years of experience in DevOps on the AWS platform
  • Strong AWS knowledge and experience
  • Experience in using CI/CD automation tools (Git, Jenkins) and configuration/deployment tools (Puppet/Chef/Ansible)
  • Experience with IaC tools such as Terraform
  • Excellent experience in operating a container orchestration cluster (Kubernetes, Docker)
  • Significant experience with Linux operating system environments
  • Experience with infrastructure scripting solutions such as Python/shell scripting
  • Must have experience in designing infrastructure automation frameworks
  • Good experience in setting up monitoring tools and dashboards (Grafana/Kafka)
  • Excellent problem-solving, log analysis, and troubleshooting skills
  • Experience in setting up centralized logging for systems (EKS, EC2) and applications
  • Process-oriented with great documentation skills
  • Ability to work effectively within a team and with minimal supervision
Neurosensum

Posted by Tanuj Diwan
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 3 yrs
₹4L - ₹10L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+1 more

At Neurosensum we are committed to making customer feedback more actionable. We have developed a platform called SurveySensum, which breaks the conventional market research turnaround time.

SurveySensum is becoming a great tool not only to capture feedback but also to extract useful insights through quick workflow setups and dashboards. We have more than 7 channels through which we can collect feedback. This pushes us to challenge conventional software development design principles. The team likes to grind and helps each other through tough situations.

Day to day responsibilities include:

  1. Work on the deployment of code via Bitbucket, AWS CodeDeploy, and manual processes (see the sketch after this list)
  2. Work on Linux/Unix OS and multi-tech application patching
  3. Manage, coordinate, and implement software upgrades, patches, and hotfixes on servers
  4. Create and modify scripts or applications to perform tasks
  5. Provide input on ways to improve the stability, security, efficiency, and scalability of the environment
  6. Ease developers' lives so that they can focus on business logic rather than deploying and maintaining it
  7. Manage the release of each sprint
  8. Educate the team on best practices
  9. Find ways to avoid human error and save time by automating processes using Terraform, CloudFormation, Bitbucket Pipelines, CodeDeploy, and scripting
  10. Implement cost-effective measures on the cloud and minimize existing costs
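
To illustrate item 1 and the automation items above, a minimal Python (boto3) sketch that triggers an AWS CodeDeploy deployment from a revision bundle that a Bitbucket pipeline has pushed to S3. The application, deployment group, bucket, and key names are hypothetical:

```python
"""Minimal sketch: start a CodeDeploy deployment from an S3 revision."""
import boto3

codedeploy = boto3.client("codedeploy")

response = codedeploy.create_deployment(
    applicationName="surveysensum-api",       # hypothetical application name
    deploymentGroupName="production",          # hypothetical deployment group
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "release-artifacts",     # placeholder bucket
            "key": "builds/app-42.zip",        # placeholder key from the pipeline
            "bundleType": "zip",
        },
    },
    description="Automated release triggered from the Bitbucket pipeline.",
)
print("Started deployment:", response["deploymentId"])
```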

Skills and prerequisites

  1. OOP knowledge
  2. Problem-solving nature
  3. Willing to do R&D
  4. Works with the team and supports their queries patiently
  5. Brings new things to the table – stays updated
  6. Prioritizes solutions over problems
  7. Willing to learn and experiment
  8. Techie at heart
  9. Git basics
  10. Basic AWS or any cloud platform – creating and managing EC2, Lambda, IAM, S3, etc.
  11. Basic Linux handling
  12. Docker and orchestration (great to have)
  13. Scripting – Python (preferably)/Bash
Horizontal Integration
Remote, Bengaluru (Bangalore), Hyderabad, Vadodara, Pune, Jaipur, Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
6 - 15 yrs
₹10L - ₹25L / yr
Amazon Web Services (AWS)
Windows Azure
Microsoft Windows Azure
Google Cloud Platform (GCP)
Docker
+2 more

Position Summary

DevOps is a Department of Horizontal Digital, within which we have 3 different practices.

  1. Cloud Engineering
  2. Build and Release
  3. Managed Services

This opportunity is for a Cloud Engineering role for someone who also has some experience with infrastructure migrations. It is a completely hands-on job focused on migrating client workloads to the cloud, reporting to the Solution Architect/Team Lead. Along with that, you are also expected to work on different projects building out Sitecore infrastructure from scratch.

We are a Sitecore Platinum Partner, and the majority of the infrastructure work we do is for Sitecore.

Sitecore is a .NET-based, enterprise-level web CMS that can be deployed on-prem or on IaaS, PaaS, and containers.

So, most of our DevOps work is currently planning, architecting and deploying infrastructure for Sitecore.
 

Key Responsibilities:

  • This role includes ownership of technical, commercial, and service elements related to cloud migration and infrastructure deployments.
  • The person selected for this position will ensure high customer satisfaction while delivering infrastructure and migration projects.
  • The candidate must expect to work across multiple projects in parallel, and must also have a fully flexible approach to working hours.
  • The candidate should keep themselves updated with the rapid technological advancements and developments taking place in the industry.
  • The candidate should also have know-how of Infrastructure as Code, Kubernetes, AKS/EKS, Terraform, Azure DevOps, and CI/CD pipelines.

Requirements:

  • Bachelor's degree in computer science or an equivalent qualification.
  • Total work experience of 6 to 8 years.
  • Total migration experience of 4 to 6 years.
  • Multiple-cloud background (Azure/AWS/GCP).
  • Implementation knowledge of VMs and VNets.
  • Know-how of cloud readiness and assessment.
  • Good understanding of the 6 R's of migration.
  • Detailed understanding of the cloud offerings.
  • Ability to assess and perform discovery independently for any cloud migration.
  • Working experience with containers and Kubernetes.
  • Good knowledge of Azure Site Recovery/Azure Migrate/CloudEndure.
  • Understanding of vSphere and Hyper-V virtualization.
  • Working experience with Active Directory.
  • Working experience with AWS CloudFormation/Terraform templates.
  • Working experience with VPN/ExpressRoute/peering/Network Security Groups/Route Tables/NAT Gateways, etc.
  • Experience working with CI/CD tools like Octopus, TeamCity, CodeBuild, CodeDeploy, Azure DevOps, and GitHub Actions.
  • High availability and disaster recovery implementations, taking RTO and RPO aspects into consideration.
  • Candidates with AWS/Azure/GCP certifications will be preferred.
Opoyi Inc

Posted by Bishwajeet Mishra
NCR (Delhi | Gurgaon | Noida)
3 - 10 yrs
₹5L - ₹12L / yr
Amazon Web Services (AWS)
Linux administration
Shell Scripting
DevOps
Linux/Unix
+12 more
Skill Required (Technical)

Technical Experience/Knowledge Needed:

  • Cloud-hosted services environment.
  • Proven ability to work in a cloud-based environment.
  • Ability to manage and maintain cloud infrastructure on AWS.
  • Must have strong experience in technologies such as Docker, Kubernetes, Functions, etc.
  • Knowledge of orchestration tools such as Ansible.
  • Experience with the ELK stack.
  • Strong knowledge of microservices, container-based architecture, and the corresponding deployment tools and techniques.
  • Hands-on knowledge of implementing multi-stage CI/CD with tools like Jenkins and Git.
  • Sound knowledge of tools like Kibana, Kafka, Grafana, Instana, and so on.
  • Proficient in Bash scripting.
  • Must have in-depth knowledge of clustering, load balancing, high availability and disaster recovery, auto scaling, etc.

Skill Required (Other):

  • AWS Certified Solutions Architect and/or Linux System Administrator
  • Strong ability to work independently on complex issues
  • Ability to collaborate efficiently with internal experts to resolve customer issues quickly
  • No objection to working night shifts, as the production support team operates on a 24x7 basis; rotational shifts are assigned weekly so that candidates get an equal opportunity to work both day and night shifts. Candidates willing to work night shifts only on a need basis should discuss this with us.
  • Early joining
  • Willingness to work in Delhi NCR
Statusbrew

Posted by Tushar Mahajan
Amritsar, NCR (Delhi | Gurgaon | Noida), Chandigarh, Ludhiana, Bengaluru (Bangalore)
2 - 7 yrs
₹5L - ₹23L / yr
Amazon Web Services (AWS)
DevOps
Git
Do you breathe and drink DevOps? Do you keep redesigning and reviewing your deployment until you can optimize it no more? If yes, and you are good at it, extremely high-quality work plus a great work-life balance awaits you.

StatusBrew is one of the few companies in India that have built a massively successful product at the global level. Ranked under 5000 by Alexa for a high volume of traffic and with about 1,000,000 monthly active users, we are ready to make it even bigger.

As the person in charge of DevOps at StatusBrew, you will be managing the deployment of a global product with:

  1. 1 million monthly active users, 200K daily active users, and 2,000 concurrent users at any point in time
  2. 1 TB of data generated in a week, and 1 billion database rows in just one day
  3. $20,000 spent on AWS every month, after many optimizations

We have an extremely cozy office in Amritsar. We have a tight-knit team that enjoys working and having fun together. Talk to us if you want to join us in our journey to the next 100 million users.
Adobe Systems
Noida, NCR (Delhi | Gurgaon | Noida)
11 - 17 yrs
₹40L - ₹80L / yr
DevOps
SRE
Amazon Web Services (AWS)
Adobe - An Award-Winning Employer

Adobe believes in hiring the very best. We are known for our vibrant, dynamic and rewarding workplace where personal and professional fulfillment and company success go hand in hand. We take pride in creating exceptional work experiences, encouraging innovation and being involved with our employees, customers, and communities. We invite you to discover what makes Adobe a place where exceptional people thrive. Click this link to experience A Day in the Life at Adobe: http://www.adobe.com/aboutadobe/careeropp/fma/dayinthelife/

About Technical Operations:

Adobe Technical Operations supports the Adobe Marketing Cloud's design, delivery, and operation. We’re a global team of over 200 smart, passionate people. We work with Development and Product Management to balance scope, quality, and time to market for our industry-leading SaaS solutions. Our multiple groups include Security, Networking, Storage, Data Center Operations, 24x7 NOC, Systems Engineering, and Application Development. We work with a wide variety of technologies – we are a collection of organic and acquired products and services. We focus on building services that Development and Operations can reuse to encourage speed, consistency and value creation.

Responsibilities

  • Develop the solutions to maintain and optimize the availability and performance of services to ensure a fantastic, reliable experience for our customers.
  • Envision, design and build the tools and solutions to keep the services healthy and responsive.
  • Continuously improve the techniques and processes used in Operations to optimize costs and increase productivity.
  • Evaluate and utilize newer technologies coming into the industry to keep the solution on the cutting edge.
  • Collaborate across different teams – development, quality engineering, product management, program management, etc. – to ensure a true DevOps culture and get the right systems and solutions in place for agile delivery of a growing portfolio of SaaS applications, product releases and infrastructure optimizations.
  • Effectively work across multiple time zones to collaborate with peers in other geographies.
  • Handle escalations from different quarters – customers, client care and engineering teams – resolve the issues and effectively communicate status across the board.
  • Create a culture that supports innovation and creativity while delivering high volume in a predictable and reliable way.
  • Keep the team motivated to go beyond the expected in execution and thought leadership.