SCA Jobs in Delhi, NCR and Gurgaon

11+ SCA Jobs in Delhi, NCR and Gurgaon | SCA Job openings in Delhi, NCR and Gurgaon

Apply to 11+ SCA Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest SCA Job opportunities across top companies like Google, Amazon & Adobe.

AdTech Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
Amazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1.     Cloud Security (AWS)-

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.

 

2.     DevOps Security (IaC, CI/CD, Kubernetes, Linux)-

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.
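The misconfiguration-scanning idea above can be sketched with a toy policy-as-code checker. The rule names, patterns, and the regex approach are illustrative assumptions only; production scanning would rely on tools like Checkov, tfsec, or OPA, which parse HCL properly:

```python
import re

# Naive policy-as-code rules: each maps a rule id to a regex that flags
# a risky Terraform pattern. Real scanners parse HCL into a model and
# evaluate policies against it; this text matching is only a sketch.
RULES = {
    "S3_PUBLIC_ACL": re.compile(r'acl\s*=\s*"public-read'),
    "UNENCRYPTED_VOLUME": re.compile(r'encrypted\s*=\s*false'),
    "OPEN_INGRESS": re.compile(r'cidr_blocks\s*=\s*\[\s*"0\.0\.0\.0/0"'),
}

def scan_terraform(source: str) -> list[str]:
    """Return the ids of rules violated by the given Terraform source."""
    return [rule_id for rule_id, pattern in RULES.items()
            if pattern.search(source)]

snippet = '''
resource "aws_ebs_volume" "data" {
  size      = 100
  encrypted = false
}
'''
print(scan_terraform(snippet))  # → ['UNENCRYPTED_VOLUME']
```

A check like this would run in the pipeline on every plan, failing the build (or auto-remediating) on any hit.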

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.
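The secrets-scanning step above can be illustrated with a minimal scanner. The three patterns are a tiny, hypothetical subset; real pipelines would use dedicated scanners that ship hundreds of rules plus entropy checks:

```python
import re

# Illustrative secret patterns only; not an exhaustive or authoritative set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token": re.compile(r"(?i)(api|secret)[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan_for_secrets(text: str) -> list[tuple[str, int]]:
    """Return (pattern_name, line_number) for every suspected secret."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits

sample = 'region = "us-east-1"\naws_key = "AKIAIOSFODNN7EXAMPLE"\n'
print(scan_for_secrets(sample))  # → [('aws_access_key', 2)]
```

Wired into a pre-commit hook or CI stage, any non-empty result would block the commit or fail the build.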

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.
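The image-hardening bullet above can be sketched as a tiny Dockerfile lint. The three rules here are illustrative assumptions, not the CIS benchmark itself; real checks would come from tools in the hadolint/Trivy class:

```python
def lint_dockerfile(dockerfile: str) -> list[str]:
    """Flag common hardening issues in a Dockerfile (illustrative rules only)."""
    findings = []
    lines = [l.strip() for l in dockerfile.splitlines() if l.strip()]
    # Rule 1: the image should drop root via a non-root USER instruction.
    if not any(l.upper().startswith("USER ") and not l.upper().startswith("USER ROOT")
               for l in lines):
        findings.append("runs as root: no non-root USER instruction")
    for l in lines:
        # Rule 2: base images should be pinned, never implicit or :latest.
        if l.upper().startswith("FROM") and (":latest" in l or ":" not in l):
            findings.append(f"unpinned base image: {l}")
        # Rule 3: remote ADD pulls unverified content into the image.
        if "ADD http" in l:
            findings.append(f"remote ADD (prefer verified COPY/curl): {l}")
    return findings

print(lint_dockerfile("FROM ubuntu\nRUN apt-get update\n"))
# flags both the missing non-root USER and the unpinned base image
```

The same pattern scales to registry admission: reject any image whose lint result is non-empty before it reaches the cluster.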

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3.     MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
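One common implementation of the data-drift bullet is the Population Stability Index (PSI) between a baseline feature sample and a live one. A minimal stdlib-only sketch follows; the 10-bin layout and the 0.2 alert level are conventional choices, not requirements of the role:

```python
import math

def psi(baseline: list[float], current: list[float], bins: int = 10) -> float:
    """Population Stability Index between two samples of one feature."""
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0  # guard against constant samples

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # Small epsilon avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    p, q = proportions(baseline), proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

baseline = [i / 100 for i in range(100)]       # uniform on [0, 1)
shifted = [0.5 + i / 200 for i in range(100)]  # shifted and narrowed
print(f"PSI: {psi(baseline, shifted):.2f}")    # well above the usual 0.2 alert level
```

A scheduled job would compute this per feature and push the score to Prometheus/CloudWatch, alerting when it crosses the chosen threshold.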


4.     Network & Endpoint Security-

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5.     Threat Detection, Incident Response & Compliance-

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, runbooks, and audit records.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
7 - 10 yrs
₹21L - ₹30L / yr
Perforce
DevOps
Git
GitHub
Python

JOB DETAILS:

* Job Title: Specialist I - DevOps Engineering

* Industry: Global Digital Transformation Solutions Provider

* Salary: Best in Industry

* Experience: 7-10 years

* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram

 

Job Description

Job Summary:

As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.

The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.

 

Key Responsibilities:

  • Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
  • Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
  • Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
  • Define migration scope — determine how much history to migrate and plan the repository structure.
  • Manage branch renaming and repository organization for optimized post-migration workflows.
  • Collaborate with development teams to determine migration points and finalize migration strategies.
  • Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
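The Git LFS step above — finding the files that would trip GitHub's 100 MB push limit before migration — can be sketched as follows. The directory pruning of `.git`/`.p4` folders and the exact walk are illustrative assumptions:

```python
import os

GITHUB_LIMIT = 100 * 1024 * 1024  # GitHub rejects pushes of files over 100 MB

def find_lfs_candidates(root: str, limit: int = GITHUB_LIMIT) -> list[tuple[str, int]]:
    """Walk a working tree and return (path, size) for files over the limit,
    largest first, skipping version-control metadata directories."""
    candidates = []
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d not in (".git", ".p4")]
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size > limit:
                candidates.append((path, size))
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)
```

Each candidate would then be routed through `git lfs track` (and, for history, a rewrite tool) before the repository is pushed to GitHub.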

 

Required Qualifications:

  • Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
  • Hands-on experience with P4-Fusion.
  • Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
  • Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
  • Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
  • Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
  • Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
  • Familiarity with CI/CD pipeline integration to validate workflows post-migration.
  • Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
  • Excellent communication and collaboration skills for cross-team coordination and migration planning.
  • Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.

 

Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps Tools

 

Must-Haves

Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)

Variyas Labs Pvt. Ltd.

Posted by Sales Team
Delhi, Gurugram, Noida
4 - 6 yrs
₹15L - ₹24L / yr
OpenStack
Linux/Unix
OpenShift
Kubernetes

Job Overview:

We are looking for a seasoned OpenStack Administrator with strong expertise in managing large-scale production environments. The ideal candidate should have hands-on experience with Linux, Kubernetes, and OpenShift, and be capable of performing routine maintenance, upgrades, and troubleshooting in complex cloud infrastructures.


The candidate must also be comfortable working with Red Hat support, managing escalations, and communicating effectively with both internal teams and external clients.


Key Skills & Qualifications:

  • Proven experience managing OpenStack infrastructure in production.
  • Strong proficiency in Linux system administration (RHEL/CentOS preferred).
  • Hands-on experience with Kubernetes and OpenShift.
  • Experience with system monitoring, log management, and troubleshooting tools.
  • Familiarity with RH support portal, managing cases, and following up on resolutions.
  • Excellent problem-solving skills and ability to work under pressure.
  • Strong client communication skills and ability to articulate technical issues clearly.
  • Proven ability to work in and manage large-scale production environments.


Candidates with OpenStack certification will be preferred.

Cygen Host

Posted by Cygen Host
Bengaluru (Bangalore), Mumbai, Delhi
3 - 7 yrs
₹12L - ₹30L / yr
Microsoft Azure
Amazon Web Services (AWS)

As a DevOps Engineer, you’ll play a key role in managing our cloud infrastructure, automating deployments, and ensuring high availability across our global server network. You’ll work closely with our technical team to optimize performance and scalability.


Responsibilities

✅ Design, implement, and manage cloud infrastructure (primarily Azure)

✅ Automate deployments using CI/CD pipelines (GitHub Actions, Jenkins, or equivalent)

✅ Monitor and optimize server performance & uptime (100% uptime goal)

✅ Work with cPanel-based hosting environments and ensure seamless operation

✅ Implement security best practices & compliance measures

✅ Troubleshoot system issues, scale infrastructure, and enhance reliability


Requirements

🔹 3-7 years of DevOps experience in cloud environments (Azure preferred)

🔹 Hands-on expertise in CI/CD tools (GitHub Actions, Jenkins, etc.)

🔹 Proficiency in Terraform, Ansible, Docker, Kubernetes

🔹 Strong knowledge of Linux system administration & networking

🔹 Experience with monitoring tools (Prometheus, Grafana, ELK, etc.)

🔹 Security-first mindset & automation-driven approach


Why Join Us?

🚀 Work at a fast-growing startup backed by Microsoft

💡 Lead high-impact DevOps projects in a cloud-native environment

🌍 Hybrid work model with flexibility in Bangalore, Delhi, or Mumbai

💰 Competitive salary ₹12-30 LPA based on experience


How to Apply?

📩 Apply now & follow us for future updates:

🔗 X (Twitter): https://x.com/CygenHost

🔗 LinkedIn: https://www.linkedin.com/company/cygen-host/

🔗 Instagram: https://www.instagram.com/cygenhost


AI & Consumer Tech company

Agency job
via Qrata by Rayal Rajan
Gurugram, Delhi, Noida, Ghaziabad, Faridabad
4 - 8 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Go Programming (Golang)

Role: Senior Engineer - Infrastructure


Key Responsibilities:


● Infrastructure Development and Management: Design, implement, and manage robust and scalable infrastructure solutions, ensuring optimal performance, security, and availability. Lead transition and migration projects, moving legacy systems to cloud-based solutions.

● Develop and maintain applications and services using Golang.

● Automation and Optimization: Implement automation tools and frameworks to optimize operational processes. Monitor system performance, optimizing and modifying systems as necessary.

● Security and Compliance: Ensure infrastructure security by implementing industry best practices and compliance requirements. Respond to and mitigate security incidents and vulnerabilities.



Qualifications:


● Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).

● Good understanding of prominent backend languages like Golang, Python, Node.js, or others.

● In-depth knowledge of network architecture, system security, and infrastructure scalability.

● Proficiency with development tools, server management, and database systems.

● Strong experience with cloud services (AWS), including deployment, scaling, and management.

● Knowledge of Azure is a plus.

● Familiarity with containers and orchestration services, such as Docker, Kubernetes, etc.

● Strong problem-solving skills and analytical thinking.

● Excellent verbal and written communication skills.

● Ability to thrive in a collaborative team environment.

● Genuine passion for backend development and keen interest in scalable systems. 




Classplus

Posted by Peoples Office
Noida
5 - 8 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
About us

Classplus is India's largest B2B ed-tech start-up, enabling 1 Lac+ educators and content creators to create their digital identity with their own branded apps. Starting in 2018, we have grown more than 10x in the last year, into India's fastest-growing video learning platform.
Over the years, marquee investors like Tiger Global, Surge, GSV Ventures, Blume, Falcon, Capital, RTP Global, and Chimera Ventures have supported our vision. Thanks to our awesome and dedicated team, we achieved a major milestone in March this year when we secured a “Series-D” funding.

Now as we go global, we are super excited to have new folks on board who can take the rocketship higher🚀. Do you think you have what it takes to help us achieve this? Find Out Below!

What will you do?

Define the overall process, which includes building a team for DevOps activities and ensuring that infrastructure changes are reviewed from an architecture and security perspective

Create standardized tooling and templates for development teams to create CI/CD pipelines

Ensure infrastructure is created and maintained using Terraform

Work with various stakeholders to design and implement infrastructure changes to support new feature sets in various product lines.

Maintain transparency and clear visibility of costs associated with various product verticals and environments, and work with stakeholders to plan for optimization and implementation

Spearhead continuous experimentation and innovation initiatives to optimize the infrastructure in terms of uptime, availability, latency, and costs

You should apply, if you

1. Are a seasoned Veteran: Have managed infrastructure at scale running web apps, microservices, and data pipelines using tools and languages like JavaScript (NodeJS), Go, Python, Java, Erlang, Elixir, C++ or Ruby (experience in any one of them is enough)

2. Are a Mr. Perfectionist: You have a strong bias for automation and taking the time to think about the right way to solve a problem versus quick fixes or band-aids.

3. Bring your A-Game: Have hands-on experience and ability to design/implement infrastructure with GCP services like Compute, Database, Storage, Load Balancers, API Gateway, Service Mesh, Firewalls, Message Brokers, Monitoring, Logging and experience in setting up backups, patching and DR planning

4. Are up with the times: Have expertise in one or more cloud platforms (Amazon Web Services, Google Cloud Platform, or Microsoft Azure), and have experience in creating and managing infrastructure completely through a tool like Terraform

5. Have it all on your fingertips: Have experience building CI/CD pipeline using Jenkins, Docker for applications majorly running on Kubernetes. Hands-on experience in managing and troubleshooting applications running on K8s

6. Have nailed the data storage game: Good knowledge of relational and NoSQL databases (MySQL, Mongo, BigQuery, Cassandra…)

7. Bring that extra zing: Have the ability to program/script, and strong fundamentals in Linux and networking.

8. Know your toys: Have a good understanding of microservices architecture and Big Data technologies, plus experience with highly available distributed systems, scaling data store technologies, and creating multi-tenant and self-hosted environments; that's a plus

Being Part of the Clan

At Classplus, you’re not an “employee” but a part of our “Clan”. So, you can forget about being bound by the clock as long as you’re crushing it workwise😎. Add to that some passionate people working with and around you, and what you get is the perfect work vibe you’ve been looking for!

It doesn’t matter how long your journey has been or your position in the hierarchy (we don’t do Sirs and Ma’ams); you’ll be heard, appreciated, and rewarded. One can say, we have a special place in our hearts for the Doers! ✊🏼❤️

Are you a go-getter with the chops to nail what you do? Then this is the place for you.
Coredge

Posted by Sajal Saxena
Bengaluru (Bangalore), Noida, Pune
5 - 10 yrs
₹20L - ₹35L / yr
OpenStack
Ansible
Ceph
Docker
Kubernetes

You need to drive automation for implementing scalable and robust applications. You will channel your dedication and passion into building server-side optimizations, ensuring low latency and high-end performance for the cloud deployed within the datacentre. You should have sound knowledge of the OpenStack and Kubernetes domains.

YOUR ‘OKR’ SUMMARY

OKR means Objectives and Key Results.

As a DevOps Engineer, you will understand the overall movement of data in the entire platform, find bottlenecks, define solutions, develop key pieces, write APIs, and own the deployment of those. You will work with internal and external development teams to discover these opportunities and to solve hard problems. You will also guide engineers in solving complex problems, developing your acceptance tests for those, and reviewing the work and the test results.

What you will do

  • As a DevOps Engineer, be responsible for systems used by customers across the globe.
  • Set the goals for the overall system and divide them into goals for the sub-systems.
  • Guide/motivate/convince/mentor the architects on sub-systems and help them achieve improvements with agility and speed.
  • Identify performance bottlenecks and come up with solutions to optimize the time and cost taken by the build/test systems.
  • Be a thought leader contributing to capacity planning for software/hardware, spanning internal and public cloud, solving the trade-off between turnaround time and utilization.
  • Bring in technologies enabling massively parallel systems to improve turnaround time by an order of magnitude.

What you will need

A strong sense of ownership, urgency, and drive. As an integral part of the development team, you will need the following skills to succeed.

  • BS or BE/B.Tech in EE/CS, or equivalent, with 10+ years of experience.
  • Strong background in architecting and shipping distributed, scalable software products, with a good understanding of system programming.
  • Excellent background in cloud technologies such as OpenStack, Docker, Kubernetes, Ansible, and Ceph is a must.
  • Excellent understanding of hybrid, multi-cloud architecture and edge computing concepts.
  • Ability to identify bottlenecks and come up with solutions to optimize them.
  • Programming and software development skills in Python and shell scripting, along with a good understanding of distributed systems and REST APIs.
  • Experience working with SQL/NoSQL database systems such as MySQL, MongoDB, or Elasticsearch.
  • Excellent knowledge and working experience with Docker containers and virtual machines.
  • Ability to work effectively across organizational boundaries to maximize alignment and productivity between teams.
  • Ability and flexibility to work and communicate effectively in a multi-national, multi-time-zone corporate environment.

Additional Advantage:
  • Deep understanding of technology and passion for what you do.
  • Background in designing high-performance, scalable software systems with a strong focus on optimizing hardware cost.
  • Solid collaborative and interpersonal skills, specifically a proven ability to effectively guide and influence within a dynamic environment.
  • Strong commitment to getting the most performance out of the system being worked on.
  • Prior development of a large software project using service-oriented architecture operating with real-time constraints.

What's In It for You?

  • You will get a chance to work on cloud-native and hyper-scale products
  • You will be working with industry leaders in cloud.
  • You can expect a steep learning curve.
  • You will get the experience of solving real-time problems; eventually, you become a problem solver.

Benefits & Perks:

  • Competitive Salary
  • Health Insurance
  • Open Learning - 100% Reimbursement for online technical courses.
  • Fast Growth - opportunities to grow quickly and surely
  • Creative Freedom + Flat hierarchy
  • Sponsorship for all employees who represent the company at events and meetups.
  • Flexible working hours
  • 5-day work week
  • Hybrid working model (office and WFH)

Our Hiring Process:

Candidates for this position can expect the following hiring process (subject to successfully clearing each round):

  • Initial resume screening call with our recruiting team
  • Next, candidates will be invited to solve coding exercises.
  • Next, candidates will be invited for the first technical interview.
  • Next, candidates will be invited for the final technical interview.
  • Finally, candidates will be invited for a Culture Plus interview with HR.
  • Candidates may be asked to interview with the leadership team.
  • Successful candidates will subsequently be made an offer via email.

As always, the interviews and screening call will be conducted via a mix of telephone and video calls.

So, if you are looking for an opportunity to really make a difference, make it with us…

Coredge.io provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable central, state or local laws.

Extramarks

Posted by Prachi Sharma
Remote, Noida, Delhi, Gurugram, Ghaziabad, Faridabad
3 - 7 yrs
₹8L - ₹14L / yr
DevOps
Terraform
Docker
Jenkins
Ansible

Job Description

• Minimum 3+ yrs of Experience in DevOps with AWS Platform

• Strong AWS knowledge and experience

• Experience in using CI/CD automation tools (Git, Jenkins) and configuration deployment tools (Puppet/Chef/Ansible)

• Experience with IaC tools such as Terraform

• Excellent experience in operating a container orchestration cluster (Kubernetes, Docker)

• Significant experience with Linux operating system environments

• Experience with infrastructure scripting solutions such as Python/Shell scripting

• Must have experience in designing infrastructure automation frameworks

• Good experience in setting up monitoring tools and dashboards (Grafana/Kafka)

• Excellent problem-solving, Log Analysis and troubleshooting skills

• Experience in setting up centralized logging for systems (EKS, EC2) and applications

• Process-oriented with great documentation skills

• Ability to work effectively within a team and with minimal supervision

Hiring for one MNC client for Gurgaon location

Agency job
via Natalie Consultants by Swati Bansal
Gurugram, Delhi, Noida, Ghaziabad, Faridabad
3 - 7 yrs
₹6L - ₹14L / yr
DevOps
Amazon Web Services (AWS)
Kubernetes
Docker
Terraform

Exposure to development and implementation practices in a modern systems environment, together with experience working in a project team using industry methodologies, e.g. Agile and continuous delivery

  • At least 3-5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, CodeDeploy, CloudFront, S3)
  • Strong understanding of how to secure AWS environments and meet compliance requirements
  • Experience using DevOps methodology and Infrastructure as Code
  • Automation / CI/CD tools – Bitbucket Pipelines, Jenkins
  • Infrastructure as code – Terraform, CloudFormation, etc.
  • Strong experience deploying and managing infrastructure with Terraform
  • Automated provisioning and configuration management – Ansible, Chef, Puppet
  • Experience with Docker, GitHub, Jenkins, ELK and deploying applications on AWS
  • Improve CI/CD processes, support software builds and CI/CD of the development departments
  • Develop, maintain, and optimize automated deployment code for development, test, staging and production environments
Khati Solutions Pvt. Ltd.

Posted by Vinayak Parashar
Delhi
3 - 7 yrs
₹10L - ₹18L / yr
DevOps
Amazon Web Services (AWS)
Python
Bash
MongoDB
  • Minimum 3 years of hands-on experience building a DevOps platform in the AWS cloud
  • Expertise in at least one scripting language – Python, Bash, shell, etc.
  • Own the design and implementation of an AWS environment that is scalable, highly available, and secure
  • Conceptualize, architect, and build automated deployment pipelines in a CI/CD environment like Jenkins
  • Solid experience working with AWS services (EC2, VPC, ELB, S3, CloudFormation, CloudTrail, Route 53, RDS, SQS, CodeDeploy, etc.)
  • Good understanding of AWS best practices, especially strong with AWS CloudFormation, Terraform, and CodeDeploy
  • Proficiency with the AWS CLI
  • Exposure to modern IT infrastructure (e.g. Docker Swarm, Mesos, Kubernetes, OpenStack)
  • Good working knowledge of RDBMS services in the cloud, DB replication, etc.
  • Experience in NoSQL (MongoDB, etc.)
  • Build and manage dashboards to provide visibility into production
Statusbrew

Posted by Tushar Mahajan
Amritsar, NCR (Delhi | Gurgaon | Noida), Chandigarh, Ludhiana, Bengaluru (Bangalore)
2 - 7 yrs
₹5L - ₹23L / yr
Amazon Web Services (AWS)
DevOps
Git
Do you breathe and drink DevOps? Do you keep redesigning and reviewing your deployment until you can optimize it no more? If yes, and you are good at it, extremely high-quality work + a great work-life balance awaits you.

StatusBrew is one of the few companies in India that have built a massively successful product at the global level. Ranked under 5000 by Alexa for a high volume of traffic and with about 1,000,000 monthly active users, we are ready to make it even bigger.

As the person in charge of DevOps at StatusBrew, you will be managing the deployment of a global product with:

1. 1 mn monthly active users, 200K daily active, 2,000 concurrent users at any point of time
2. 1 TB of data generated in 1 week, 1 billion database rows in just 1 day
3. $20,000 spent on AWS every month after many optimizations

We have an extremely cozy office in Amritsar. We have a tight-knit team that enjoys working and having fun together. Talk to us if you want to join us in our journey to the next 100 million users.