Perforce Jobs in Delhi, NCR and Gurgaon


Apply to 11+ Perforce Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest Perforce Job opportunities across top companies like Google, Amazon & Adobe.

Global Digital Transformation Solutions Provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
7 - 10 yrs
₹21L - ₹30L / yr
Perforce
DevOps
Git
GitHub
Python
+7 more

JOB DETAILS:

* Job Title: Specialist I - DevOps Engineering

* Industry: Global Digital Transformation Solutions Provider

* Salary: Best in Industry

* Experience: 7-10 years

* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram

 

Job Description

Job Summary:

As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.

The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.

 

Key Responsibilities:

  • Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
  • Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
  • Use git-p4 (a Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
  • Define migration scope — determine how much history to migrate and plan the repository structure.
  • Manage branch renaming and repository organization for optimized post-migration workflows.
  • Collaborate with development teams to determine migration points and finalize migration strategies.
  • Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
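The large-file responsibility above amounts to a pre-migration scan of the workspace. A minimal sketch in Python, assuming a checked-out workspace on disk; the function name and threshold handling are ours for illustration, not part of any migration tool:

```python
import os

# GitHub rejects individual files larger than 100 MB; such files must be
# moved to Git LFS before or during migration. The threshold is illustrative.
LIMIT_BYTES = 100 * 1024 * 1024

def find_lfs_candidates(repo_root, limit=LIMIT_BYTES):
    """Walk a working copy and return paths of files exceeding the limit."""
    candidates = []
    for dirpath, dirnames, filenames in os.walk(repo_root):
        # Skip VCS metadata directories so we only scan tracked content.
        dirnames[:] = [d for d in dirnames if d not in (".git", ".p4root")]
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) > limit:
                candidates.append(path)
    return sorted(candidates)
```

Each reported path would then typically be registered with `git lfs track` before history is re-committed, so the migrated repository stays within GitHub's limits.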

 

Required Qualifications:

  • Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
  • Hands-on experience with P4-Fusion.
  • Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
  • Proficiency in migration tools such as git-p4 — installation, configuration, and troubleshooting.
  • Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
  • Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
  • Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
  • Familiarity with CI/CD pipeline integration to validate workflows post-migration.
  • Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
  • Excellent communication and collaboration skills for cross-team coordination and migration planning.
  • Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.

 

Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps Tools

 

Must-Haves

Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)

AdTech Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
Amazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD
+30 more

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1.     Cloud Security (AWS)-

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.

 

2.     DevOps Security (IaC, CI/CD, Kubernetes, Linux)-

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.
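As a flavour of what the misconfiguration scanning above does, here is a minimal Python sketch that scans a simplified `terraform show -json` plan for S3 buckets lacking server-side encryption. The plan shape is abbreviated and the rule is illustrative; real scanners such as Checkov or OPA policies are far more thorough:

```python
def check_s3_encryption(plan):
    """Flag aws_s3_bucket resources in a Terraform plan dict that lack
    server-side encryption. The dict mirrors `terraform show -json`
    output, simplified for illustration (nested modules are ignored)."""
    findings = []
    resources = (plan.get("planned_values", {})
                     .get("root_module", {})
                     .get("resources", []))
    for res in resources:
        if res.get("type") != "aws_s3_bucket":
            continue
        values = res.get("values", {})
        if not values.get("server_side_encryption_configuration"):
            findings.append(res.get("address"))
    return findings
```

In a pipeline, a non-empty findings list would fail the build or trigger automated remediation.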

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3.     MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
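A toy version of the data-drift check described above: a z-test of a live feature window's mean against a training baseline. Production systems typically use PSI or Kolmogorov-Smirnov tests per feature; the function name and threshold here are illustrative:

```python
import statistics

def detect_mean_drift(baseline, window, z_threshold=3.0):
    """Return True if the window mean deviates from the baseline mean by
    more than z_threshold standard errors -- a crude drift signal."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        # Constant baseline: any deviation at all counts as drift.
        return statistics.fmean(window) != mu
    standard_error = sigma / (len(window) ** 0.5)
    z = abs(statistics.fmean(window) - mu) / standard_error
    return z > z_threshold
```

A drift alert from a check like this would feed the same Grafana/Prometheus/CloudWatch alerting path as infrastructure metrics.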


4.     Network & Endpoint Security-

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5.     Threat Detection, Incident Response & Compliance-

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, runbooks (RRPs), and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
AdTech Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹30L - ₹40L / yr
DevOps
Docker
CI/CD
Amazon Web Services (AWS)
AWS CloudFormation
+43 more

REVIEW CRITERIA:

MANDATORY:

  • Strong Senior/Lead DevOps Engineer Profile
  • Must have 8+ years of hands-on experience in DevOps engineering, with a strong focus on AWS cloud infrastructure and services (EC2, VPC, EKS, RDS, Lambda, CloudFront, etc.).
  • Must have strong system administration expertise (installation, tuning, troubleshooting, security hardening)
  • Must have solid experience in CI/CD pipeline setup and automation using tools such as Jenkins, GitHub Actions, or similar
  • Must have hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible
  • Must have strong database expertise across MongoDB and Snowflake (administration, performance optimization, integrations)
  • Must have experience with monitoring and observability tools such as Prometheus, Grafana, ELK, CloudWatch, or Datadog
  • Must have good exposure to containerization and orchestration using Docker and Kubernetes (EKS)
  • Must be currently working in an AWS-based environment (AWS experience must be in the current organization)
  • It's an IC (individual contributor) role


PREFERRED:

  • Must be proficient in scripting languages (Bash, Python) for automation and operational tasks.
  • Must have strong understanding of security best practices, IAM, WAF, and GuardDuty configurations.
  • Exposure to DevSecOps and end-to-end automation of deployments, provisioning, and monitoring.
  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
  • Candidates from NCR region only (No outstation candidates).


ROLES AND RESPONSIBILITIES:

We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.


KEY RESPONSIBILITIES:

Cloud & Infrastructure as Code (IaC)-

  • Architect and manage AWS environments ensuring scalability, security, and high availability.
  • Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
  • Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.


CI/CD & Automation:

  • Build and maintain CI/CD pipelines (Jenkins, GitHub, SonarQube, automated testing).
  • Automate deployments, provisioning, and monitoring across environments.


Containers & Orchestration:

  • Deploy and operate workloads on Docker and Kubernetes (EKS).
  • Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access.
  • Optimize performance of containerized and microservices applications.


Monitoring & Reliability:

  • Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
  • Establish logging, alerting, and proactive monitoring for high availability.


Security & Compliance:

  • Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
  • Manage WAF, GuardDuty, Inspector, and other AWS-native security tools.
  • Configure VPNs, firewalls, secure access policies, and AWS Organizations.


Databases & Analytics:

  • Administer and optimize MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
  • Manage data reliability, performance tuning, and cloud-native integrations.
  • Work with Apache Airflow and Spark.


IDEAL CANDIDATE:

  • 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES, etc.).
  • Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening, etc.).
  • Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
  • Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
  • Proven ability with CI/CD pipeline automation and DevSecOps practices.
  • Practical experience with VPC Peering, Transit Gateway, WAF, GuardDuty, Inspector, and other advanced AWS networking and security tools.
  • Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
  • Strong scripting skills (Shell/bash, Python, or similar) for automation.
  • Bachelor's or Master's degree
  • Effective communication skills


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Lovoj

LOVOJ CONTACT
Posted by LOVOJ CONTACT
Delhi
3 - 10 yrs
₹8L - ₹14L / yr
Amazon Web Services (AWS)
AWS Lambda
CI/CD
DevOps

Key Responsibilities

  • Design, implement, and maintain CI/CD pipelines for backend, frontend, and mobile applications.
  • Manage cloud infrastructure using AWS (EC2, Lambda, S3, VPC, RDS, CloudWatch, ECS/EKS).
  • Configure and maintain Docker containers and/or Kubernetes clusters.
  • Implement and maintain Infrastructure as Code (IaC) using Terraform / CloudFormation.
  • Automate build, deployment, and monitoring processes.
  • Manage code repositories using Git/GitHub/GitLab, enforce branching strategies.
  • Implement monitoring and alerting using tools like Prometheus, Grafana, CloudWatch, ELK, Splunk.
  • Ensure system scalability, reliability, and security.
  • Troubleshoot production issues and perform root-cause analysis.
  • Collaborate with engineering teams to improve deployment and development workflows.
  • Optimize infrastructure costs and improve performance.

Required Skills & Qualifications

  • 3+ years of experience in DevOps, SRE, or Cloud Engineering.
  • Strong hands-on knowledge of AWS cloud services.
  • Experience with Docker, containers, and orchestrators (ECS, EKS, Kubernetes).
  • Strong understanding of CI/CD tools: GitHub Actions, Jenkins, GitLab CI, or AWS CodePipeline.
  • Experience with Linux administration and shell scripting.
  • Strong understanding of Networking, VPC, DNS, Load Balancers, Security Groups.
  • Experience with monitoring/logging tools: CloudWatch, ELK, Prometheus, Grafana.
  • Experience with Terraform or CloudFormation (IaC).
  • Good understanding of Node.js or similar application deployments.
  • Knowledge of NGINX/Apache and load balancing concepts.
  • Strong problem-solving and communication skills.

Preferred/Good to Have

  • Experience with Kubernetes (EKS).
  • Experience with Serverless architectures (Lambda).
  • Experience with Redis, MongoDB, RDS.
  • Certification in AWS Solutions Architect / DevOps Engineer.
  • Experience with security best practices, IAM policies, and DevSecOps.
  • Understanding of cost optimization and cloud cost management.


Eyther
Atharva Kulkarni
Posted by Atharva Kulkarni
Delhi, Faridabad, Bhopal
2 - 5 yrs
₹6L - ₹8L / yr
Kubernetes
Amazon Web Services (AWS)

About the Role


We are looking for a DevOps Engineer to build and maintain scalable, secure, and high-performance infrastructure for our next-generation healthcare platform. You will be responsible for automation, CI/CD pipelines, cloud infrastructure, and system reliability, ensuring seamless deployment and operations.


Responsibilities


1. Infrastructure & Cloud Management


• Design, deploy, and manage cloud-based infrastructure (AWS, Azure, GCP)

• Implement containerization (Docker, Kubernetes) and microservices orchestration

• Optimize infrastructure cost, scalability, and performance


2. CI/CD & Automation


• Build and maintain CI/CD pipelines for automated deployments

• Automate infrastructure provisioning using Terraform, Ansible, or CloudFormation

• Implement GitOps practices for streamlined deployments


3. Security & Compliance


• Ensure adherence to ABDM, HIPAA, GDPR, and healthcare security standards

• Implement role-based access controls, encryption, and network security best practices

• Conduct Vulnerability Assessment & Penetration Testing (VAPT) and compliance audits


4. Monitoring & Incident Management


• Set up monitoring, logging, and alerting systems (Prometheus, Grafana, ELK, Datadog, etc.)

• Optimize system reliability and automate incident response mechanisms

• Improve MTTR (Mean Time to Recovery) and system uptime KPIs
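The MTTR figure mentioned above is straightforward to compute once incident timestamps are captured. A minimal sketch; the incident record shape is an assumption, not any specific tool's schema:

```python
from datetime import datetime, timedelta

def mean_time_to_recovery(incidents):
    """MTTR = average time from detection to resolution.

    `incidents` is a list of (detected, resolved) datetime pairs -- an
    illustrative shape for incidents exported from an alerting system."""
    if not incidents:
        raise ValueError("need at least one incident")
    total = sum((resolved - detected for detected, resolved in incidents),
                timedelta())
    return total / len(incidents)
```

Tracking this number per week or per service makes the "improve MTTR" KPI concrete and auditable.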


5. Collaboration & Process Improvement


• Work closely with development and QA teams to streamline deployments

• Improve DevSecOps practices and cloud security policies

• Participate in architecture discussions and performance tuning


Required Skills & Qualifications


• 2+ years of experience in DevOps, cloud infrastructure, and automation

• Hands-on experience with AWS and Kubernetes

• Proficiency in Docker and CI/CD tools (Jenkins, GitHub Actions, ArgoCD, etc.)

• Experience with Terraform, Ansible, or CloudFormation

• Strong knowledge of Linux, shell scripting, and networking

• Experience with cloud security, monitoring, and logging solutions


Nice to Have


• Experience in healthcare or other regulated industries

• Familiarity with serverless architectures and AI-driven infrastructure automation

• Knowledge of big data pipelines and analytics workflows


What You'll Gain


• Opportunity to build and scale a mission-critical healthcare infrastructure

• Work in a fast-paced startup environment with cutting-edge technologies

• Growth potential into Lead DevOps Engineer or Cloud Architect roles

Careator Technologies Pvt Ltd
NCR (Delhi | Gurgaon | Noida)
3 - 9 yrs
₹5L - ₹20L / yr
Git
DevOps
Shell Scripting
Jenkins
Chef
+3 more
Permanent positions with a Product Client.

Essential Skills:
  • 3+ years' experience of Windows Server management
  • 3+ years' experience in Microsoft Azure administration, deployment, development, and operations
  • Networking (Azure networking, on-premise)
  • Firewalls & VPN
  • Experience in Linux administration
  • Continuous Integration, on VSTS in particular
  • Security administration, e.g. setup of appropriate authorisation groups, roles, and permissions structures
  • Security (SSL, PKI, SSO, SAML)
  • Experience of Azure ARM-based provisioning using Windows PowerShell scripting and templates
  • Experience of Azure IaaS and PaaS offerings
  • Experience with automation/configuration management using Puppet, Chef, or runbooks; ability to use a wide variety of open source technologies and cloud services (experience with Azure is required)
  • Application deployment tools (CI/CD) and their strategies
  • Experience building or managing applications from the application layer down
  • Exposure to security concepts / best practices
  • Familiarity with one or more version control systems, mainly Git and SourceTree

Advantageous:
  • Experience of NoSQL technology (i.e. Couchbase)
  • Desired State Configuration and deployment (Puppet)
  • Experience with container frameworks like Docker will be a definite plus
  • Experience of Azure solution deployment and development
  • Interest in, or experience of, mobile solution development (i.e. worked as part of a team to deliver a mobile application)
  • Azure Service Fabric
  • Visual Studio Team Services for build and deployment
Gedu Global
Lovelesh Dahiya
Posted by Lovelesh Dahiya
Noida
4 - 9 yrs
₹7L - ₹15L / yr
DevOps
MySQL
Microsoft Windows Azure
Python
C#
+6 more

**THIS IS A 100% WORK FROM OFFICE ROLE**

We are looking for an experienced DevOps engineer who will help our team establish a DevOps practice. You will work closely with the technical lead to identify and establish DevOps practices in the company.
You will help us build scalable, efficient cloud infrastructure. You'll implement monitoring for automated system health checks. Lastly, you'll build our CI pipeline, and train and guide the team in DevOps practices.


ROLE AND RESPONSIBILITIES:

 

• Understanding customer requirements and project KPIs
• Implementing various development, testing, automation tools, and IT infrastructure
• Planning the team structure, activities, and involvement in project management activities
• Managing stakeholders and external interfaces
• Setting up tools and required infrastructure
• Defining and setting development, test, release, update, and support processes for DevOps operation
• Reviewing, verifying, and validating the software code developed in the project
• Troubleshooting and fixing code bugs
• Monitoring processes during the entire lifecycle for adherence, and updating or creating new processes for improvement and minimising waste
• Encouraging and building automated processes wherever possible
• Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management
• Incident management and root cause analysis
• Coordination and communication within the team and with customers
• Selecting and deploying appropriate CI/CD tools
• Striving for continuous improvement and building a continuous integration, continuous delivery, and continuous deployment pipeline (CI/CD pipeline)
• Mentoring and guiding team members
• Monitoring and measuring customer experience and KPIs
• Managing periodic reporting on progress to management and the customer

 

Essential Skills and Experience (Technical Skills)

 

• Proven 3+ years of experience in DevOps
• A bachelor's degree or higher qualification in computer science
• The ability to code and script in multiple languages such as Python, C#, Java, Perl, and Ruby, and to work with databases such as SQL Server, MySQL, and NoSQL stores
• An understanding of security best practices and of automating security testing and updates in CI/CD (continuous integration, continuous deployment) pipelines
• The ability to deploy monitoring and logging infrastructure using standard tools
• Proficiency in container frameworks
• Mastery of infrastructure automation toolsets like Terraform and Ansible, and of the command line interfaces for Microsoft Azure, Amazon AWS, and other cloud platforms
• Certification in cloud security
• An understanding of various operating systems
• A strong focus on automation and agile development
• Excellent communication and interpersonal skills
• The ability to work in a fast-paced environment and handle multiple projects simultaneously

OTHER INFORMATION

 

The DevOps Engineer will also be expected to demonstrate their commitment:
• to gedu values and regulations, including the equal opportunities policy;
• to gedu's social, economic, and environmental responsibilities, minimising environmental impact in the performance of the role and actively contributing to the delivery of gedu's Environmental Policy;
• to their health and safety responsibilities, ensuring their contribution to a safe and secure working environment for staff, students, and other visitors to the campus.

Remote only
3 - 8 yrs
₹7L - ₹16L / yr
DevOps
Windows Azure
C++
NodeJS (Node.js)
Docker
+1 more
This is a very interesting job. Our ~30-year-old company seeks a generalist: somebody who can take on a variety of tasks. For example: updating our state-of-the-art logistics systems, moving more of our infrastructure to Azure, and helping our Fortune 50 clients get the best value from our software.
Opoyi Inc

Bishwajeet Mishra
Posted by Bishwajeet Mishra
NCR (Delhi | Gurgaon | Noida)
3 - 10 yrs
₹5L - ₹12L / yr
Amazon Web Services (AWS)
Linux administration
Shell Scripting
DevOps
Linux/Unix
+12 more
Skill Required (Technical)

Technical Experience/Knowledge Needed:

  • Experience in a cloud-hosted services environment.
  • Proven ability to work in a cloud-based environment.
  • Ability to manage and maintain cloud infrastructure on AWS.
  • Must have strong experience in technologies such as Docker, Kubernetes, Functions, etc.
  • Knowledge of orchestration tools such as Ansible.
  • Experience with the ELK Stack.
  • Strong knowledge of microservices, container-based architecture, and the corresponding deployment tools and techniques.
  • Hands-on knowledge of implementing multi-stage CI/CD with tools like Jenkins and Git.
  • Sound knowledge of tools like Kibana, Kafka, Grafana, Instana, and so on.
  • Proficient in Bash scripting.
  • Must have in-depth knowledge of clustering, load balancing, high availability and disaster recovery, auto scaling, etc.
Skill Required (Other):
  • AWS Certified Solutions Architect and/or Linux System Administrator
  • Strong ability to work independently on complex issues
  • Collaborate efficiently with internal experts to resolve customer issues quickly
  • No objection to working night shifts, as the production support team works on a 24x7 basis; rotational shifts are assigned weekly so candidates get equal opportunity to work day and night shifts
  • Early joining
  • Willingness to work in Delhi NCR
Indus OS

Gunjan Rastogi
Posted by Gunjan Rastogi
Noida, NCR (Delhi | Gurgaon | Noida)
2 - 4 yrs
₹7L - ₹12L / yr
DevOps
Ansible
Amazon Web Services (AWS)
Kubernetes
Python

What you do:

  • Developing automation for the various deployments core to our business
  • Documenting run books for various processes / improving knowledge bases
  • Identifying technical issues, communicating and recommending solutions
  • Miscellaneous support (user account, VPN, network, etc)
  • Develop continuous integration / deployment strategies
  • Production systems deployment/monitoring/optimization
  • Management of staging/development environments

What you know:

  • Ability to work with a wide variety of open source technologies and tools
  • Ability to code/script (Python, Ruby, Bash)
  • Experience with systems and IT operations
  • Comfortable with frequent incremental code testing and deployment
  • Strong grasp of automation tools (Chef, Packer, Ansible, or others)
  • Experience with cloud infrastructure and bare-metal systems
  • Experience optimizing infrastructure for high availability and low latencies
  • Experience with instrumenting systems for monitoring and reporting purposes
  • Well versed in software configuration management systems (git, others)
  • Experience with cloud providers (AWS or other) and tailoring apps for cloud deployment
  • Data management skills

Education:

  • Degree in Computer Engineering or Computer Science
  • 1-3 years of equivalent experience in DevOps roles.
  • Work conducted is focused on business outcomes
  • Can work in an environment with a high level of autonomy (at the individual and team level)
  • Comfortable working in an open, collaborative environment, reaching across functional boundaries.

Our Offering:

  • True start-up experience - no bureaucracy and a ton of tough decisions that have a real impact on the business from day one.
  • The camaraderie of an amazingly talented team that is working tirelessly to build a great OS for India and surrounding markets.

Perks:

  • Awesome benefits, social gatherings, etc.
  • Work with intelligent, fun and interesting people in a dynamic start-up environment.
Radical HealthTech

Shibjash Dutt
Posted by Shibjash Dutt
NCR (Delhi | Gurgaon | Noida)
2 - 7 yrs
₹5L - ₹15L / yr
Python
Terraform
Amazon Web Services (AWS)
Linux/Unix
Docker
DevOps Engineer


Radical is a platform connecting data, medicine and people -- through machine learning, and usable, performant products. Software has never been the strong suit of the medical industry -- and we are changing that. We believe that the same sophistication and performance that powers our daily needs through millions of consumer applications -- be it your grocery, your food delivery or your movie tickets -- when applied to healthcare, has a massive potential to transform the industry, and positively impact lives of patients and doctors. Radical works with some of the largest hospitals and public health programmes in India, and has a growing footprint both inside the country and abroad.


As a DevOps Engineer at Radical, you will:

Work closely with all stakeholders in the healthcare ecosystem - patients, doctors, paramedics and administrators - to conceptualise and bring to life the ideal set of products that add value to their time
Work alongside Software Developers and ML Engineers to solve problems and assist in architecture design
Work on systems which have an extraordinary emphasis on capturing data that can help build better workflows, algorithms and tools
Work on high performance systems that deal with several million transactions, multi-modal data and large datasets, with a close attention to detail


We’re looking for someone who has:

Familiarity and experience with writing working, well-documented and well-tested scripts, Dockerfiles, Puppet/Ansible/Chef/Terraform scripts.
Proficiency with scripting languages like Python and Bash.
Knowledge of systems deployment and maintenance, including setting up CI/CD and working alongside Software Developers, monitoring logs, dashboards, etc.
Experience integrating with a wide variety of external tools and services
Experience navigating AWS and leveraging appropriate services and technologies rather than DIY solutions (such as hosting an application directly on EC2 versus containerisation or Elastic Beanstalk)


It’s not essential, but great if you have:

An established track record of deploying and maintaining systems.
Experience with microservices and decomposition of monolithic architectures
Proficiency in automated testing.
Proficiency with the Linux ecosystem
Experience in deploying systems to production on cloud platforms such as AWS


The position is open now, and we are onboarding immediately.


Please write to us with an updated resume, and one thing you would like us to see as part of your application. This one thing can be anything that you think makes you stand apart among candidates.


Radical is based out of Delhi NCR, India, and we look forward to working with you!


We're looking for people who may not know all the answers, but are obsessive about finding them, and take pride in the code that they write. We are more interested in people who learn fast, think rigorously, aren't afraid to challenge assumptions, and take large bets -- only to work hard and prove themselves correct. You're encouraged to apply even if your experience doesn't precisely match the job description. Join us.
