Fusion Jobs in Delhi, NCR and Gurgaon


Apply to 11+ Fusion Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest Fusion Job opportunities across top companies like Google, Amazon & Adobe.

Global Digital Transformation Solutions Provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
7 - 10 yrs
₹21L - ₹30L / yr
Perforce
DevOps
Git
GitHub
Python
+7 more

JOB DETAILS:

* Job Title: Specialist I - DevOps Engineering

* Industry: Global Digital Transformation Solutions Provider

* Salary: Best in Industry

* Experience: 7-10 years

* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram

 

Job Description

Job Summary:

As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.

The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.

 

Key Responsibilities:

  • Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
  • Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
  • Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
  • Define migration scope — determine how much history to migrate and plan the repository structure.
  • Manage branch renaming and repository organization for optimized post-migration workflows.
  • Collaborate with development teams to determine migration points and finalize migration strategies.
  • Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
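The large-file step above can be sketched in Python: walk a checked-out workspace, flag anything over GitHub's 100 MB hard limit, and emit the `.gitattributes` lines that route those files through Git LFS. This is an illustrative sketch, not part of git-p4 or P4-Fusion; the function names and directory-pruning choices are our own.

```python
import os

# GitHub rejects individual files larger than 100 MB; track them with Git LFS instead.
GITHUB_LIMIT_BYTES = 100 * 1024 * 1024

def find_oversized_files(repo_root, limit=GITHUB_LIMIT_BYTES):
    """Walk a workspace and yield relative paths of files exceeding the limit."""
    for dirpath, dirnames, filenames in os.walk(repo_root):
        # Skip VCS metadata directories.
        dirnames[:] = [d for d in dirnames if d not in (".git", ".p4root")]
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) > limit:
                yield os.path.relpath(path, repo_root)

def lfs_attributes(paths):
    """Build .gitattributes lines that route each file through Git LFS."""
    return [f"{p} filter=lfs diff=lfs merge=lfs -text" for p in sorted(set(paths))]
```

In a real migration the emitted lines would be appended to `.gitattributes` before the history rewrite, so large blobs land in LFS storage rather than the Git object database.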

 

Required Qualifications:

  • Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
  • Hands-on experience with P4-Fusion.
  • Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
  • Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
  • Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
  • Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
  • Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
  • Familiarity with CI/CD pipeline integration to validate workflows post-migration.
  • Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
  • Excellent communication and collaboration skills for cross-team coordination and migration planning.
  • Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.

 

Skills: GitHub, Kubernetes, Perforce, Perforce (Helix Core), DevOps tools

 

Must-Haves

Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)

WITS Innovation Lab
Prabhnoor Kaur
Posted by Prabhnoor Kaur
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 5 yrs
₹3L - ₹7L / yr
Terraform
Kubernetes
Jenkins
Ansible
Amazon Web Services (AWS)
+8 more

We are looking for a skilled DevOps Engineer with hands-on experience in cloud platforms, CI/CD pipelines, container orchestration, and infrastructure automation. The ideal candidate is someone who loves solving reliability challenges, automating everything, and ensuring seamless delivery across environments.

Key Responsibilities

  • Design, implement, and maintain CI/CD pipelines using GitHub Actions, Jenkins, and GitHub.
  • Manage and optimize infrastructure on AWS/GCP, ensuring scalability, security, and high availability.
  • Deploy and manage containerized applications using Docker and Kubernetes.
  • Build, automate, and manage infrastructure as code using Terraform.
  • Configure and manage automation tools and workflows using Ansible.
  • Monitor system performance, troubleshoot production issues, and ensure smooth operations.
  • Implement best practices for code management, release processes, and DevOps standards.
  • Collaborate closely with development teams to improve build pipelines and deployment workflows.
  • Write scripts in Python/Bash to automate operational tasks.
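As an illustration of the Python automation glue the responsibilities above call for, here is a minimal sketch that bumps one container's image tag inside a Kubernetes Deployment manifest loaded as a plain dict (e.g. parsed from YAML). The manifest shape follows the standard Deployment schema, but the function name and the example values are hypothetical.

```python
import copy

def set_image_tag(manifest, container_name, new_tag):
    """Return a copy of a Deployment manifest with one container's image tag replaced."""
    updated = copy.deepcopy(manifest)  # leave the caller's manifest untouched
    for container in updated["spec"]["template"]["spec"]["containers"]:
        if container["name"] == container_name:
            repo = container["image"].rsplit(":", 1)[0]  # strip the old tag
            container["image"] = f"{repo}:{new_tag}"
    return updated
```

A CI/CD pipeline step would typically run a helper like this between building the image and applying the manifest, keeping the tag as the single value that changes per release.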

Required Skills & Experience

  • 2+ years of hands-on experience as a DevOps Engineer or in a similar role.
  • Strong expertise in AWS or GCP cloud services.
  • Solid understanding of Kubernetes (deployment, scaling, service mesh, packaging).
  • Proficiency with Terraform for infrastructure automation.
  • Experience with Git, GitHub, and GitHub Actions for source control and CI/CD.
  • Good knowledge of Jenkins pipelines and automation.
  • Hands-on experience with Ansible for configuration management.
  • Strong scripting skills using Python or Bash.
  • Understanding of monitoring, logging, and security best practices.


A leading Edtech company


Agency job
via Jobdost by Sathish Kumar
Noida
5 - 8 yrs
₹12L - ₹17L / yr
DevOps
Amazon Web Services (AWS)
CI/CD
Docker
Kubernetes
+4 more
  • Minimum 3+ years of experience in DevOps on the AWS platform
  • Strong AWS knowledge and experience
  • Experience using CI/CD automation tools (Git, Jenkins) and configuration/deployment tools (Puppet/Chef/Ansible)
  • Experience with IaC tools such as Terraform
  • Excellent experience operating a container orchestration cluster (Kubernetes, Docker)
  • Significant experience with Linux operating system environments
  • Experience with infrastructure scripting solutions such as Python/shell scripting
  • Must have experience in designing infrastructure automation frameworks
  • Good experience setting up monitoring tools and dashboards (Grafana/Kafka)
  • Excellent problem-solving, log analysis, and troubleshooting skills
  • Experience setting up centralized logging for systems (EKS, EC2) and applications
  • Process-oriented with great documentation skills
  • Ability to work effectively within a team and with minimal supervision
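The log-analysis skill listed above usually starts with counting events by severity. A minimal sketch, assuming a simple `timestamp level message` line format (the regex and function name are our own; a real centralized-logging setup would lean on the EKS/EC2 pipeline feeding ELK or CloudWatch rather than ad-hoc scripts):

```python
import re
from collections import Counter

# Assumed line shape: "<date> <time> <LEVEL> <message>"
LOG_LINE = re.compile(r"^\S+ \S+ (?P<level>INFO|WARN|ERROR) (?P<msg>.*)$")

def summarize_levels(lines):
    """Count log lines per severity level; silently skip lines that don't match."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            counts[m.group("level")] += 1
    return dict(counts)
```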
Eyther
Atharva Kulkarni
Posted by Atharva Kulkarni
Delhi, Faridabad, Bhopal
2 - 5 yrs
₹6L - ₹8L / yr
Kubernetes
Amazon Web Services (AWS)

About the Role


We are looking for a DevOps Engineer to build and maintain scalable, secure, and high-performance infrastructure for our next-generation healthcare platform. You will be responsible for automation, CI/CD pipelines, cloud infrastructure, and system reliability, ensuring seamless deployment and operations.


Responsibilities


1. Infrastructure & Cloud Management


• Design, deploy, and manage cloud-based infrastructure (AWS, Azure, GCP)

• Implement containerization (Docker, Kubernetes) and microservices orchestration

• Optimize infrastructure cost, scalability, and performance


2. CI/CD & Automation


• Build and maintain CI/CD pipelines for automated deployments

• Automate infrastructure provisioning using Terraform, Ansible, or CloudFormation

• Implement GitOps practices for streamlined deployments


3. Security & Compliance


• Ensure adherence to ABDM, HIPAA, GDPR, and healthcare security standards

• Implement role-based access controls, encryption, and network security best practices

• Conduct Vulnerability Assessment & Penetration Testing (VAPT) and compliance audits


4. Monitoring & Incident Management


• Set up monitoring, logging, and alerting systems (Prometheus, Grafana, ELK, Datadog, etc.)

• Optimize system reliability and automate incident response mechanisms

• Improve MTTR (Mean Time to Recovery) and system uptime KPIs


5. Collaboration & Process Improvement


• Work closely with development and QA teams to streamline deployments

• Improve DevSecOps practices and cloud security policies

• Participate in architecture discussions and performance tuning
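The MTTR KPI mentioned under incident management is straightforward to compute once incident detection and resolution timestamps are recorded. A minimal sketch, assuming each incident is a `(detected_at, resolved_at)` pair (the representation and function name are our own):

```python
from datetime import timedelta

def mean_time_to_recovery(incidents):
    """MTTR = average of (resolved_at - detected_at) over resolved incidents."""
    durations = [resolved - detected for detected, resolved in incidents]
    # Sum timedeltas with an explicit zero start, then average.
    return sum(durations, timedelta()) / len(durations)
```

In practice the timestamps would come from the alerting and ticketing systems (e.g. Prometheus alerts and their resolution events), and the metric would be tracked per service over a rolling window.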


Required Skills & Qualifications


• 2+ years of experience in DevOps, cloud infrastructure, and automation

• Hands-on experience with AWS and Kubernetes

• Proficiency in Docker and CI/CD tools (Jenkins, GitHub Actions, ArgoCD, etc.)

• Experience with Terraform, Ansible, or CloudFormation

• Strong knowledge of Linux, shell scripting, and networking

• Experience with cloud security, monitoring, and logging solutions


Nice to Have


• Experience in healthcare or other regulated industries

• Familiarity with serverless architectures and AI-driven infrastructure automation

• Knowledge of big data pipelines and analytics workflows


What You'll Gain


• Opportunity to build and scale a mission-critical healthcare infrastructure

• Work in a fast-paced startup environment with cutting-edge technologies

• Growth potential into Lead DevOps Engineer or Cloud Architect roles

Compliance & Registration Services Pvt Ltd
Compliance & Registration  Services Private Limited
Posted by Compliance & Registration Services Private Limited
Delhi
1 - 2 yrs
₹2.5L - ₹4L / yr
Python
Django
NodeJS (Node.js)
  1. Proficiency in Python, Django, and other allied frameworks
  2. Expertise in designing UI/UX interfaces
  3. Expertise in testing, troubleshooting, debugging, and problem-solving
  4. Basic knowledge of SEO
  5. Good communication
  6. Team building and good acumen
  7. Ability to perform
  8. Continuous learning
Dhwani Rural Information Systems

at Dhwani Rural Information Systems

1 candid answer
3 recruiters
Sunandan Madan
Posted by Sunandan Madan
Gurgaon
2 - 6 yrs
₹4L - ₹10L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+10 more
Job Overview
We are looking for an experienced DevOps professional. Be a part of a vibrant, rapidly growing tech enterprise with a great working environment. As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You'll also be involved in building tools and establishing processes to empower developers to deploy and release their code seamlessly.

Responsibilities
The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems.

  • Understanding of accessibility and security compliance (depending on the specific project)
  • User authentication and authorization between multiple systems, servers, and environments
  • Integration of multiple data sources and databases into one system
  • Understanding of the fundamental design principles behind a scalable application
  • Experience with configuration management tools (Ansible/Chef/Puppet); familiarity with cloud service providers (AWS/DigitalOcean) and the Docker + Kubernetes ecosystem is a plus
  • Ability to make key decisions for our infrastructure, networking, and security
  • Writing and maintaining shell scripts during migrations and for DB connections
  • Monitoring production server health across parameters (CPU load, physical memory, swap memory) and setting up monitoring tools such as Nagios
  • Creating alerts and configuring monitoring of specified metrics to manage cloud infrastructure efficiently
  • Setting up and managing VPCs and subnets, connecting different zones, and blocking suspicious IPs/subnets via ACLs
  • Creating and managing AMIs/snapshots/volumes, and upgrading/downgrading AWS resources (CPU, memory, EBS)
  • Managing microservices at scale and maintaining the compute and storage infrastructure for various product teams

  
  • Strong knowledge of configuration management tools such as Ansible, Chef, and Puppet
  • Extensive experience with change-tracking tools like JIRA, log analysis, and maintaining documentation of production server error-log reports
  • Experience in troubleshooting, backup, and recovery
  • Excellent knowledge of cloud service providers such as AWS and DigitalOcean
  • Good knowledge of the Docker and Kubernetes ecosystem
  • Proficient understanding of code versioning tools such as Git
  • Must have experience working in an automated environment
  • Good knowledge of AWS services such as Amazon EC2, Amazon S3 (Amazon Glacier), Amazon VPC, and Amazon CloudWatch
  • Scheduling jobs using crontab and creating swap memory
  • Proficient knowledge of access management (IAM)
  • Must have expertise in Maven, Jenkins, Chef, SVN, GitHub, Tomcat, Linux, etc.
  • Should have good knowledge of GCP
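The crontab scheduling mentioned above relies on five time fields (minute, hour, day of month, month, day of week). A simplified sketch of how one field expands to concrete values — it handles `*`, lists, ranges, and steps, but deliberately ignores month/day names and the vendor-specific extensions some cron implementations add:

```python
def expand_cron_field(field, lo, hi):
    """Expand one crontab field (e.g. '*/15', '1,5', '2-4') into sorted concrete values."""
    values = set()
    for part in field.split(","):          # comma-separated list of entries
        step = 1
        if "/" in part:                     # step syntax: base/step
            part, step_str = part.split("/")
            step = int(step_str)
        if part == "*":                     # full range for this field
            start, end = lo, hi
        elif "-" in part:                   # explicit range: a-b
            a, b = part.split("-")
            start, end = int(a), int(b)
        else:                               # single value
            start = end = int(part)
        values.update(range(start, end + 1, step))
    return sorted(values)
```

For example, the minute field `*/15` expands to 0, 15, 30, and 45 past each hour.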

Educational Qualifications
B.Tech (IT)/M.Tech/MBA (IT)/BCA/MCA or any degree in a relevant field
Experience: 2-6 years
Coredge

at Coredge

3 recruiters
Sajal Saxena
Posted by Sajal Saxena
Bengaluru (Bangalore), Noida, Pune
5 - 10 yrs
₹20L - ₹35L / yr
OpenStack
Ansible
Ceph
Docker
Kubernetes
+1 more

You need to drive automation for implementing scalable and robust applications. You will bring dedication and passion to server-side optimization, ensuring low latency and high-end performance for the cloud deployed within the datacentre. You should have sound knowledge of the OpenStack and Kubernetes domains.

YOUR ‘OKR’ SUMMARY

OKR means Objectives and Key Results.

As a DevOps Engineer, you will understand the overall movement of data in the entire platform, find bottlenecks, define solutions, develop key pieces, write APIs, and own the deployment of those. You will work with internal and external development teams to discover these opportunities and to solve hard problems. You will also guide engineers in solving complex problems, developing your acceptance tests for those, and reviewing the work and the test results.

What you will do

  • As a DevOps Engineer, you will be responsible for systems used by customers across the globe.
  • Set goals for the overall system and divide them into goals for the sub-systems.
  • Guide, motivate, convince, and mentor the architects on sub-systems, helping them achieve improvements with agility and speed.
  • Identify performance bottlenecks and come up with solutions to optimize the time and cost taken by the build/test system.
  • Be a thought leader contributing to capacity planning for software/hardware, spanning internal and public cloud, solving the trade-off between turnaround time and utilization.
  • Bring in technologies enabling massively parallel systems to improve turnaround time by an order of magnitude.

What you will need

A strong sense of ownership, urgency, and drive. As an integral part of the development team, you will need the following skills to succeed.

  • BS or BE/B.Tech or equivalent experience in EE/CS with 10+ years of experience.
  • Strong background in architecting and shipping distributed, scalable software products with a good understanding of systems programming.
  • Excellent background in cloud technologies (OpenStack, Docker, Kubernetes, Ansible, Ceph) is a must.
  • Excellent understanding of hybrid and multi-cloud architecture and edge computing concepts.
  • Ability to identify bottlenecks and come up with solutions to optimize them.
  • Programming and software development skills in Python and shell script, along with a good understanding of distributed systems and REST APIs.
  • Experience working with SQL/NoSQL database systems such as MySQL, MongoDB, or Elasticsearch.
  • Excellent knowledge of and working experience with Docker containers and virtual machines.
  • Ability to work effectively across organizational boundaries to maximize alignment and productivity between teams.
  • Ability and flexibility to work and communicate effectively in a multi-national, multi-time-zone corporation.

Additional Advantage:
  • Deep understanding of technology and passion for what you do.
  • Background in designing high-performance, scalable software systems with a strong focus on optimizing hardware cost.
  • Solid collaborative and interpersonal skills, specifically a proven ability to effectively guide and influence within a dynamic environment.
  • Strong commitment to getting the most performance out of the system being worked on.
  • Prior development of a large software project using service-oriented architecture operating with real-time constraints.

What's In It for You?

  • You will get a chance to work on cloud-native and hyper-scale products.
  • You will work with industry leaders in cloud.
  • You can expect a steep learning curve.
  • You will gain experience solving real-time problems, and eventually become a problem solver yourself.

Benefits & Perks:

  • Competitive Salary
  • Health Insurance
  • Open Learning - 100% Reimbursement for online technical courses.
  • Fast Growth - opportunities to grow quickly and surely
  • Creative Freedom + Flat hierarchy
  • Sponsorship to all those employees who represent company in events and meet ups.
  • Flexible working hours
  • 5 days week
  • Hybrid Working model (Office and WFH)

Our Hiring Process:

Candidates for this position can expect the hiring process as follows (subject to successful clearing of every round)

  • Initial Resume screening call with our Recruiting team
  • Next, candidates will be invited to solve coding exercises.
  • Next, candidates will be invited for first technical interview
  • Next, candidates will be invited for final technical interview
  • Finally, candidates will be invited for Culture Plus interview with HR
  • Candidates may be asked to interview with the Leadership team
  • Successful candidates will subsequently be made an offer via email

The interviews and screening calls will be conducted via a mix of telephone and video calls.

So, if you are looking for an opportunity to really make a difference, make it with us…

Coredge.io provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable central, state or local laws.

one of our MNC Client


Agency job
via CETPA InfoTech by priya Gautam
Noida, Delhi, Gurugram, Ghaziabad, Faridabad
1 - 10 yrs
₹5L - ₹30L / yr
Docker
Kubernetes
DevOps
Linux/Unix
SQL Azure
+9 more

Mandatory:
● A minimum of 1 year of development, system design, or engineering experience
● Excellent social, communication, and technical skills
● In-depth knowledge of Linux systems
● Development experience in at least two of the following languages: PHP, Go, Python, JavaScript, C/C++, Bash
● In-depth knowledge of web servers (Apache; Nginx preferred)
● Strong in using DevOps tools: Ansible, Jenkins, Docker, ELK
● Knowledge of APM tools; New Relic is preferred
● Ability to learn quickly, master our existing systems, and identify areas of improvement
● Self-starter who enjoys and takes pride in the engineering work of their team
● Tried and tested real-world cloud computing experience: AWS/GCP/Azure
● Strong understanding of resilient systems design
● Experience in network design and management
Searce Inc

at Searce Inc

64 recruiters
Yashodatta Deshapnde
Posted by Yashodatta Deshapnde
Pune, Noida, Bengaluru (Bangalore), Mumbai, Chennai
3 - 10 yrs
₹5L - ₹20L / yr
DevOps
Kubernetes
Google Cloud Platform (GCP)
Terraform
Jenkins
+2 more
Role & Responsibilities:
• At least 4 years of hands-on experience with cloud infrastructure on GCP
• Hands-on experience with Kubernetes is mandatory
• Exposure to configuration management and orchestration tools at scale (e.g. Terraform, Ansible, Packer)
• Knowledge of and hands-on experience with DevOps tools (e.g. Jenkins, Groovy, and Gradle)
• Knowledge of and hands-on experience with various platforms (e.g. GitLab, CircleCI, and Spinnaker)
• Familiarity with monitoring and alerting tools (e.g. CloudWatch, ELK stack, Prometheus)
• Proven ability to work independently or as an integral member of a team

Preferable Skills:
• Familiarity with standard IT security practices such as encryption, credentials, and key management
• Proven experience in various coding languages (Java, Python) to support DevOps operations and cloud transformation
• Familiarity and knowledge of web standards (e.g. REST APIs, web security mechanisms)
• Hands-on experience with GCP
• Experience in performance tuning, service outage management, and troubleshooting

Attributes:
• Good verbal and written communication skills
• Exceptional leadership, time management, and organizational skills; ability to operate independently and make decisions with little direct supervision
Indus OS

at Indus OS

1 video
2 recruiters
Gunjan Rastogi
Posted by Gunjan Rastogi
Noida, NCR (Delhi | Gurgaon | Noida)
2 - 4 yrs
₹7L - ₹12L / yr
DevOps
Ansible
Amazon Web Services (AWS)
Kubernetes
Python

What you do :

  • Developing automation for the various deployments core to our business
  • Documenting run books for various processes / improving knowledge bases
  • Identifying technical issues, communicating and recommending solutions
  • Miscellaneous support (user account, VPN, network, etc)
  • Develop continuous integration / deployment strategies
  • Production systems deployment/monitoring/optimization
  • Management of staging/development environments

What you know :

  • Ability to work with a wide variety of open source technologies and tools
  • Ability to code/script (Python, Ruby, Bash)
  • Experience with systems and IT operations
  • Comfortable with frequent incremental code testing and deployment
  • Strong grasp of automation tools (Chef, Packer, Ansible, or others)
  • Experience with cloud infrastructure and bare-metal systems
  • Experience optimizing infrastructure for high availability and low latencies
  • Experience with instrumenting systems for monitoring and reporting purposes
  • Well versed in software configuration management systems (git, others)
  • Experience with cloud providers (AWS or other) and tailoring apps for cloud deployment
  • Data management skills

Education :

  • Degree in Computer Engineering or Computer Science
  • 1-3 years of equivalent experience in DevOps roles.
  • Work conducted is focused on business outcomes
  • Can work in an environment with a high level of autonomy (at the individual and team level)
  • Comfortable working in an open, collaborative environment, reaching across functional boundaries.

Our Offering :

  • True start-up experience - no bureaucracy and a ton of tough decisions that have a real impact on the business from day one.
  • The camaraderie of an amazingly talented team that is working tirelessly to build a great OS for India and surrounding markets.

Perks :

  • Awesome benefits, social gatherings, etc.
  • Work with intelligent, fun and interesting people in a dynamic start-up environment.
Radical HealthTech

at Radical HealthTech

3 recruiters
Shibjash Dutt
Posted by Shibjash Dutt
NCR (Delhi | Gurgaon | Noida)
2 - 7 yrs
₹5L - ₹15L / yr
Python
Terraform
Amazon Web Services (AWS)
Linux/Unix
Docker
DevOps Engineer


Radical is a platform connecting data, medicine and people -- through machine learning, and usable, performant products. Software has never been the strong suit of the medical industry -- and we are changing that. We believe that the same sophistication and performance that powers our daily needs through millions of consumer applications -- be it your grocery, your food delivery or your movie tickets -- when applied to healthcare, has a massive potential to transform the industry, and positively impact lives of patients and doctors. Radical works with some of the largest hospitals and public health programmes in India, and has a growing footprint both inside the country and abroad.


As a DevOps Engineer at Radical, you will:

  • Work closely with all stakeholders in the healthcare ecosystem - patients, doctors, paramedics and administrators - to conceptualise and bring to life the ideal set of products that add value to their time
  • Work alongside Software Developers and ML Engineers to solve problems and assist in architecture design
  • Work on systems which have an extraordinary emphasis on capturing data that can help build better workflows, algorithms and tools
  • Work on high-performance systems that deal with several million transactions, multi-modal data and large datasets, with close attention to detail


We’re looking for someone who has:

  • Familiarity and experience with writing working, well-documented, and well-tested scripts, Dockerfiles, and Puppet/Ansible/Chef/Terraform configurations
  • Proficiency with scripting languages like Python and Bash
  • Knowledge of systems deployment and maintenance, including setting up CI/CD, working alongside Software Developers, and monitoring logs, dashboards, etc.
  • Experience integrating with a wide variety of external tools and services
  • Experience navigating AWS and leveraging appropriate services and technologies rather than DIY solutions (such as containerisation or Elastic Beanstalk instead of hosting an application directly on EC2)
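As a small example of the kind of working, well-tested script described above, here is a sketch that polls a deployment health check with exponential backoff; the function name and defaults are illustrative, not part of any particular tool.

```python
import time

def wait_until_healthy(check, attempts=5, base_delay=0.1):
    """Poll a zero-argument health check, doubling the wait after each failure.

    Returns True as soon as the check passes, False if all attempts fail.
    """
    for attempt in range(attempts):
        if check():
            return True
        time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...
    return False
```

In a deployment script, `check` might wrap an HTTP request to the service's health endpoint, gating the rest of the rollout on it returning healthy.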


It’s not essential, but great if you have:

  • An established track record of deploying and maintaining systems
  • Experience with microservices and decomposition of monolithic architectures
  • Proficiency in automated tests
  • Proficiency with the Linux ecosystem
  • Experience in deploying systems to production on cloud platforms such as AWS


The position is open now, and we are onboarding immediately.


Please write to us with an updated resume, and one thing you would like us to see as part of your application. This one thing can be anything that you think makes you stand apart among candidates.


Radical is based out of Delhi NCR, India, and we look forward to working with you!


We're looking for people who may not know all the answers, but are obsessive about finding them, and take pride in the code that they write. We are more interested in the ability to learn fast, think rigorously and for people who aren’t afraid to challenge assumptions, and take large bets -- only to work hard and prove themselves correct. You're encouraged to apply even if your experience doesn't precisely match the job description. Join us.
