Fusion Jobs in Delhi, NCR and Gurgaon

Apply to 11+ Fusion Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest Fusion job opportunities across top companies like Google, Amazon & Adobe.

Global Digital Transformation Solutions Provider
Agency job via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
7 - 10 yrs
₹21L - ₹30L / yr
Perforce
DevOps
Git
GitHub
Python
+7 more

JOB DETAILS:

* Job Title: Specialist I - DevOps Engineering

* Industry: Global Digital Transformation Solutions Provider

* Salary: Best in Industry

* Experience: 7-10 years

* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram

 

Job Description

Job Summary:

As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.

The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.

 

Key Responsibilities:

  • Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
  • Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
  • Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
  • Define migration scope — determine how much history to migrate and plan the repository structure.
  • Manage branch renaming and repository organization for optimized post-migration workflows.
  • Collaborate with development teams to determine migration points and finalize migration strategies.
  • Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
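The large-file responsibility above amounts to a pre-migration audit: before running the git-p4/P4-Fusion migration, scan the checked-out tree for files above GitHub's 100 MB push limit so they can be routed through Git LFS. A minimal sketch (the function name and directory-skip list are illustrative assumptions, not part of the posting):

```python
import os

# GitHub rejects pushes containing individual files larger than 100 MB;
# such files must be tracked with Git LFS instead.
GITHUB_LIMIT_BYTES = 100 * 1024 * 1024

def find_lfs_candidates(root, limit=GITHUB_LIMIT_BYTES):
    """Walk a working tree and return paths of files exceeding `limit`."""
    candidates = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip VCS metadata directories (hypothetical skip list).
        dirnames[:] = [d for d in dirnames if d not in (".git", ".p4")]
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) > limit:
                candidates.append(path)
    return sorted(candidates)

# Each candidate would then be covered by a `git lfs track` pattern
# before the migrated history is pushed to GitHub.
```

A real migration would also have to scan historical revisions (for example with `git lfs migrate`), not just the tip of the tree.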

 

Required Qualifications:

  • Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
  • Hands-on experience with P4-Fusion.
  • Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
  • Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
  • Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
  • Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
  • Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
  • Familiarity with CI/CD pipeline integration to validate workflows post-migration.
  • Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
  • Excellent communication and collaboration skills for cross-team coordination and migration planning.
  • Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.

 

Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps tools

 

Must-Haves

Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)

Careator Technologies Pvt Ltd
NCR (Delhi | Gurgaon | Noida)
3 - 9 yrs
₹5L - ₹20L / yr
Git
DevOps
Shell Scripting
Jenkins
Chef
+3 more
Permanent positions with a product client.

Essential Skills:
  • 3+ years' experience of Windows Server management
  • 3+ years' experience in Microsoft Azure administration, deployment, development and operations
  • Networking (Azure networking, on-premise), firewalls and VPN
  • Experience in Linux administration
  • Continuous Integration, on VSTS in particular
  • Security administration, e.g. setup of appropriate authorisation groups, roles and permission structures
  • Security (SSL, PKI, SSO, SAML)
  • Experience of Azure ARM-based provisioning using Windows PowerShell scripting and templates
  • Experience of Azure IaaS and PaaS offerings
  • Experience with automation/configuration management using Puppet, Chef or runbooks
  • Ability to use a wide variety of open source technologies and cloud services (experience with Azure is required)
  • Application deployment tools (CI/CD) and their strategies
  • Experience building or managing applications from the application layer down
  • Exposure to security concepts/best practices
  • Familiarity with one or more version control systems, mainly Git and SourceTree

Advantageous:
  • Experience of NoSQL technology (i.e. Couchbase)
  • Desired State Configuration and deployment (Puppet)
  • Experience in a container orchestration framework like Docker will be a definite plus
  • Experience of Azure solution deployment and development
  • Interest in, or experience of, mobile solution development (i.e. worked as part of a team to deliver a mobile application)
  • Azure Service Fabric
  • Visual Studio Team Services for build and deployment
Dhwani Rural Information Systems
Posted by Sunandan Madan
Gurgaon
2 - 6 yrs
₹4L - ₹10L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+10 more
Job Overview
We are looking for an experienced DevOps professional. Be part of a vibrant, rapidly growing tech enterprise with a great working environment. As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You'll also be involved in building tools and establishing processes that empower developers to deploy and release their code seamlessly.

Responsibilities
The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems.
  • Understanding of accessibility and security compliance (depending on the specific project)
  • User authentication and authorization between multiple systems, servers, and environments
  • Integration of multiple data sources and databases into one system
  • Understanding of the fundamental design principles behind a scalable application
  • Configuration management tools (Ansible/Chef/Puppet); cloud service providers (AWS/DigitalOcean) and the Docker + Kubernetes ecosystem are a plus
  • Able to make key decisions for our infrastructure, networking and security
  • Working with shell scripts during migrations and DB connection changes
  • Monitoring production server health across parameters (CPU load, physical memory, swap memory) and setting up monitoring tools such as Nagios
  • Creating alerts and configuring monitoring of specified metrics to manage cloud infrastructure efficiently
  • Setting up and managing VPCs and subnets, connecting different zones, and blocking suspicious IPs/subnets via ACLs
  • Creating/managing AMIs, snapshots and volumes, and upgrading/downgrading AWS resources (CPU, memory, EBS)
  • Managing microservices at scale and maintaining the compute and storage infrastructure for various product teams
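The ACL work above (blocking suspicious IPs/subnets) reduces to membership tests against CIDR blocks, which Python's standard `ipaddress` module handles directly. A minimal sketch; the deny-list below is made up for illustration (it uses reserved documentation ranges), not taken from the posting:

```python
import ipaddress

# Hypothetical deny-list of CIDR blocks, as might back a network ACL.
BLOCKED_SUBNETS = [
    ipaddress.ip_network("203.0.113.0/24"),   # documentation range, stand-in entry
    ipaddress.ip_network("198.51.100.0/25"),  # covers hosts .0 through .127
]

def is_blocked(addr: str) -> bool:
    """Return True if `addr` falls inside any denied subnet."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLOCKED_SUBNETS)

print(is_blocked("203.0.113.7"))   # inside the /24 entry -> True
print(is_blocked("192.0.2.1"))     # not in any blocked range -> False
```

In practice the same check would run against VPC network ACL or security-group rules rather than an in-process list, but the subnet arithmetic is identical.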

  
  • Strong knowledge of configuration management tools such as Ansible, Chef and Puppet
  • Extensive work with change-tracking tools like JIRA; log analysis and maintaining documentation of production server error-log reports
  • Experienced in troubleshooting, backup, and recovery
  • Excellent knowledge of cloud service providers such as AWS and DigitalOcean
  • Good knowledge of the Docker and Kubernetes ecosystem
  • Proficient understanding of code versioning tools, such as Git
  • Must have experience working in an automated environment
  • Good knowledge of AWS services such as Amazon EC2, Amazon S3 (including Amazon Glacier), Amazon VPC and Amazon CloudWatch
  • Scheduling jobs using crontab; creating swap memory
  • Proficient knowledge of access management (IAM)
  • Must have expertise in Maven, Jenkins, Chef, SVN, GitHub, Tomcat, Linux, etc.
  • Should have good knowledge of GCP

Educational Qualifications
B.Tech (IT) / M.Tech / MBA (IT) / BCA / MCA or any degree in a relevant field
Experience: 2-6 years
Extramarks
Posted by Prachi Sharma
Remote, Noida, Delhi, Gurugram, Ghaziabad, Faridabad
3 - 7 yrs
₹8L - ₹14L / yr
DevOps
Terraform
Docker
Jenkins
Ansible
+2 more

Job Description

• Minimum 3+ years of experience in DevOps with the AWS platform

• Strong AWS knowledge and experience

• Experience using CI/CD automation tools (Git, Jenkins) and configuration deployment tools (Puppet/Chef/Ansible)

• Experience with IaC tools such as Terraform

• Excellent experience operating a container orchestration cluster (Kubernetes, Docker)

• Significant experience with Linux operating system environments

• Experience with infrastructure scripting solutions such as Python/shell scripting

• Must have experience designing infrastructure automation frameworks

• Good experience setting up monitoring tools and dashboards (Grafana/Kafka)

• Excellent problem-solving, log analysis and troubleshooting skills

• Experience setting up centralized logging for systems (EKS, EC2) and applications

• Process-oriented with great documentation skills

• Ability to work effectively within a team and with minimal supervision

Crisp Analytics
Posted by Sneha Pandey
Mumbai, Noida, NCR (Delhi | Gurgaon | Noida)
1 - 4 yrs
₹5L - ₹9L / yr
DevOps
Amazon Web Services (AWS)
Network
Docker
Jenkins
+1 more

DevOps Engineer

 

The DevOps team is one of the core technology teams of Lumiq.ai and is responsible for managing network activities, automating Cloud setups and application deployments. The team also interacts with our customers to work out solutions. If you are someone who is always pondering how to make things better, how technologies can interact, how various tools, technologies, and concepts can help a customer or how you can use various technologies to improve user experience, then Lumiq is the place of opportunities.

 

Job Description

 

  • Explore the newest innovations in scalable and distributed systems.
  • Help design the architecture of the project, solutions to existing problems, and future improvements.
  • Make the cloud infrastructure and services smart by implementing automation and trigger-based solutions.
  • Interact with Data Engineers and Application Engineers to create continuous integration and deployment frameworks and pipelines.
  • Play around with large clusters on different clouds to tune your jobs or to learn.
  • Research new technologies, prove out concepts, and plan how to integrate or upgrade.
  • Be part of discussions on other projects, to learn or to help.

Responsibilities

  • 2+ years of experience as a DevOps Engineer.
  • You understand networking, from physical networks to software-defined networking.
  • You like containers and open-source orchestration systems like Kubernetes and Mesos.
  • Experience securing systems by creating robust access policies and enforcing network restrictions.
  • Knowledge of how applications work, which is essential for designing distributed systems.
  • Experience contributing to open-source projects, having discussed shortcomings or problems with the community on several occasions.
  • You understand that provisioning a virtual machine is not DevOps.
  • You know you are not a SysAdmin but a DevOps Engineer: the person behind developing operations so the system runs efficiently and scalably.
  • Exposure to private clouds, subnets, VPNs, peering and load balancers, and hands-on experience with them.
  • You check the logs before shouting about an error.
  • Multiple screens make you more efficient.
  • You are a doer who doesn't say the word "impossible".
  • You understand the value of documenting your work.
  • You understand the Big Data ecosystem and how to leverage the cloud for it.
  • You know these buddies: #airflow, #aws, #azure, #gcloud, #docker, #kubernetes, #mesos, #acs

 

One of our MNC clients
Agency job via CETPA InfoTech by Priya Gautam
Noida, Delhi, Gurugram, Ghaziabad, Faridabad
1 - 10 yrs
₹5L - ₹30L / yr
Docker
Kubernetes
DevOps
Linux/Unix
SQL Azure
+9 more

Mandatory:
● A minimum of 1 year of development, system design or engineering experience
● Excellent social, communication, and technical skills
● In-depth knowledge of Linux systems
● Development experience in at least two of the following languages: PHP, Go, Python, JavaScript, C/C++, Bash
● In-depth knowledge of web servers (Apache; Nginx preferred)
● Strong in using DevOps tools: Ansible, Jenkins, Docker, ELK
● Knowledge of APM tools (New Relic preferred)
● Ability to learn quickly, master our existing systems and identify areas of improvement
● Self-starter who enjoys and takes pride in the engineering work of their team
● Tried-and-tested real-world cloud computing experience: AWS/GCP/Azure
● Strong understanding of resilient systems design
● Experience in network design and management
Neurosensum
Posted by Tanuj Diwan
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 3 yrs
₹4L - ₹10L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+1 more

At Neurosensum we are committed to making customer feedback more actionable. We have developed a platform called SurveySensum which breaks the conventional market research turnaround time.

SurveySensum is becoming a great tool not only to capture feedback but also to extract useful insights with quick workflow setups and dashboards. We have more than 7 channels through which we can collect feedback. This pushes us to challenge conventional software development design principles. The team likes to grind and help each other through tough situations.

Day to day responsibilities include:

  1. Work on code deployment via Bitbucket, AWS CodeDeploy and manual processes
  2. Work on Linux/Unix OS and multi-tech application patching
  3. Manage, coordinate, and implement software upgrades, patches, and hotfixes on servers
  4. Create and modify scripts or applications to perform tasks
  5. Provide input on ways to improve the stability, security, efficiency, and scalability of the environment
  6. Ease developers' lives so that they can focus on business logic rather than deployment and maintenance
  7. Manage sprint releases
  8. Educate the team on best practices
  9. Find ways to avoid human error and save time by automating processes using Terraform, CloudFormation, Bitbucket Pipelines, CodeDeploy and scripting
  10. Implement cost-effective measures on the cloud and minimize existing costs

Skills and prerequisites

  1. OOPS knowledge
  2. Problem-solving nature
  3. Willing to do R&D
  4. Works with the team and supports their queries patiently
  5. Brings new things to the table; stays updated
  6. Puts solutions above problems
  7. Willing to learn and experiment
  8. Techie at heart
  9. Git basics
  10. Basic AWS or any cloud platform: creating and managing EC2, Lambdas, IAM, S3, etc.
  11. Basic Linux handling
  12. Docker and orchestration (great to have)
  13. Scripting: Python (preferably)/Bash
Leading IT MNC Company
Agency job
Ahmedabad, Bengaluru (Bangalore), Pune, Noida, Indore, Jaipur, Nagpur, Nashik
4 - 10 yrs
₹1L - ₹15L / yr
Kubernetes
Docker
Amazon Web Services (AWS)
AWS CloudFormation
Linux administration
+6 more

Roles & Responsibilities:

 

  • Design, implement and maintain all AWS infrastructure and services within a managed service environment
  • Be able to work 24x7 shifts to support the infrastructure
  • Design, deploy and maintain enterprise-class security, network and systems management applications within an AWS environment
  • Design and implement availability, scalability, and performance plans for the AWS managed service environment
  • Continually re-evaluate the existing stack and infrastructure to maintain optimal performance, availability and security
  • Manage production deployment and deployment automation
  • Implement process and quality improvements through task automation
  • Institute infrastructure as code, security automation and automation of routine maintenance tasks
  • Experience with containerization and orchestration tools like Docker and Kubernetes
  • Build, deploy and manage Kubernetes clusters through automation
  • Create and deliver knowledge-sharing presentations and documentation for support teams
  • Learn on the job and explore new technologies with little supervision
  • Work effectively with onsite/offshore teams

 

Qualifications:

  • Must have Bachelor's degree in Computer Science or related field and 4+ years of experience in IT
  • Experience in designing, implementing, and maintaining all AWS infrastructure and services
  • Design and implement availability, scalability, and performance plans for the AWS managed service environment
  • Continual re-evaluation of existing stack and infrastructure to maintain optimal performance, availability, and security
  • Hands-on technical expertise in Security Architecture, automation, integration, and deployment
  • Familiarity with compliance & security standards across the enterprise IT landscape
  • Extensive experience with Kubernetes and AWS (IAM, Route53, SSM, S3, EFS, EBS, ELB, Lambda, CloudWatch, CloudTrail, SQS, SNS, RDS, CloudFormation, DynamoDB)
  • Solid understanding of AWS IAM Roles and Policies
  • Solid Linux experience with a focus on web (Apache Tomcat/Nginx)
  • Experience with automation/configuration management using Terraform/Chef/Ansible or similar.
  • Understanding of protocols/technologies like Microservices, HTTP/HTTPS, SSL/TLS, LDAP, JDBC, SQL, HTML
  • Experience in managing and working with the offshore teams
  • Familiarity with CI/CD systems such as Jenkins, GitLab CI
  • Scripting experience (Python, Bash, etc.)
  • AWS, Kubernetes Certification is preferred
  • Ability to work with and influence Engineering teams
Remote, NCR (Delhi | Gurgaon | Noida), Gurgaon
4 - 10 yrs
₹15L - ₹20L / yr
Amazon Web Services (AWS)
Ansible
Automation

We are a team of SysOps/DevOps engineers with one goal in mind: to keep your online business up and running 24/7. Solving problems is our thing, but we believe that dealing with the same issues over and over again is unnecessary and counterproductive. Our objective is to find the ultimate solution for each and every problem! Though each one of us is different, we share the same goal: to make our customers' lives easier and enjoy it at the same time.


Scope & Responsibilities

As a Department Leader, you will be responsible for managing and mentoring the team of Linux systems administrators on a daily basis. You will help build and grow the sysadmin team and lead them to success. As Systems Management Department Leader, you will work both as a people manager and a senior systems administrator.

As manager, you will recruit and retain a highly motivated and professional team of administrators. Together with our CTO and HR team, you're going to define recruitment needs in your team and participate in the recruitment process. You will also work on improving the onboarding process for new employees.

The more technical part of the job involves designing and managing systems, tools, and infrastructure to match our customers' needs and provide them with outstanding, safe and reliable solutions.

This position requires you to be able to devise proactive solutions to project-related issues and inspire trust in teammates. Are you a team player who is ready to work with our team to find solutions, and are you AWS Certified? We await your CV.


Skills & Certifications

- Proven team building and management background (2+ years)

- Strong project, process and people management skills

- Excellent communication skills in English

- Strong background in Linux administration (4+ years' experience)

- Amazon AWS certification preferred (EC2, ECS, EKS, Lambda, IAM, KMS)

- Automation (we use Ansible)

- Infrastructure as code (we use Terraform)

- Knowledge of CI/CD tools and best practices

- Comfort with collaboration across multiple projects

- Ability to lead and coordinate a team of professionals on a daily basis

This position also requires you to be able to participate in on-call schedules.

Radical HealthTech
Posted by Shibjash Dutt
NCR (Delhi | Gurgaon | Noida)
2 - 7 yrs
₹5L - ₹15L / yr
Python
Terraform
Amazon Web Services (AWS)
Linux/Unix
Docker
DevOps Engineer


Radical is a platform connecting data, medicine and people -- through machine learning, and usable, performant products. Software has never been the strong suit of the medical industry -- and we are changing that. We believe that the same sophistication and performance that powers our daily needs through millions of consumer applications -- be it your grocery, your food delivery or your movie tickets -- when applied to healthcare, has a massive potential to transform the industry, and positively impact lives of patients and doctors. Radical works with some of the largest hospitals and public health programmes in India, and has a growing footprint both inside the country and abroad.


As a DevOps Engineer at Radical, you will:

Work closely with all stakeholders in the healthcare ecosystem - patients, doctors, paramedics and administrators - to conceptualise and bring to life the ideal set of products that add value to their time
Work alongside Software Developers and ML Engineers to solve problems and assist in architecture design
Work on systems which have an extraordinary emphasis on capturing data that can help build better workflows, algorithms and tools
Work on high performance systems that deal with several million transactions, multi-modal data and large datasets, with a close attention to detail


We’re looking for someone who has:

Familiarity and experience with writing working, well-documented and well-tested scripts, Dockerfiles, Puppet/Ansible/Chef/Terraform scripts.
Proficiency with scripting languages like Python and Bash.
Knowledge of systems deployment and maintenance, including setting up CI/CD and working alongside Software Developers, monitoring logs, dashboards, etc.
Experience integrating with a wide variety of external tools and services
Experience navigating AWS and leveraging appropriate services and technologies rather than DIY solutions (for example, containerisation or Elastic Beanstalk instead of hosting an application directly on EC2)


It’s not essential, but great if you have:

An established track record of deploying and maintaining systems.
Experience with microservices and decomposition of monolithic architectures
Proficiency in automated tests.
Proficiency with the Linux ecosystem
Experience in deploying systems to production on cloud platforms such as AWS


The position is open now, and we are onboarding immediately.


Please write to us with an updated resume, and one thing you would like us to see as part of your application. This one thing can be anything that you think makes you stand apart among candidates.


Radical is based out of Delhi NCR, India, and we look forward to working with you!


We're looking for people who may not know all the answers, but are obsessive about finding them, and take pride in the code that they write. We are more interested in the ability to learn fast, think rigorously and for people who aren’t afraid to challenge assumptions, and take large bets -- only to work hard and prove themselves correct. You're encouraged to apply even if your experience doesn't precisely match the job description. Join us.

Squareboat Solutions Private Limited
Posted by Ayushi Rathour
NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹3L - ₹4L / yr
DevOps
System Administration
Google Cloud Storage
Linux/Unix
Job Description

Are you passionate about system administration, coding, scripting and process automation?

Responsibilities:
- Deploying, automating, maintaining and managing an AWS cloud-based production system, to ensure the availability, performance, scalability and security of production systems.
- Build, release and configuration management of production systems.
- Pre-production acceptance testing to help assure the quality of our products/services.
- System troubleshooting and problem solving across platform and application domains.
- Suggesting architecture improvements and recommending process improvements.
- Evaluating new technology options and vendor products.
- Ensuring critical system security through the use of best-in-class cloud security solutions.
- Keeping up to date with developer tools, DevOps cloud computing, continuous integration, continuous deployment, blue-green deployment, continuous monitoring, infrastructure automation, continuous delivery, continuous build and continuous testing.

Requirements:
- AWS: 1+ years' experience with a broad range of AWS technologies (EC2, RDS, ELB, EBS, S3, VPC, Glacier, IAM, CloudWatch, Docker, Lambda, etc.) to develop and maintain AWS-based cloud solutions, with a focus on practicing cloud security.
- Solid experience as a DevOps Engineer in a 24x7 uptime Amazon AWS environment, including automation experience with configuration management tools.
- Scripting skills: strong scripting and automation skills.
- Operating systems: Windows and Linux system administration.
- Monitoring tools: experience with system monitoring tools.
- Problem solving: ability to analyze and resolve complex infrastructure resource and application deployment issues.

Skills: System Administration, Linux, DevOps, AWS/EC2/ELB/S3/DynamoDB, Google Cloud Platform, AWS