11+ Fusion Jobs in Pune | Fusion Job openings in Pune
Apply to 11+ Fusion Jobs in Pune on CutShort.io. Explore the latest Fusion Job opportunities across top companies like Google, Amazon & Adobe.
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 or P4-Fusion to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
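The large-file step above usually starts with an inventory pass over the synced workspace before anything is pushed to GitHub. A minimal sketch, assuming a local checkout; the helper name and default threshold are illustrative, and in practice the results would feed into `git lfs track` patterns:

```python
import os

# GitHub rejects individual files larger than 100 MB, so anything over this
# limit must go through Git LFS before migration.
LIMIT_BYTES = 100 * 1024 * 1024

def find_lfs_candidates(root, limit=LIMIT_BYTES):
    """Return (path, size) pairs for files that should be moved to Git LFS,
    largest first."""
    candidates = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # skip broken symlinks and unreadable entries
            if size > limit:
                candidates.append((path, size))
    return sorted(candidates, key=lambda item: item[1], reverse=True)
```

Running this before the migration makes the Git LFS scope explicit and avoids push failures late in the process.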
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 and P4-Fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps Tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
• Strong hands-on experience with AWS services.
• Expertise in Terraform and IaC principles.
• Experience building CI/CD pipelines and working with Git.
• Proficiency with Docker and Kubernetes.
• Solid understanding of Linux administration, networking fundamentals, and IAM.
• Familiarity with monitoring and observability tools (CloudWatch, Prometheus, Grafana, ELK, Datadog).
• Knowledge of security and compliance tools (Trivy, SonarQube, Checkov, Snyk).
• Scripting experience in Bash, Python, or PowerShell.
• Exposure to GCP, Azure, or multi-cloud architectures is a plus.
Job Summary:
We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.
Key Responsibilities:
- Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
- Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
- Develop and manage infrastructure using Terraform or CloudFormation.
- Manage and orchestrate containers using Docker and Kubernetes (EKS).
- Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
- Write robust automation scripts using Python and Shell scripting.
- Monitor system performance and availability, and ensure high uptime and reliability.
- Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
- Maintain clear documentation and provide technical support to stakeholders and clients.
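Robust automation scripts of the kind described above typically wrap flaky steps (deployment API calls, health checks) in retries with backoff. A minimal sketch; the function and parameter names are illustrative, not from the posting:

```python
import time

def retry(step, attempts=3, base_delay=0.01, sleep=time.sleep):
    """Run `step` until it succeeds or `attempts` are exhausted,
    backing off exponentially between tries (1x, 2x, 4x, ...)."""
    for attempt in range(attempts):
        try:
            return step()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            sleep(base_delay * (2 ** attempt))
```

Injecting `sleep` as a parameter keeps the helper unit-testable without real delays.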
Required Skills:
- 4+ years of experience in a DevOps or related role.
- Proven experience in client-facing engagements and communication.
- Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
- Proficiency in Infrastructure as Code using Terraform or CloudFormation.
- Hands-on experience with Docker and Kubernetes (EKS).
- Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub Actions.
- Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
- Proficient in Python and Shell scripting.
Preferred Qualifications:
AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
Experience working in Agile/Scrum environments.
Strong problem-solving and analytical skills.
Roles and Responsibilities:
▪ Data Pipeline Development: Build, deploy, and maintain efficient ETL/ELT pipelines using Azure Data Factory & Azure Synapse Analytics.
▪ We are only looking for senior candidates with over 5 years of relevant experience, including ample client-facing experience.
▪ Finance/Insurance experience is also a must.
▪ Data Modelling & Warehousing: Design and optimize data models, warehouses, and lakes for structured/unstructured data.
▪ SQL & Query Optimization: Write complex SQL queries, optimize performance, and manage databases.
▪ Python Automation: Develop scripts for data processing, automation, and integration using Python (Pandas, NumPy).
Technical Skills:
▪ Cloud Technologies: Azure Synapse Analytics, Azure Fabric, Azure Databricks, and AWS (good to have)
▪ Knowledge of Python, PySpark, SQL, and ETL concepts
▪ Good understanding of Insurance Operations and KPI reporting is an advantage.
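The transform-and-KPI flow described above can be sketched in plain Python; a real pipeline would run Pandas/PySpark against Azure Synapse, as the posting describes. The field names (`lob` for line of business, `amount`) are made up for illustration:

```python
from collections import defaultdict

def transform(raw_rows):
    """Clean raw claim records: drop rows without an amount,
    normalise the line-of-business label, coerce types."""
    cleaned = []
    for row in raw_rows:
        if row.get("amount") in (None, ""):
            continue  # unusable record
        cleaned.append({"lob": row["lob"].strip().lower(),
                        "amount": float(row["amount"])})
    return cleaned

def load_kpis(rows):
    """Aggregate a KPI: total claim amount per line of business."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["lob"]] += row["amount"]
    return dict(totals)
```

The same extract → transform → aggregate shape carries over to Pandas `groupby` or a PySpark job; only the engine changes.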
Job Summary:
Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.
Key Responsibilities:
- Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
- Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
- Work on data migration tasks in AWS environments.
- Monitor and improve database performance; automate key performance indicators and reports.
- Collaborate with cross-functional teams to support data integration and delivery requirements.
- Write shell scripts for automation and manage ETL jobs efficiently.
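Handling tables of 100+ million records, as described above, generally means reading in fixed-size batches rather than loading everything at once. A minimal sketch using sqlite3 as a stdlib stand-in for MySQL; batch size and query are illustrative:

```python
import sqlite3

def iter_batches(conn, query, batch_size=10_000):
    """Yield lists of rows, at most `batch_size` rows each,
    so memory use stays flat regardless of table size."""
    cursor = conn.execute(query)
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:
            break
        yield rows
```

With MySQL, the same `fetchmany` pattern applies through a DB-API driver, often combined with server-side cursors so the result set is streamed rather than buffered.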
Required Skills:
- Strong experience with MySQL, complex SQL queries, and stored procedures.
- Hands-on experience with AWS Glue, PySpark, and ETL processes.
- Good understanding of AWS ecosystem and migration strategies.
- Proficiency in shell scripting.
- Strong communication and collaboration skills.
Nice to Have:
- Working knowledge of Python.
- Experience with AWS RDS.
The candidate must demonstrate a high level of ownership, integrity, and leadership, and be flexible and adaptive with a strong desire to learn and excel.
Required Skills:
- Strong experience working with tools and platforms like Helm charts, CircleCI, Jenkins, and/or Codefresh
- Excellent knowledge of AWS offerings around Cloud and DevOps
- Strong expertise in containerization platforms like Docker and container orchestration platforms like Kubernetes & Rancher
- Should be familiar with leading Infrastructure as Code tools such as Terraform, CloudFormation, etc.
- Strong experience in Python, Shell Scripting, Ansible, and Terraform
- Good command over monitoring tools like Datadog, Zabbix, ELK, Grafana, CloudWatch, Stackdriver, Prometheus, JFrog, Nagios, etc.
- Experience with Linux/Unix systems administration.
We are hiring a #DevOps Engineer for a reputed #MNC.
Job Description:
Total experience: 6+ years
Must have:
Minimum 3-4 years of hands-on experience with #Kubernetes and #Docker
Proficiency in #AWS Cloud
Good to have Kubernetes admin certification
Job Responsibilities:
Responsible for managing Kubernetes cluster
Deploying infrastructure for the project
Build #CICD pipelines
Looking for #Immediate joiners only
Location: Pune
Salary: As per market standards
Mode: Work from office
DevOps Engineer Position - 3+ years
Kubernetes, Helm - 3+ years (development & administration)
Monitoring platform setup experience - Prometheus, Grafana
Azure/AWS/GCP cloud experience - 1+ years
Ansible/Terraform/Puppet - 1+ years
CI/CD - 3+ years
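A Prometheus/Grafana setup like the one listed above scrapes metrics in Prometheus's text exposition format. As a sketch of what that line protocol looks like, here is a minimal renderer; the metric names and labels are invented for the example:

```python
def render_metrics(metrics):
    """Render (name, labels_dict, value) triples in the Prometheus
    text exposition format, e.g. name{key="val"} 42, one per line."""
    lines = []
    for name, labels, value in metrics:
        if labels:
            label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
            lines.append(f"{name}{{{label_str}}} {value}")
        else:
            lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"
```

In practice an exporter library (e.g. `prometheus_client`) produces this output and also emits `# HELP`/`# TYPE` metadata lines; Prometheus scrapes the endpoint and Grafana queries the stored series.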

Hiring for one of the product based org for PAN India
Skills required:
Strong knowledge of and experience with cloud infrastructure (AWS, Azure, or GCP), systems, network design, and cloud migration projects.
Strong knowledge and understanding of CI/CD processes and tools (Jenkins/Azure DevOps) is a must.
Strong knowledge and understanding of Docker & Kubernetes is a must.
Strong knowledge of Python, along with one more language (Shell, Groovy, or Java).
Strong prior experience using automation tools like Ansible, Terraform.
Architect systems, infrastructure & platforms using Cloud Services.
Strong communication skills. Should have demonstrated the ability to collaborate across teams and organizations.
Benefits of working with OpsTree Solutions:
Opportunity to work on the latest cutting-edge tools and technologies in DevOps
Knowledge focused work culture
Collaboration with very enthusiastic DevOps experts
High growth trajectory
Opportunity to work with big shots in the IT industry
Position Summary:
The Technology Lead provides technical leadership with in-depth DevOps experience and is responsible for enabling delivery of high-quality projects to Saviant clients through a highly effective DevOps process. This is a highly technical role, focused on analysing, designing, documenting, and implementing a complete DevOps process for enterprise applications using the most advanced technology stacks, methodologies, and best practices within the agreed timelines.
Individuals in this role will need to have good technical and communication skills and strive to be on the cutting edge, innovate, and explore to deliver quality solutions to Saviant Clients.
Your Role & Responsibilities at Saviant:
• Design, analyze, document, and develop the technical architecture for on-premise as well as cloud-based DevOps solutions around customers’ business problems.
• Lead the end-to-end setup and implementation of configuration management, CI/CD, and monitoring platforms.
• Conduct reviews of the design and implementation of DevOps processes while establishing and maintaining best practices.
• Set up new processes to improve the quality of development, delivery, and deployment.
• Provide technical support and guidance to project team members.
• Upgrade your skills by learning technologies beyond your traditional area of expertise.
• Contribute to pre-sales, proposal creation, POCs, and technology incubation from a technical and architecture perspective.
• Participate in recruitment and people development initiatives.
Job Requirements/Qualifications:
• Educational Qualification: BE, BTech, MTech, or MCA from a reputed institute
• 6 to 8 years of hands-on experience with the DevOps process using technologies like .NET Core, Python, C#, MVC, ReactJS, Android, iOS, Linux, Windows
• Strong hands-on experience with the full DevOps life cycle: orchestration, configuration, security, CI/CD, release management, and environment management
• Solid hands-on knowledge of DevOps technologies and tools such as Jenkins, Spinnaker, Azure DevOps, Chef, Puppet, JIRA, TFS, Git, SVN, and various scripting tools
• Solid hands-on knowledge of containerization technologies and tools such as Docker, Kubernetes, and Cloud Foundry
• In-depth understanding of various development and deployment architectures from a DevOps perspective
• Expertise in ground-up DevOps projects involving multiple agile teams spread across geographies
• Experience with various Agile project management software, techniques, and tools
• Strong analytical and problem-solving skills
• Excellent written and oral communication skills
• Enjoys working as part of agile software teams in a startup environment.
Who Should Apply?
• You have independently managed end-to-end DevOps projects over the last 2 years, including understanding requirements, designing solutions, implementing them, and setting up best practices across different business domains.
• You are well versed in Agile development methodologies and have successfully implemented them across at least 2-3 projects.
• You have led a development team of 5 to 8 developers with technology responsibility.
• You have served as the "Single Point of Contact" for managing technical escalations and decisions.



