Perforce Jobs in Bangalore (Bengaluru)


Apply to 11+ Perforce Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Perforce Job opportunities across top companies like Google, Amazon & Adobe.

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
7 - 10 yrs
₹21L - ₹30L / yr
Perforce
DevOps
Git
GitHub
Python
+7 more

JOB DETAILS:

* Job Title: Specialist I - DevOps Engineering

* Industry: Global Digital Transformation Solutions Provider

* Salary: Best in Industry

* Experience: 7-10 years

* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram

 

Job Description

Job Summary:

As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.

The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.

 

Key Responsibilities:

  • Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
  • Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
  • Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
  • Define migration scope — determine how much history to migrate and plan the repository structure.
  • Manage branch renaming and repository organization for optimized post-migration workflows.
  • Collaborate with development teams to determine migration points and finalize migration strategies.
  • Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
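
The large-file step above can be sketched briefly: before migrating, a Perforce workspace can be scanned for files over GitHub's 100 MB limit so they can be routed through Git LFS. A minimal Python sketch, assuming nothing about any specific migration toolchain (the function names and the generated `.gitattributes` patterns are illustrative):

```python
import os

# GitHub rejects pushes containing files over 100 MB; Git LFS is the usual workaround.
GITHUB_LIMIT_BYTES = 100 * 1024 * 1024

def find_large_files(root, limit=GITHUB_LIMIT_BYTES):
    """Walk a checked-out workspace and return relative paths exceeding `limit` bytes."""
    large = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) > limit:
                large.append(os.path.relpath(path, root))
    return sorted(large)

def lfs_attributes(paths):
    """Render .gitattributes lines that route the given paths through Git LFS."""
    return [f"{p} filter=lfs diff=lfs merge=lfs -text" for p in paths]
```

The resulting lines would be appended to `.gitattributes` (followed by `git lfs migrate` or re-adding the files) before pushing to GitHub.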

 

Required Qualifications:

  • Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
  • Hands-on experience with P4-Fusion.
  • Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
  • Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
  • Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
  • Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
  • Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
  • Familiarity with CI/CD pipeline integration to validate workflows post-migration.
  • Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
  • Excellent communication and collaboration skills for cross-team coordination and migration planning.
  • Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.

 

Skills: GitHub, Kubernetes, Perforce, Perforce (Helix Core), DevOps tools

 

Must-Haves

Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)

Infilect
Posted by Indira Ashrit
Bengaluru (Bangalore)
2 - 3 yrs
₹12L - ₹15L / yr
Kubernetes
Docker
CI/CD
Google Cloud Platform (GCP)

Job Description:


Infilect is a GenAI company pioneering the use of Image Recognition in Consumer Packaged Goods retail.


We are looking for a Senior DevOps Engineer to be responsible and accountable for the smooth running of our cloud infrastructure, AI workflows, and AI-based computer systems. The candidate will also supervise the implementation and maintenance of the company's computing needs, including the in-house GPU and AI servers and their AI workloads.



Responsibilities

  • Understand and automate AI-based deployments and AI-based workflows
  • Implement various development, testing, and automation tools and IT infrastructure
  • Manage cloud, computer systems, and other IT assets
  • Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD)
  • Design, develop, implement, and coordinate systems, policies, and procedures for Cloud and on-premise systems
  • Ensure the security of data, network access, and backup systems
  • Act in alignment with user needs and system functionality to contribute to organizational policy
  • Identify problematic areas, perform RCA and implement strategic solutions in time
  • Preserve assets, information security, and control structures
  • Handle monthly/annual cloud budget and ensure cost effectiveness


Requirements and skills

  • Well versed in automation tools such as Docker, Kubernetes, Puppet, Ansible etc.
  • Working knowledge of Python and a SQL database stack, or any full stack with relevant tools.
  • Understanding of agile development, CI/CD, sprints, code reviews, Git, and GitHub/Bitbucket workflows
  • Well versed with the ELK stack or other logging, monitoring, and analysis tools
  • Proven working experience of 2+ years as a DevOps engineer, tech lead, IT manager, or in relevant positions
  • Excellent knowledge of technical management, information analysis, and of computer hardware/software systems
  • Hands-on experience with computer networks, network administration, and network installation
  • Knowledge of ISO/SOC Type II implementation will be a plus
  • BE/B.Tech/ME/M.Tech in Computer Science, IT, Electronics or a similar field


EnterpriseMinds
Posted by phani kalyan
Bengaluru (Bangalore)
7 - 9 yrs
₹10L - ₹35L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+5 more
  • Candidate should have good platform experience on Azure with Terraform.
  • The DevOps engineer needs to help developers create the pipelines and K8s deployment manifests.
  • Good to have experience migrating data from AWS to Azure.
  • Manage/automate infrastructure using Terraform. Jenkins is the key CI/CD tool we use, and it will be used to run these Terraform jobs.
  • VMs are to be provisioned on Azure Cloud and managed.
  • Good hands-on experience of networking on cloud is required.
  • Ability to set up databases on VMs as well as managed DBs; cloud-hosted microservices must be properly configured to communicate with the database services.
  • Kubernetes, Storage, Key Vault, networking (load balancing and routing), and VMs are the key infrastructure expertise essential for this role.
  • Administer a Kubernetes cluster end to end (application deployment, managing namespaces, load balancing, policy setup, blue-green/canary deployment models, etc.), including administration of Azure Kubernetes Service.
  • Experience in AWS is desirable.
  • Python experience is optional; however, PowerShell is mandatory.
  • Know-how on the use of GitHub.
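
The "K8s deployment manifests" responsibility above can be illustrated with a small sketch. Since `kubectl` accepts JSON as well as YAML, a manifest can be generated from plain Python dicts with no extra dependencies; the service name and registry below are hypothetical, not from this listing:

```python
import json

def deployment_manifest(name, image, replicas=2, port=80):
    """Build a minimal Kubernetes apps/v1 Deployment manifest as a plain dict.

    kubectl accepts JSON as well as YAML, so the json.dumps output can be
    applied directly with `kubectl apply -f`.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {
                            "name": name,
                            "image": image,
                            "ports": [{"containerPort": port}],
                        }
                    ]
                },
            },
        },
    }

# Hypothetical service and registry, purely for illustration.
manifest = deployment_manifest("payments-api", "myregistry.azurecr.io/payments-api:1.0")
print(json.dumps(manifest, indent=2))
```

In practice such a generator would run inside the Jenkins pipeline the listing describes, templating per-environment values (replicas, image tag) at deploy time.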
Webtiga Private limited
Bengaluru (Bangalore)
6 - 9 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
ELK

We are looking for a DevOps Lead to join our team.


Responsibilities


• A technology professional who understands software development and can solve IT operational and deployment challenges using software engineering tools and processes. This position requires an understanding of both software development (Dev) and deployment operations (Ops).

• Identify manual processes and automate them using various DevOps automation tools

• Maintain the organization’s growing cloud infrastructure

• Monitor and maintain DevOps environment stability

• Collaborate with distributed Agile teams to define technical requirements and resolve technical design issues

• Orchestrating builds and test setups using Docker and Kubernetes.

• Participate in designing and building Kubernetes, Cloud, and on-prem environments for maximum performance, reliability and scalability

• Share business and technical learnings with the broader engineering and product organization, while adapting approaches for different audiences


Requirements


• Candidates working for this position should possess at least 5 years of work experience as a DevOps Engineer.

• Candidate should have experience in ELK stack, Kubernetes, and Docker.

• Solid experience in the AWS environment.

• Should have experience in monitoring tools like Datadog or New Relic.

• Minimum of 5 years experience with code repository management, code merge and quality checks, continuous integration, and automated deployment & management using tools like Jenkins, SVN, Git, Sonar, and Selenium.

• Candidates must possess ample knowledge and experience in system automation, deployment, and implementation.

• Candidates must possess experience in using Linux, Jenkins, and ample experience in configuring and automating the monitoring tools.

• The candidates should also possess experience in the software development process and in tools and languages like SaaS, Python, Java, MongoDB, shell scripting, PostgreSQL, and Git.

• Candidates should demonstrate knowledge in handling distributed data systems.

Examples: Elasticsearch, Cassandra, Hadoop, and others.

• Should have experience in GitLab CI.

Semperfi Solution
Posted by Ambika Jituri
Bengaluru (Bangalore)
8 - 9 yrs
₹20L - ₹25L / yr
DevOps
CI/CD
Docker
Jenkins
Terraform
+3 more
Google Cloud Devops Engineer (Terraform/CI/CD Pipeline)
Experience: 8 to 10 years; notice period: 0 to 20 days

Job Description :

- Provision GCP resources based on the architecture design and features aligned with business objectives

- Monitor resource availability and usage metrics, and provide guidelines for cost and performance optimization

- Assist IT/business users in resolving GCP service-related issues

- Provide guidelines for cluster automation and migration approaches and techniques, including ingesting, storing, processing, analysing, and exploring/visualising data

- Provision GCP resources for data engineering and data science projects

- Assist with automated data ingestion, data migration, and transformation (good to have)

- Assist with deployment and troubleshooting of applications in Kubernetes

- Establish connections and credibility in addressing business needs via designing and operating cloud-based data solutions

Key Responsibilities / Tasks :

- Building complex CI/CD pipelines for cloud native PaaS services such as Databases, Messaging, Storage, Compute in Google Cloud Platform

- Building deployment pipeline with Github CI (Actions)

- Building terraform codes to deploy infrastructure as a code

- Working with deployment and troubleshooting of Docker, GKE, Openshift, and Cloud Run

- Working with Cloud Build, Cloud Composer, and Dataflow

- Configuring software to be monitored by AppDynamics

- Configuring Stackdriver logging and monitoring in GCP

- Working with Splunk, Kibana, Prometheus, and Grafana to set up dashboards

Your skills, experience, and qualification :

- Total experience of 5+ years in DevOps. Should have at least 4 years of experience in Google Cloud and GitHub CI.

- Should have strong experience in microservices/APIs.

- Should have strong experience in DevOps tools like GitHub CI, TeamCity, Jenkins, and Helm.

- Should know application deployment and testing strategies on Google Cloud Platform.

- Defining and setting development, test, release, update, and support processes for DevOps operation

- Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD)

- Excellent understanding of Java

- Knowledge on Kafka, ZooKeeper, Hazelcast, Pub/Sub is nice to have.

- Understanding of cloud networking, security such as software defined networking/firewalls, virtual networks and load balancers.

- Understanding of cloud identity and access

- Understanding of the compute runtime and the differences between native compute, virtual and containers

- Configuring and managing databases such as Oracle, Cloud SQL, and Cloud Spanner

- Excellent troubleshooting skills

- Working knowledge of various tools, open-source technologies

- Awareness of critical concepts of Agile principles

- Certification in Google professional Cloud DevOps Engineer is desirable.

- Experience with Agile/SCRUM environment.

- Familiar with Agile Team management tools (JIRA, Confluence)

- Understand and promote Agile values: FROCC (Focus, Respect, Openness, Commitment, Courage)

- Good communication skills

- Pro-active team player

- Comfortable working in multi-disciplinary, self-organized teams

- Professional knowledge of English

- Differentiators : knowledge/experience about
Q2ebanking
Posted by Arjun Sadanand
Bengaluru (Bangalore)
4 - 8 yrs
₹16L - ₹30L / yr
Jenkins
Amazon Web Services (AWS)
Ansible

Q2 is seeking a team-focused Lead Release Engineer with a passion for managing releases to ensure we release quality software developed using Agile Scrum methodology.  Working within the Development team, the Release Manager will work in a fast-paced environment with Development, Test Engineering, IT, Product Management, Design, Implementations, Support and other internal teams to drive efficiencies, transparency, quality and predictability in our software delivery pipeline.

 

RESPONSIBILITIES:

  • Provide leadership on cross-functional development focused software release process.
  • Management of the product release cycle to new and existing clients including the build release process and any hotfix releases
  • Support end-to-end process for production issue resolution including impact analysis of the issue, identifying the client impacts, tracking the fix through dev/testing and deploying the fix in various production branches.
  • Work with engineering team to understand impacts of branches and code merges.
  • Identify, communicate, and mitigate release delivery risks.
  • Measure and monitor progress to ensure product features are delivered on time.
  • Lead recurring release reporting/status meetings to include discussion around release scope, risks and challenges.
  • Responsible for planning, monitoring, executing, and implementing the software release strategy.
  • Establish completeness criteria for release of successfully tested software component and their dependencies to gate the delivery of releases to Implementation groups
  • Serve as a liaison between business units to guarantee smooth and timely delivery of software packages to our Implementations and Support teams
  • Create and analyze operational trends and data used for decision making, root cause analysis and performance measurement.
  • Build partnerships, work collaboratively, and communicate effectively to achieve shared objectives.
  • Make improvements to processes to improve the experience and delivery for internal and external customers.
  • Responsible for ensuring that all security, availability, confidentiality and privacy policies and controls are adhered to.

EXPERIENCE AND KNOWLEDGE:

 

  • Bachelor’s degree in Computer Science, or related field or equivalent experience.
  • Minimum 4 years related experience in product release management role.
  • Excellent understanding of software delivery lifecycle.
  • Technical Background with experience in common Scrum and Agile practices preferred.
  • Deep knowledge of software development processes, CI/CD pipelines and Agile Methodology
  • Experience with tools like Jenkins, Bitbucket, Jira and Confluence.
  • Familiarity with enterprise software deployment architecture and methodologies.
  • Proven ability in building effective partnership with diverse groups in multiple locations/environments 
  • Ability to convey technical concepts to business-oriented teams.
  • Capable of assessing and communicating risks and mitigations while managing ambiguity.
  • Experience managing customer and internal expectations while understanding the organizational and customer impact.
  • Strong organizational, process, leadership, and collaboration skills.
  • Strong verbal, written, and interpersonal skills.
FinTech NBFC dedicated to driving Finance sector

Agency job
via Jobdost by Mamatha A
Bengaluru (Bangalore)
2 - 4 yrs
₹8L - ₹10L / yr
CI/CD
Amazon Web Services (AWS)
Kubernetes
Git
YAML
+2 more
Technical Skills:
- Knowledge of infrastructure and cloud (preferably AWS); experience with infrastructure-as-code (preferably Terraform)
- Experienced with one or more scripting languages: YAML, Python, Ruby, Bash, and/or NodeJS
- Experience with web services standards and related technologies
- Experience working with Git or other source control and CI/CD technologies, following Agile development methodology and related Agile practices, with exposure to Agile tools
- Preferred: experience in development associated with Kafka or big data technologies; understand essential Kafka components like ZooKeeper and brokers, and optimization of Kafka client applications (producers and consumers)
- Experience with automation of infrastructure, testing, DB deployment automation, and logging/monitoring/alerting
- AWS services experience with CloudFormation, ECS, Elastic Container Registry, Pipelines, CloudWatch, Glue, and other related services
- AWS Elastic Kubernetes Service (EKS): managing and auto-scaling Kubernetes and containers
- Good knowledge and hands-on experience with various AWS services like EC2, RDS, EKS, S3, Lambda, API, CloudWatch, etc.
- Good and quick with log analysis to perform root cause analysis (RCA) on production deployments and container errors in CloudWatch
- Working on ways to automate and improve deployment and release processes
- High understanding of the serverless architecture concept
- Good with deployment automation tools and investigating to resolve technical issues
- Sound knowledge of APIs, databases, and container-based ETL jobs
- Planning out projects and being involved in project management decisions

Soft Skills:
- Adaptability
- Collaboration with different teams
- Good communication skills
- Team player attitude
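
The log-analysis/RCA skill listed above usually starts with grouping repeated error messages to find the dominant failure. A minimal, illustrative Python sketch; the log format and regex are assumptions for the example, not CloudWatch's actual output format:

```python
import re
from collections import Counter

# Assumed log shape: free text containing an ERROR or FATAL marker
# followed by a message, e.g. "2024-01-01 ERROR: connection refused".
ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL)\b\s*:?\s*(.+)")

def top_errors(log_lines, n=3):
    """Group ERROR/FATAL log lines by message and return the `n` most
    frequent, as a first pass at root-cause analysis on container logs."""
    counts = Counter()
    for line in log_lines:
        m = ERROR_PATTERN.search(line)
        if m:
            counts[m.group(2).strip()] += 1
    return counts.most_common(n)
```

Fed with lines pulled from a log group, the most frequent message is usually the place to start digging.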

Innovatily
Posted by Bhavani P
Bengaluru (Bangalore)
4 - 8 yrs
₹3L - ₹10L / yr
Python
Amazon Web Services (AWS)
AWS CloudFormation
DevOps
Docker
+1 more
- Deep hands-on experience in designing & developing Python based applications

- Hands-on experience building database-backed web applications using Python based frameworks

- Excellent knowledge of Linux and experience developing Python applications that are deployed in Linux environments

- Experience building client-side and server-side API-level integrations in Python

- Experience in containerization and container orchestration systems like Docker, Kubernetes, etc.

- Experience with NoSQL document stores like the Elastic Stack (Elasticsearch, Logstash, Kibana)

- Experience in using and managing Git based version control systems - Azure DevOps, GitHub, Bitbucket etc.

- Experience in using project management tools like Jira, Azure DevOps etc.

- Expertise in Cloud based development and deployment using cloud providers like AWS or Azure
Renowned
Bengaluru (Bangalore)
3 - 7 yrs
₹7L - ₹15L / yr
Data Structures
SQL Azure
ADF
  • Azure Data Factory, Azure Data Bricks, Talend, BODS, Jenkins
  • Microsoft Office (mandatory)
  • Strong knowledge of databases, Azure Synapse, data management, SQL
  • Knowledge of any cloud platform (Azure, AWS, etc.)
  • Advanced Excel
Dunya Labs
Posted by Muralidhar BS
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹20L / yr
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
Agile/Scrum
DevOps Specialist

Dunya Labs is a deep tech product company currently focused on building infrastructure, developer tooling and middleware for deploying scalable blockchain applications. We combine a theoretical research team with a product team to lead cutting-edge developments in the blockchain space. Seeking a DevOps Specialist for designing and supporting our infrastructure environments.

Key Responsibilities:
● Support our continuous integration processes that run on various platforms
● Design entire application environments that can be fully automated or replicated, including network, compute, and data stores
● Develop solutions by working with product managers, scrum masters, architects, developers, and business stakeholders
● Create and enhance continuous integration automation across multiple platforms including Java, Node.js, and Swift
● Create and enhance continuous deployment automation built on Docker and Kubernetes
● Create and enhance dynamic monitoring and alerting solutions using industry-leading services
● Develop automation to ensure security across a geographically dispersed hosting environment
● Leverage new technology paradigms (e.g., serverless, containers, microservices)

Qualifications:
● Bachelor's or Master's degree in Information Systems, Information Technology, Computer Science or Engineering, or equivalent experience
● Good exposure to Agile software development and DevOps practices such as Infrastructure as Code (IaC), continuous integration, and automated deployment
● Expertise with Continuous Integration and Continuous Delivery (CI/CD)
● Architecting, designing and developing applications on PCF
● Designing and building application and serverless technologies
● Experience architecting highly available systems that utilize load balancing, horizontal scalability and high availability
● Strong communication and analytical/problem-solving skills; should be a team player

Experience:
● Managing large production environments in cloud & operational support
● Automating infrastructure deployment, management and monitoring
● Hands-on with DevOps technologies
● Troubleshooting and problem solving in infrastructure issues

Skills and Competencies:
● Linux/Unix administration
● Infrastructure automation with Chef/Puppet/Ansible/Terraform
● Cloud operations & services: AWS/GCP/Azure (AWS preferable)
● Networking: Unix networking and understanding of cloud networking
  ■ AWS VPC, NAT, firewalls, subnets, etc.
  ■ TCP/IP and HTTP(S) protocols
● Source code control: GitHub/GitLab
● CI/CD: Jenkins
● Container technologies: Docker and Kubernetes
● Monitoring tools: Nagios/CloudWatch/Prometheus
● SQL and NoSQL databases: MySQL/PostgreSQL and MongoDB
● Scripting through Python/Ruby/Shell
● Software: Apache Web Server, Tomcat, Nginx, Kafka, ZooKeeper, etc.
● Security: HTTPS/TLS, certificates, digital signatures, VPN, firewalls, AWS Security Groups, IAM, DMZ architecture

Dunya Labs is an equal opportunities employer and welcomes applications from all sections of society and does not discriminate on grounds of race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, or any other basis as protected by applicable law.
Healthifyme
Posted by Sri Harsha Gadde
Bengaluru (Bangalore)
3 - 7 yrs
₹20L - ₹25L / yr
DevOps
Chef
Puppet
About HealthifyMe:

We were founded in 2012 by Tushar Vashisht and Sachin Shenoy, and incubated by Microsoft Accelerator. Today, we happen to be India's largest and most loved health & fitness app with over 4 million users from 220+ cities in India. What makes us unique is our ability to bring together the power of artificial-intelligence-powered technology and human empathy to deliver measurable impact in our customers' lives. We do this through our team of elite nutritionists & trainers working together with the world's first AI-powered virtual nutritionist, "Ria", our proudest creation to date. Ria references data from over 200 million food & workout logs and 14 million conversations to deliver intelligent health & fitness suggestions to our customers. Ria also happens to be multi-lingual; "she" understands English, French, German, Italian & Hindi. Recently Russia's Sistema and Samsung's AI-focussed fund, NEXT, led a USD 12 million Series B funding into our business. We are the most liked app in India across categories, we've been consistently rated the no. 1 health & fitness app on the Play Store for 3 years running, and received Google's "Editor's Choice" award in 2017. Some of the marquee corporates in the country, such as Cognizant, Accenture, Deloitte, and MetLife, have also benefited from our employee engagement and wellness programs. Our global aspirations have taken us to the MENA, SEA, and LATAM regions, with more markets to follow.

Desired Skills & Experience:

Requirements:
- Background in Linux/Unix administration
- Experience with automation/configuration management using Puppet, Chef, or an equivalent
- Ability to use a wide variety of open source tools
- Experience with AWS is a must
- Hands-on experience with at least 3 of RDS, EC2, ELB, EBS, S3, SQS, CodeDeploy, CloudWatch
- Strong experience in managing SQL and MySQL databases
- Hands-on experience managing web servers: Apache, Nginx, Lighttpd, Tomcat (Apache experience is valued)
- A working understanding of code and scripts (PHP, Python, Perl and/or Ruby)
- Knowledge of best practices and IT operations in an always-up, always-available service
- Good to have experience in Python and NodeJS

Stack:
- Python/Django, MySQL, NodeJS + MongoDB
- ElastiCache, DynamoDB, SQS, S3
- Deployed in AWS

We'd love to see:
- Experience in Python and data modeling
- Git and distributed revision control experience
- High-profile work on commercial or open-source projects
- Take ownership in your work, and understand the need for code quality, elegance, and robust infrastructure

Look forward to:
- Working with a world-class team
- Fun & work at the same place with an amazing work culture and flexible timings
- Get ready to transform yourself into a health junkie

Join HealthifyMe and make history!