11+ Perforce Jobs in India
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 (the Python-based migration tool distributed with Git) or p4-fusion to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
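The 100 MB figure mentioned above is a hard per-file limit GitHub enforces on pushes, so oversized files need to be found before migration and routed through Git LFS. A minimal pre-flight check might look like the following sketch (the helper name and the skipped directory names are illustrative, not part of git-p4 or p4-fusion):

```python
import os

# GitHub rejects individual files larger than 100 MB; such files must be
# moved to Git LFS (or dropped) before the migrated history is pushed.
GITHUB_LIMIT_BYTES = 100 * 1024 * 1024

def find_oversized_files(root, limit=GITHUB_LIMIT_BYTES):
    """Return (relative_path, size_bytes) pairs for every file over `limit`,
    largest first."""
    oversized = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip version-control metadata directories (names are illustrative).
        dirnames[:] = [d for d in dirnames if d not in (".git", ".p4root")]
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size > limit:
                oversized.append((os.path.relpath(path, root), size))
    return sorted(oversized, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for rel_path, size in find_oversized_files("."):
        # Each reported file is a candidate for `git lfs track`.
        print(f"{size / (1024 * 1024):8.1f} MB  {rel_path}")
```

Each file the scan reports would then be tracked with Git LFS (or excluded from the migration scope) before the history is pushed to GitHub.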
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 and p4-fusion: installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
Job Title: DevOps Engineer
Location: Mumbai
Experience: 2–4 Years
Department: Technology
About InCred
InCred is a new-age financial services group leveraging technology and data science to make lending quick, simple, and hassle-free. Our mission is to empower individuals and businesses by providing easy access to financial services while upholding integrity, innovation, and customer-centricity. We operate across personal loans, education loans, SME financing, and wealth management, driving financial inclusion and socio-economic progress.
Role Overview
As a DevOps Engineer, you will play a key role in automating, scaling, and maintaining our cloud infrastructure and CI/CD pipelines. You will collaborate with development, QA, and operations teams to ensure high availability, security, and performance of our systems that power millions of transactions.
Key Responsibilities
- Cloud Infrastructure Management: Deploy, monitor, and optimize infrastructure on AWS (EC2, EKS, S3, VPC, IAM, RDS, Route53) or similar platforms.
- CI/CD Automation: Build and maintain pipelines using tools like Jenkins, GitLab CI, or similar.
- Containerization & Orchestration: Manage Docker and Kubernetes clusters for scalable deployments.
- Infrastructure as Code: Implement and maintain IaC using Terraform or equivalent tools.
- Monitoring & Logging: Set up and manage tools like Prometheus, Grafana, ELK stack for proactive monitoring.
- Security & Compliance: Ensure systems adhere to security best practices and regulatory requirements.
- Performance Optimization: Troubleshoot and optimize system performance, network configurations, and application deployments.
- Collaboration: Work closely with developers and QA teams to streamline release cycles and improve deployment efficiency.
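For the Infrastructure-as-Code responsibility above, a common pipeline step is summarising what a Terraform run will actually change before an apply is approved. As a sketch (assuming the JSON produced by `terraform show -json plan.out`; the function name is illustrative), a small script can count the planned actions:

```python
import json
from collections import Counter

def summarize_plan(plan_json: str) -> Counter:
    """Count planned actions in the JSON from `terraform show -json plan.out`."""
    plan = json.loads(plan_json)
    counts = Counter()
    for rc in plan.get("resource_changes", []):
        actions = rc["change"]["actions"]
        if actions == ["no-op"]:
            continue  # nothing to do for this resource
        # ["delete", "create"] (in either order) means the resource is replaced.
        key = "replace" if {"create", "delete"} <= set(actions) else actions[0]
        counts[key] += 1
    return counts

if __name__ == "__main__":
    # Hypothetical plan output, trimmed to the fields this sketch reads.
    sample = json.dumps({
        "resource_changes": [
            {"address": "aws_instance.web", "change": {"actions": ["create"]}},
            {"address": "aws_s3_bucket.logs", "change": {"actions": ["update"]}},
            {"address": "aws_eip.old", "change": {"actions": ["delete", "create"]}},
            {"address": "aws_vpc.main", "change": {"actions": ["no-op"]}},
        ]
    })
    print(dict(summarize_plan(sample)))
```

A CI job could fail the pipeline (or require manual approval) whenever the `delete` or `replace` counts are non-zero, which is one way to guard production infrastructure behind a review gate.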
Required Skills
- 2–4 years of hands-on experience in DevOps roles.
- Strong knowledge of Linux administration and shell scripting (Bash/Python).
- Experience with AWS services and cloud architecture.
- Proficiency in CI/CD tools (Jenkins, GitLab CI) and version control systems (Git).
- Familiarity with Docker, Kubernetes, and container orchestration.
- Knowledge of Terraform or similar IaC tools.
- Understanding of networking, security, and performance tuning.
- Exposure to monitoring tools (Prometheus, Grafana) and log management.
Preferred Qualifications
- Experience in financial services or fintech environments.
- Knowledge of microservices architecture and enterprise-grade SaaS setups.
- Familiarity with compliance standards in BFSI (Banking & Financial Services Industry).
Why Join InCred?
- Culture: High-performance, ownership-driven, and innovation-focused environment.
- Growth: Opportunities to work on cutting-edge tech and scale systems for millions of users.
- Rewards: Competitive compensation, ESOPs, and performance-based incentives.
- Impact: Be part of a mission-driven organization transforming India’s credit landscape.
Job Description
Role: Sr. DevOps – Architect
Location: Bangalore
Who are we looking for?
A senior-level DevOps consultant with deep DevOps expertise. The individual should be passionate about technology and demonstrate depth and breadth of expertise in similar roles and with enterprise systems and enterprise architecture frameworks.
Technical Skills:
• 8+ years of relevant DevOps /Operations/Development experience working under Agile DevOps culture on large scale distributed systems.
• Experience in building a DevOps platform in integrating DevOps tool chain using REST/SOAP/ESB technologies.
• Hands-on programming skills in developing automation modules using at least one scripting language: Python, Perl, Ruby, or Bash.
• Hands-on experience with public clouds such as AWS, Azure, OpenStack, or Pivotal Cloud Foundry; Azure experience is a must.
• Experience with more than one configuration management tool (Chef, Puppet, Ansible) and in writing your own cookbooks/manifests is required.
• Experience with Docker and Kubernetes.
• Experience in building CI/CD pipelines using any of the continuous integration tools like Jenkins, Bamboo, etc.
• Experience with planning tools like Jira, Rally etc.
• Hands-on experience with continuous integration and build tools (Jenkins, Bamboo, CruiseControl, etc.), version control systems (Git, SVN, GitHub, TFS, etc.), build automation tools (Maven, Gradle, Ant), and dependency management tools (Artifactory, Nexus).
• Experience with more than one deployment automation tool, such as IBM UrbanCode, CA Automic, or XL Deploy.
• Experience setting up and managing DevOps tooling for repositories, monitoring, and log analysis, using tools like New Relic, Splunk, and AppDynamics.
• Understanding of Applications, Networking and Open source tools.
• Experience with the security side of DevOps, i.e. DevSecOps.
• Good to have: an understanding of microservices architecture.
• Experience working with remote/offshore teams is a huge plus.
• Experience in building dashboards based on modern JS technologies like Node.js.
• Experience with NoSQL databases like MongoDB.
• Experience in working with REST APIs
• Experience with tools like NPM, Gulp
Process Skills:
• Ability to perform rapid assessments of clients' internal technology landscape and to target use cases and deployment targets
• Develop and create program blueprints, case studies, and supporting technical documentation for DevOps offerings that can be commercialized and replicated across different business customers
• Compile, deliver, and evangelize roadmaps that guide the evolution of services
• Grasp and communicate big-picture, enterprise-wide issues to the team
• Experience working in an Agile / Scrum / SAFe environment preferred
Behavioral Skills :
• Should have directly worked on creating enterprise-level operating models and architecture options
• Model as-is and to-be architectures based on business requirements
• Good communication & presentation skills
• Self-driven + Disciplined + Organized + Result Oriented + Focused & Passionate about work
• Flexible for short term travel
Primary Duties / Responsibilities:
• Build Automations and modules for DevOps platform
• Build integrations between various DevOps tools
• Interface with other teams to provide support and understand the overall vision of the transformation platform.
• Understand customer deployment scenarios, and continuously improve and update the platform based on agile requirements.
• Preparing HLDs and LLDs.
• Presenting status to leadership and key stakeholders at regular intervals
Qualification:
• At least 12 years of work experience in software development.
• 5+ years industry experience in DevOps architecture related to Continuous Integration/Delivery solutions, Platform Automation including technology consulting experience
• Education qualification: B.Tech, BE, BCA, MCA, M. Tech or equivalent technical degree from a reputed college
Job Description
We are seeking a skilled DevOps Specialist to join our global automotive team. As a DevOps Specialist, you will be responsible for managing operations, system monitoring, troubleshooting, and supporting automation workflows to ensure operational stability and excellence for enterprise IT projects. You will provide support for critical application environments for industry leaders in the automotive sector.
Responsibilities:
Perform daily maintenance tasks covering application availability and response times, with proactive incident tracking via system logs and resource monitoring.
Incident Management: Monitor and respond to tickets raised by the DevOps team or end-users.
Support users with prepared troubleshooting steps. Maintain detailed incident logs, track SLAs, and prepare root cause analysis reports.
Change & Problem Management: Support scheduled changes, releases, and maintenance activities. Assist in identifying and tracking recurring issues.
Documentation & Communication: Maintain process documentation, runbooks, and knowledge base articles. Provide regular updates to stakeholders on incidents and resolutions.
Tool & Platform Support: Manage and troubleshoot CI/CD tools (e.g., Jenkins, GitLab), container platforms (e.g., Docker, Kubernetes), and cloud services (e.g., AWS, Azure).
Requirements:
DevOps skill set: log file analysis/troubleshooting (ELK Stack), Linux administration, monitoring (AppDynamics, Checkmk, Prometheus, Grafana), security (Black Duck, SonarQube, Dependabot, OWASP, or similar)
Experience with Docker.
Familiarity with DevOps principles and ticket tools like ServiceNow.
Experience in handling confidential data and safety sensitive systems
Strong analytical, communication, and organizational abilities. Easy to work with.
Optional: Experience with our relevant business domain (Automotive / Manufacturing industry, especially production management systems). Familiarity with IT process frameworks SCRUM, ITIL.
Skills & Requirements
DevOps, Logfile Analysis, Troubleshooting, ELK Stack, Linux Administration, Monitoring, AppDynamics, Checkmk, Prometheus, Grafana, Security, Black Duck, SonarQube, Dependabot, OWASP, Docker, CI/CD, Jenkins, GitLab, Kubernetes, AWS, Azure, ServiceNow, Incident Management, Change Management, Problem Management, Documentation, Communication, Analytical Skills, Organizational Skills, SCRUM, ITIL, Automotive Industry, Manufacturing Industry, Production Management Systems.
Company Overview
Adia Health revolutionizes clinical decision support by enhancing diagnostic accuracy and personalizing care. It modernizes the diagnostic process by automating optimal lab test selection and interpretation, utilizing a combination of expert medical insights, real-world data, and artificial intelligence. This approach not only streamlines the diagnostic journey but also ensures precise, individualized patient care by integrating comprehensive medical histories and collective platform knowledge.
Position Overview
We are seeking a talented and experienced Site Reliability Engineer/DevOps Engineer to join our dynamic team. The ideal candidate will be responsible for ensuring the reliability, scalability, and performance of our infrastructure and applications. You will collaborate closely with development, operations, and product teams to automate processes, implement best practices, and improve system reliability.
Key Responsibilities
- Design, implement, and maintain highly available and scalable infrastructure solutions using modern DevOps practices.
- Automate deployment, monitoring, and maintenance processes to streamline operations and increase efficiency.
- Monitor system performance and troubleshoot issues, ensuring timely resolution to minimize downtime and impact on users.
- Implement and manage CI/CD pipelines to automate software delivery and ensure code quality.
- Manage and configure cloud-based infrastructure services to optimize performance and cost.
- Collaborate with development teams to design and implement scalable, reliable, and secure applications.
- Implement and maintain monitoring, logging, and alerting solutions to proactively identify and address potential issues.
- Conduct periodic security assessments and implement appropriate measures to ensure the integrity and security of systems and data.
- Continuously evaluate and implement new tools and technologies to improve efficiency, reliability, and scalability.
- Participate in on-call rotation and respond to incidents promptly to ensure system uptime and availability.
Qualifications
- Bachelor's degree in Computer Science, Engineering, or related field
- Proven experience (5+ years) as a Site Reliability Engineer, DevOps Engineer, or similar role
- Strong understanding of cloud computing principles and experience with AWS
- Experience building and supporting complex CI/CD pipelines using GitHub
- Experience building and supporting infrastructure as code using Terraform
- Proficiency in scripting and automation tools
- Solid understanding of networking concepts and protocols
- Understanding of security best practices and experience implementing security controls in cloud environments
- Knowledge of modern security and compliance requirements such as SOC 2, HIPAA, and HITRUST is a solid advantage.
Requirements:
• Previously held a DevOps Engineer or Systems Engineer role
• 4+ years of production Linux system admin experience in high traffic environment
• 1+ years of experience with Amazon AWS and related services (instances, ELB, EBS, S3, etc.) and abstractions on top of AWS.
• Strong understanding of network fundamentals, IP and related services (DNS, VPN, firewalls, etc.), and security concerns.
• Experience in running Docker and Kubernetes clusters in production.
• Love automating mundane tasks and making developers' lives easier
• Must be able to code in, at a minimum, Python (or Ruby) and Bash.
• Non-trivial production experience with SaltStack and/or Puppet, Composer, Jenkins, Git
• Agile software development best practices: continuous integration, releases, branches, etc.
• Experience with modern monitoring tools; capacity planning.
• Some experience with MySQL, PostgreSQL, ElasticSearch, Node.js, and PHP is a plus.
• Self-motivated, fast learner, detail-oriented, team player with a sense of humor
• Experience in managing CI/CD using Jenkins.
BlueOptima’s vision is to become the global reference for the optimisation of the performance of Software Engineers across all industries. We provide industry-leading objective metrics in software development. We enable large organisations to deliver better software, faster and at lower cost, with technology that pushes the limits of what has been done before.
We are a global company which has consistently doubled in headcount and revenue YoY, with no external investment. We are currently located in 4 countries: London (our HQ), Mexico, India and the US. A total of 250+ employees (and increasing every day) from 34 different nationalities, with over 25 languages spoken.
We promote an open-minded environment and encourage our employees to create their own success story in this high-performance environment.
Location: Bangalore
Department: DevOps
Job Summary:
We are looking for skilled and talented engineers to join our Platform team and directly contribute to Continuous Delivery and improve the state of the art in CI/CD and Observability within BlueOptima.
As a Senior DevOps Engineer, you will define and outline CI/CD-related aspects, collaborate with application teams to impart training and enforce CI/CD best practices, and directly implement, maintain, and consult on the observability and monitoring framework that supports the needs of multiple internal stakeholders.
Your team: The Platform team at BlueOptima works across product lines and is responsible for providing a scalable technology platform that the Product teams use to build their applications, improve their performance, and improve the SDLC, for example by improving the application delivery pipeline.
The Platform team is also responsible for driving technology adoption across the product development teams. The team works on components that are common across product lines, such as IAM (Identity & Access Management), auto scaling, APM (Application Performance Monitoring), and CI/CD.
Responsibilities and tasks:
- Define and outline CI/CD and related aspects
- Own and improve the state of the build process to reduce manual intervention
- Own and improve the state of deployment to make it 100% automated
- Define guidelines and standards for the automated testing required for a good CI/CD pipeline, and ensure alignment on an ongoing basis (includes artifact generation, promotions, etc.)
- Automate deployment and rollback in the production environment
- Collaborate with engineering teams, application developers, management, and infrastructure teams to assess near- and long-term monitoring needs and provide them with tooling to improve observability of applications in production
- Keep an eye on emerging observability tools, trends and methodologies, and continuously enhance our existing systems and processes
- Choose the right set of tools for a given problem and apply them across all available applications
- Collaborate with the application teams on the following:
- Define and enforce logging standards
- Define the metrics applications should track, and support application teams in visualising them on Grafana (or similar tools)
- Define alerts for application health monitoring in Production
- Tooling such as APM, E2E, etc.
- Continuously improve the state of the art on the above
- Assist in scheduling and hosting regular tool training sessions to better enable tool adoption and best practices, also making sure training materials are maintained.
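One concrete way to "define and enforce logging standards", as listed above, is to standardise on one JSON object per log line so ELK/Splunk-style aggregators can index fields without regex parsing. The following is a minimal sketch using Python's standard `logging` module (the field names and the `context` convention are assumptions for illustration, not a BlueOptima standard):

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line so aggregators (ELK, Splunk,
    Grafana Loki, etc.) can index fields directly."""

    def format(self, record):
        payload = {
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Attach structured context passed via `extra={"context": {...}}`.
        if hasattr(record, "context"):
            payload["context"] = record.context
        return json.dumps(payload)

def make_logger(name, stream=sys.stdout):
    """Build a logger that writes JSON lines to the given stream."""
    handler = logging.StreamHandler(stream)
    handler.setFormatter(JsonFormatter())
    logger = logging.getLogger(name)
    logger.handlers = [handler]
    logger.propagate = False  # keep records out of the root logger
    logger.setLevel(logging.INFO)
    return logger

if __name__ == "__main__":
    log = make_logger("checkout-service")  # service name is hypothetical
    log.info("order placed", extra={"context": {"order_id": "A-123", "ms": 42}})
```

Standardising the field names (`ts`, `level`, `logger`, `message`, `context`) across services is what makes dashboards and alerts portable between teams.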
Qualifications
What You Need to Succeed at BlueOptima:
- Minimum bachelor's degree in Computer Science or equivalent
- Demonstrable years of experience with implementation, operations, maintenance of IT systems and/or administration of software functions in multi-platform and multi-system environments.
- At least 1 year of experience leading or mentoring a small team.
- Demonstrable experience having developed containerized application components, using docker or similar solutions in previous roles
- Have extensive experience with metrics and logging libraries and aggregators, data analysis and visualization tools.
- Experience in defining, creating, and supporting monitoring dashboards
- 2+ Years of Experience with CI tools and building pipelines using Jenkins.
- 2+ years of experience with monitoring and observability tools and methodologies, covering products such as Grafana, Prometheus, ElasticSearch, Splunk, AppDynamics, Dynatrace, Nagios, Graphite, Datadog, etc.
- Ability to write and read simple scripts using Python / Shell Scripts.
- Familiarity with configuration management tools such as Ansible.
- Ability to work autonomously with minimum supervision
- Demonstrated strong oral and written communication skills
Additional information
Why join our team?
Culture and Growth:
- Global team with a creative, innovative and welcoming mindset.
- Rapid career growth and opportunity to be an outstanding and visible contributor to the company's success.
- Freedom to create your own success story in a high-performance environment.
- Training programs and Personal Development Plans for each employee
Benefits:
- 32 days of holidays - this includes public and religious holidays
- Contributions to your Provident Fund which can be matched by the company above the statutory minimum as agreed
- Private Medical Insurance provided by the company
- Gratuity payments
- Claim Mobile/Internet expenses and Professional Development costs
- Leave Travel Allowance
- Flexible Work from Home policy - 2 days home p/w
- International travel opportunities
- Global annual meet-up (recent meet-ups have been held in Cancun, India, and Thailand, Oct 2022)
- High-quality equipment (ergonomic chairs and 32″ screens)
- Pet friendly offices
- Creche Policy for working parents.
- Paternity and Maternity leave.
Stay connected with us on LinkedIn (https://www.linkedin.com/company/blueoptima) or keep an eye on our career page (https://www.blueoptima.com/careers) for future opportunities!
• Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data, and workload management.
o AWS
Networking: VPC, VPC Peering, Transit Gateway, RouteTables, SecurityGroups, etc.
Data: RDS, DynamoDB, ElasticSearch
Workload: EC2, EKS, Lambda, etc.
o Azure
Networking: VNET, VNET Peering,
Data: Azure MySQL, Azure MSSQL, etc.
Workload: AKS, VirtualMachines, AzureFunctions
o GCP
Networking: VPC, VPC Peering, Firewall, Flowlogs, Routes, Static and External IP Addresses
Data: Cloud Storage, DataFlow, Cloud SQL, Firestore, BigTable, BigQuery
Workload: GKE, Instances, App Engine, Batch, etc.
• Experience in any one of the CI/CD tools (Gitlab/Github/Jenkins) including runner setup, templating and configuration.
• Kubernetes experience (EKS/AKS/GKE) or Ansible experience; basics like pods, deployments, networking, and service mesh. Use of a package manager such as Helm.
• Scripting experience (Bash/Python), automation in pipelines where required, system services.
• Infrastructure automation (Terraform/Pulumi/CloudFormation): writing modules, setting up pipelines, and versioning the code.
Optional
• Experience in any programming language is not required but is appreciated.
• Good experience with Git, SVN, or any other code management tool.
• DevSecops tools like (Qualys/SonarQube/BlackDuck) for security scanning of artifacts, infrastructure and code.
• Observability tools (Opensource: Prometheus, Elasticsearch, OpenTelemetry; Paid: Datadog, 24/7, etc)
DevOps Engineer, Plum & Empuls
As a DevOps Engineer, you are responsible for setting up and maintaining Git repositories and DevOps tools such as Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.
Key Responsibilities
- Set up, configure, and maintain Git repos, Jenkins, UCD, etc. for multi-cloud hosting environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Work on Docker images and maintain Kubernetes clusters.
- Develop and maintain automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes clusters, patching when necessary.
- Work on cloud security tools to keep applications secure.
- Participate in the software development lifecycle, specifically infra design, execution, and the debugging required to achieve successful implementation of integrated solutions within the portfolio.
Required Technical and Professional Expertise:
- Minimum 4-6 years of experience in the IT industry.
- Expertise in implementing and managing DevOps CI/CD pipelines.
- Experience with DevOps automation tools; very well versed in DevOps frameworks and Agile.
- Working knowledge of scripting using Shell or Python, and of Terraform and Ansible, Puppet, or Chef.
- Experience with and a good understanding of at least one cloud: AWS, Azure, or Google Cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient troubleshooting skills with proven ability to resolve complex technical issues.
- Experience working with ticketing tools.
- Knowledge of middleware technologies or databases is desirable.
- Familiarity with the Jira tool is a plus.
We are
A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, San Francisco, and Dublin. Empuls works with over 100 global clients. We help our clients engage and motivate their employees, sales teams, channel partners, or consumers for better business results.
Way forward
We look forward to connecting with you. We will wait a reasonable time of around 3-5 days before screening the collected applications and lining up job discussions with the hiring manager. We will aim to close this requirement within a reasonable window, and candidates will be kept informed and updated on feedback and application status.
Position Level: Senior Engineer
Company Overview:
AskSid.ai is a fast-growing, 4-year-old start-up based in Bangalore, co-founded by two ex-Mindtree employees, each with 20+ years of experience. We were rated the No. 1 emerging SaaS company in India and won the NASSCOM EMERGE 50 League of 10 award in 2019. We were also rated the most innovative AI company in India for 2020 by CII and Accenture Ventures. As a growing company, we are looking for passionate engineers who aspire to build world-class technology products of internet scale.
Job purpose:
Set up, optimize, and maintain Kubernetes clusters on Microsoft Azure Cloud.
Responsibilities
● Set up, maintain, optimize, and secure various Kubernetes clusters on MS Azure Cloud
● Set up and maintain containers, container availability, auto-scaling, storage management, DNS, and proxy setup; maintain firewalls, app gateways, and load balancers on MS Azure Cloud
● Build and manage backup, restore, and DR activities
Knowledge and skills
Education and Experience
- Engineering in computer science
- 3-5 years of experience in setup and management of Kubernetes infrastructure
- Expert-level analytical and problem-solving skills
- Ability to communicate clearly in English
- Microsoft Azure
- Kubernetes, AKS services as well as custom clusters on bare metal infrastructure
- Linux internals & services
- Docker, Docker Registry
- NGINX, Load Balancing, Firewall, Security, PKI
- Shell & Awk Script, Azure Templates & scripting, Python Scripting
- Knowledge of NoSQL Databases
Technical Experience/Knowledge Needed :
- Experience in a cloud-hosted services environment.
- Proven ability to work in a cloud-based environment.
- Ability to manage and maintain cloud infrastructure on AWS.
- Must have strong experience in technologies such as Docker, Kubernetes, serverless functions, etc.
- Knowledge of orchestration tools such as Ansible.
- Experience with the ELK Stack.
- Strong knowledge of microservices, container-based architecture, and the corresponding deployment tools and techniques.
- Hands-on knowledge of implementing multi-staged CI/CD with tools like Jenkins and Git.
- Sound knowledge of tools like Kibana, Kafka, Grafana, Instana, and so on.
- Proficient in Bash scripting.
- Must have in-depth knowledge of clustering, load balancing, high availability and disaster recovery, auto scaling, etc.
- AWS Certified Solutions Architect and/or Linux System Administrator certification
- Strong ability to work independently on complex issues
- Collaborate efficiently with internal experts to resolve customer issues quickly
- No objection to working night shifts, as the production support team operates on a 24x7 basis; rotational shifts will be assigned weekly so candidates get equal opportunity to work day and night shifts. If candidates are willing to work night shifts only on a need basis, discuss with us.
- Early Joining
- Willingness to work in Delhi NCR





