11+ SAP CO Jobs in Hyderabad | SAP CO Job openings in Hyderabad
Apply to 11+ SAP CO Jobs in Hyderabad on CutShort.io. Explore the latest SAP CO Job opportunities across top companies like Google, Amazon & Adobe.
SAP Controlling Analyst
Callaway Golf is a growing collection of brands whose teams blend experience and diverse backgrounds, and our leaders have a strong history of building and selling successful initiatives. We are working to build a truly groundbreaking company, and we want top-notch people to join us in that mission.
JOB OVERVIEW
The Principal Analyst, SAP Controlling will be the functional lead for retail processes and capabilities and be an advocate for both business and technology decisions within our global IT organization.
This position is responsible for the design and management of business system solutions for Callaway Golf. We leverage ECC and SAP S/4. The incumbent will partner with corporate IT, business leaders and stakeholders to identify solutions to business needs, leveraging technology to add value to the company.
Must have superior business relationship skills and functional knowledge, with change management experience. Our company is going through a technology evolution, and this role will be key to many of these initiatives. The person in this role will need to be a strong team leader who can develop their own resources while building strong cross-team collaboration and solutions.
ROLES AND RESPONSIBILITIES
- Provide solutions for business needs in all areas of the company.
- Coordinate with corporate IT, vendors and business users on project deliverables
- Manage the design, development and implementation of global business solutions.
- Design innovative solutions that can deliver value over time.
- Manage the design for a matrix of plants and retail locations and the related inventory movements.
- Provide global visibility, collaboration, and analytics to business leaders.
- Establish, grow, and enhance collaborative relationships with regional business leads.
- Provide value-added input and alternatives and facilitate requirements definition whenever appropriate.
- Manage multiple projects/solutions for multiple business areas.
- Collaborate within the department to ensure alignment and adherence to stated goals.
- Collaborate with business areas to understand needs and develop or introduce solutions to maximize value.
- Leverage matrix approach to sharing of resources in support of business needs.
- Manage internal/external resources as “virtual” project teams.
- Act as an active change agent within the organization challenging the team and business to improve and excel.
TECHNICAL COMPETENCIES (Knowledge, Skills & Abilities)
- Have the necessary level of expertise in the following SAP modules (preferably on S/4):
  - Expert – SAP CO – Product Costing
  - Expert – SAP CO – Profitability Analysis
  - Expert – SAP CO – Cost Center Accounting
  - Expert – SAP CO – Internal Orders
  - Strong – SAP FI – General Ledger
  - Strong – SAP FI – Accounts Receivable
  - Strong – SAP FI – Accounts Payable
  - Strong – SAP CO – Project Systems
  - Strong – SAP EC – Profit Center Accounting
  - Desired but not required – SAP Material Ledger
  - Desired but not required – SAP FSCM
  - Desired but not required – SAP Fixed Assets
  - Desired but not required – SAP EC-CS
- Expert Accounting experience – CMA or CA preferred
- Experience with SAP IS-Retail or IS-Fashion is a plus
- Business process skills to facilitate conversations about best practices and solutions
- Excellent management and leadership skills: people, process and technology.
- Demonstrated achievement in developing and maintaining business partnerships at all levels and with varying cultures around the world.
- Strong interpersonal communication skills; flexibility; responsiveness.
- Participative management style; advocates team-based concepts.
- Diverse background with broad knowledge of solution platforms, custom development, and package implementations, to work on highly complex projects.
Job Title : Kafka Admin
Experience : 5+ Years
Location : Hyderabad / Bangalore / Pune
Work Mode : Hybrid (3 Days Work From Office)
Shift Timings :
- 07:00 AM – 04:00 PM IST
- 02:00 PM – 11:00 PM IST
- 10:30 PM – 07:30 AM IST
Notice Period : Immediate to 15 Days
Job Description :
We are seeking an experienced Kafka Administrator with 8+ years of hands-on experience.
The ideal candidate should have deep expertise in managing Kafka environments, particularly in Confluent Kafka and EEH Kafka, and should be capable of working independently on configuration, integration, and administration.
Key Responsibilities :
- Administer and manage Kafka clusters, brokers, topics, and security configurations
- Design and implement Kafka-based integrations
- Configure, monitor, and troubleshoot Kafka environments
- Collaborate with development and infrastructure teams to ensure high availability and performance
- Work on Confluent Cloud and EEH Kafka environments
Must-Have Skills :
- Kafka Administration
- EEH Kafka
- Confluent Kafka / Confluent Cloud
- Kafka Configuration and Integration
- Strong troubleshooting and monitoring skills
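A large part of day-to-day Kafka administration is sanity-checking topic configurations before they are applied to a cluster. The sketch below is a minimal, broker-free illustration in plain Python; the specific rules (for example, warning when the replication factor is below `min.insync.replicas`) are standard operational guidance, but the rule set itself is an illustrative assumption, not a Confluent API.

```python
def check_topic_config(name, partitions, replication_factor, min_insync_replicas):
    """Return a list of warnings for a proposed Kafka topic configuration."""
    warnings = []
    if partitions < 1:
        warnings.append(f"{name}: partitions must be >= 1")
    if replication_factor < min_insync_replicas:
        # With acks=all, producers fail if fewer than min.insync.replicas are in sync,
        # which can never be satisfied when replication.factor is below that value.
        warnings.append(
            f"{name}: replication.factor ({replication_factor}) is below "
            f"min.insync.replicas ({min_insync_replicas}); producers with acks=all will fail"
        )
    if replication_factor < 3:
        warnings.append(f"{name}: replication.factor < 3 gives weak fault tolerance")
    return warnings
```

In practice a check like this would run as a pre-flight step before creating topics with the admin tooling of choice.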
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
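Identifying files over GitHub's 100 MB limit is one of the concrete pre-migration tasks listed above. A minimal sketch of that scan in plain Python follows; the function names and the idea of emitting `git lfs track` patterns per extension are illustrative assumptions, not part of any official migration tool.

```python
import os

GITHUB_LIMIT = 100 * 1024 * 1024  # GitHub rejects files larger than 100 MB

def find_lfs_candidates(root, limit=GITHUB_LIMIT):
    """Walk a working tree and return (relative path, size) for files over the limit."""
    oversized = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for fname in filenames:
            path = os.path.join(dirpath, fname)
            size = os.path.getsize(path)
            if size > limit:
                oversized.append((os.path.relpath(path, root), size))
    return sorted(oversized)

def lfs_track_commands(candidates):
    """Emit one `git lfs track` command per unique oversized file extension."""
    exts = sorted({os.path.splitext(p)[1] for p, _ in candidates if os.path.splitext(p)[1]})
    return [f'git lfs track "*{e}"' for e in exts]
```

Running the scan before the history conversion lets the team decide whether to track, rewrite, or drop each oversized file.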
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: Github, Kubernetes, Perforce, Perforce (Helix Core), Devops Tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
- Development/Technical support experience in preferably DevOps.
- Looking for an engineer to be part of GitHub Actions support. Experience with CI/CD tools like Bamboo, Harness, Ansible, Salt Scripting.
- Hands-on expertise with GitHub Actions and CI/CD tools like Bamboo, Harness, CI/CD pipeline stages, build tools, SonarQube, Artifactory, NuGet, ProGet, Veracode, LaunchDarkly, GitHub/Bitbucket repos, and monitoring tools.
- Handling xMatters, Techlines, and incidents.
- Strong Scripting skills (PowerShell, Python, Bash/Shell Scripting) for Implementing automation scripts and Tools to streamline administrative tasks and improve efficiency.
- An Atlassian Tools Administrator is responsible for managing and maintaining Atlassian products such as Jira, Confluence, Bitbucket, and Bamboo.
- Expertise in Bitbucket and GitHub for version control and collaboration at a global level.
- Good experience on Linux/Windows systems activities, Databases.
- Aware of SLA and error concepts and their implementations; provide support and participate in incident management and Jira stories. Continuously monitor system performance and availability, and respond to incidents promptly to minimize downtime.
- Well-versed with observability tools such as Splunk for monitoring, alerting and logging solutions to identify and address potential issues, especially in infrastructure.
- Expert at troubleshooting production issues and bugs; identifying and resolving issues in production environments.
- Experience in providing 24x5 support.
- GitHub Actions
- Atlassian Tools (Bamboo, Bitbucket, Jira, Confluence)
- Build Tools (Maven, Gradle, MS Build, NodeJS)
- SonarQube, Veracode.
- Nexus, JFrog, Nuget, Proget
- Harness
- Salt Services, Ansible
- PowerShell, Shell scripting
- Splunk
- Linux, Windows
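The scripting requirement above centers on automating administrative tasks; a common small example is triaging a noisy pipeline run by summarizing its log levels. The sketch below uses a hypothetical `[LEVEL]` log line format purely for illustration; real Bamboo or GitHub Actions logs have their own formats.

```python
import re
from collections import Counter

# Hypothetical log line format, e.g. "2024-05-01 12:00:01 [ERROR] build #123 failed"
LEVEL_RE = re.compile(r"\[(INFO|WARN|ERROR)\]")

def summarize_log(lines):
    """Count log levels so a support engineer can triage a run at a glance."""
    counts = Counter()
    for line in lines:
        m = LEVEL_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return dict(counts)
```

A summary like `{"INFO": 240, "ERROR": 3}` tells the on-call engineer whether to read the whole log or jump straight to the errors.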
Title: Azure Cloud Developer/Engineer
Exp: 5+ yrs
Location: T-Hub, Hyderabad
Work from office (5 days/week)
Interview rounds: 2-3
Excellent communication skills
Immediate Joiner
Job Description
Position Overview:
We are seeking a highly skilled Azure Cloud Developer/Engineer with experience in designing, developing, and managing cloud infrastructure solutions. The ideal candidate should have a strong background in Azure infrastructure deployment using Terraform, Kubernetes (AKS) with advanced networking, and Helm Charts for application management. Experience with AWS is a plus. This role requires hands-on expertise in deploying scalable, secure, and highly available cloud solutions with strong networking capabilities.
Key Responsibilities:
- Deploy and manage Azure infrastructure using Terraform through CI/CD pipelines.
- Design, deploy, and manage Azure Kubernetes Service (AKS) with advanced networking features, including on-premise connectivity.
- Create and manage Helm Charts, ensuring best practices for configuration, templating, and application lifecycle management.
- Collaborate with development, operations, and security teams to ensure optimal cloud infrastructure architecture.
- Implement high-level networking solutions including Azure Private Link, VNET Peering, ExpressRoute, Application Gateway, and Web Application Firewall (WAF).
- Monitor and optimize cloud environments for performance, cost, scalability, and security using tools like Azure Cost Management, Prometheus, Grafana, and Azure Monitor.
- Develop CI/CD pipelines for automated deployments using Azure DevOps, GitHub Actions, or Jenkins, integrating Terraform for infrastructure automation.
- Implement security best practices, including Azure Security Center, Azure Policy, and Zero Trust Architecture.
- Troubleshoot and resolve issues in the cloud environment using Azure Service Health, Log Analytics, and Azure Sentinel.
- Ensure compliance with industry standards (e.g., CIS, NIST, ISO 27001) and organizational security policies.
- Work with Azure Key Vault for secrets and certificate management.
- Explore multi-cloud strategies, integrating AWS services where necessary.
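One concrete prerequisite for the VNET Peering work listed above is that peered virtual networks must have non-overlapping address spaces. A pre-flight check like the following, using only Python's standard `ipaddress` module, catches collisions before Terraform ever runs; the CIDR ranges in the test are made-up examples.

```python
import ipaddress
from itertools import combinations

def find_overlaps(cidrs):
    """Return pairs of CIDR blocks that overlap.

    Azure rejects VNET peering between networks with overlapping address space,
    so any pair returned here must be re-addressed before peering.
    """
    nets = [ipaddress.ip_network(c) for c in cidrs]
    return [
        (str(a), str(b))
        for a, b in combinations(nets, 2)
        if a.overlaps(b)
    ]
```

Wiring this into the CI/CD pipeline as a validation step means an address-space mistake fails fast in plan time rather than at apply time.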
Key Skills Required:
- Azure Cloud Infrastructure Deployment: Expertise in provisioning and managing Azure resources using Terraform within CI/CD pipelines.
- Kubernetes (AKS) with Advanced Networking: Experience in designing AKS clusters with private networking, hybrid connectivity (ExpressRoute, VPN), and security best practices.
- Infrastructure as Code (Terraform, Azure Bicep): Deep understanding of defining and maintaining cloud infrastructure through code.
- Helm Charts: Strong expertise in creating, deploying, and managing Helm-based Kubernetes application deployments.
- Networking & Security: In-depth knowledge of VNET Peering, Private Link, ExpressRoute, Application Gateway, WAF, and hybrid networking.
- CI/CD Pipelines: Experience with building and managing Azure DevOps, GitHub Actions, or Jenkins pipelines for infrastructure and application deployment.
- Monitoring & Logging: Experience with Prometheus, Grafana, Azure Monitor, Log Analytics, and Azure Sentinel.
- Scripting & Automation: Proficiency in Bash, PowerShell, or Python.
- Cost Optimization (FinOps): Strong knowledge of Azure Cost Management and cloud financial governance.
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in cloud engineering, preferably with Azure-focused infrastructure deployment and Kubernetes networking.
- Strong understanding of containerization, orchestration, and microservices architecture.
Certifications (Preferred):
Required:
- Microsoft Certified: Azure Solutions Architect Expert
- Microsoft Certified: Azure DevOps Engineer Expert
Nice to Have (AWS Experience):
- AWS Certified Solutions Architect – Associate or Professional
- AWS Certified DevOps Engineer – Professional
Nice to Have Skills:
- Experience with multi-cloud environments (Azure & AWS).
- Familiarity with container security tools (Aqua Security, Prisma Cloud).
- Experience with GitOps methodologies using tools like ArgoCD or Flux.
- Understanding of serverless computing and event-driven architectures (Azure Functions, Event Grid, Logic Apps).
Benefits
Why Join Us?
- Competitive salary with performance-based incentives.
- Opportunities for professional certifications (e.g., AWS, Kubernetes, Terraform).
- Access to training programs, workshops, and learning resources.
- Comprehensive health insurance coverage for employees and their families.
- Wellness programs and mental health support.
- Hands-on experience with large-scale, innovative cloud solutions.
- Opportunities to work with modern tools and technologies.
- Inclusive, supportive, and team-oriented environment.
- Opportunities to collaborate with global clients and cross-functional teams.
- Regular performance reviews with rewards for outstanding contributions.
- Employee appreciation events and programs.
DevOps & Automation:
- Experience in CI/CD tools like Azure DevOps, YAML, Git, and GitHub. Capable of automating build, test, and deployment processes to streamline application delivery.
- Hands-on experience with Infrastructure as Code (IaC) tools such as Bicep (preferred), Terraform, Ansible, and ARM Templates.
Cloud Services & Architecture:
- Experience in Azure Cloud services, including Web Apps, AKS, Application Gateway, APIM, and Logic Apps.
- Good understanding of cloud design patterns, security best practices, and cost optimization strategies.
Scripting & Automation:
- Experience in developing and maintaining automation scripts using PowerShell to manage, monitor, and support applications.
- Familiar with Azure CLI, REST APIs, and automating workflows using Azure DevOps Pipelines.
Data Integration & ADF:
- Working knowledge or basic hands-on experience with Azure Data Factory (ADF), focusing on developing and managing data pipelines and workflows.
- Knowledge of data integration practices, including ETL/ELT processes and data transformations.
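The ETL/ELT work described above boils down to validating, normalising, and deriving fields as records move through a pipeline. The sketch below shows that transform step in plain Python; the `qty`/`unit_price` field names are illustrative, not from any real ADF dataset schema.

```python
def transform_orders(rows):
    """A toy ETL transform: reject incomplete rows, normalise fields, derive a total.

    Field names here are hypothetical; a real pipeline maps them from its
    source dataset definition.
    """
    out = []
    for row in rows:
        if row.get("qty") is None or row.get("unit_price") is None:
            continue  # a validation activity would route these to an error sink
        out.append({
            "order_id": str(row["order_id"]).strip(),
            "total": round(row["qty"] * row["unit_price"], 2),
        })
    return out
```

In ADF the same logic would typically live in a Data Flow or a mapping activity; expressing it as a pure function first makes it easy to unit-test.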
Application Management & Monitoring:
- Ability to provide comprehensive support for both new and legacy applications.
- Proficient in managing and monitoring application performance using tools like Azure Monitor, Log Analytics, and Application Insights.
- Understanding of application security principles and best practices.
Database Skills:
- Basic experience with SQL and Azure SQL, including database backups, restores, and application data management.
Job Description
- Implement IAM policies and configure VPCs to create a scalable and secure network for the application workloads
- Will be client point of contact for High Priority technical issues and new requirements
- Should act as Tech Lead and guide the junior members of team and mentor them
- Work with client application developers to build, deploy and run both monolithic and microservices based applications on AWS Cloud
- Analyze workload requirements and work with IT stakeholders to define proper sizing for cloud workloads on AWS
- Build, Deploy and Manage production workloads including applications on EC2 instance, APIs on Lambda Functions and more
- Work with IT stakeholders to monitor system performance and proactively improve the environment for scale and security
Qualifications
- At least 5 years of IT experience implementing enterprise applications preferred
- Should be AWS Solution Architect Associate Certified
- Must have at least 3+ years of working as a Cloud Engineer focused on AWS services such as EC2, CloudFront, VPC, CloudWatch, RDS, DynamoDB, Systems Manager, Route53, WAF, API Gateway, Elastic Beanstalk, ECS, ECR, Lambda, SQS, SNS, S3, Elasticsearch, DocumentDB, IAM, etc.
- Must have a strong understanding of EC2 instances, types and deploying applications to the cloud
- Must have a strong understanding of IAM policies, VPC creation, and other security/networking principles
- Must have thorough experience with on-prem to AWS cloud workload migration
- Should be comfortable using AWS and other migration tools
- Should have experience working on AWS performance, cost, and security optimisation
- Should have experience implementing automated patching and hardening of systems
- Should be involved in P1 tickets and also guide team wherever needed
- Creating Backups and Managing Disaster Recovery
- Experience using Infrastructure-as-Code automation with scripts and tools like CloudFormation and Terraform
- Any exposure towards creating CI/CD pipelines on AWS using CodeBuild, CodeDeploy, etc. is an advantage
- Experience with Docker, Bitbucket, ELK and deploying applications on AWS
- Good understanding of Containerisation technologies like Docker, Kubernetes etc.
- Should have experience using and configuring cloud monitoring tools and ITSM ticketing tools
- Good exposure to logging & monitoring tools like Dynatrace, Prometheus, Grafana, ELK/EFK
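The IAM policy work mentioned in the responsibilities is ultimately about producing well-formed, least-privilege policy documents. The sketch below builds one such document as a Python dict; the `Version`/`Statement`/`Effect`/`Action`/`Resource` structure follows the standard AWS IAM policy grammar, while the helper name and the specific action set are illustrative choices.

```python
import json

def s3_read_only_policy(bucket):
    """Build a least-privilege IAM policy granting read access to a single S3 bucket.

    Note the two Resource ARNs: s3:ListBucket applies to the bucket itself,
    while s3:GetObject applies to the objects inside it (the `/*` ARN).
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }
```

Generating policies programmatically like this (and serializing with `json.dumps`) keeps them consistent across environments instead of hand-editing JSON in the console.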
Position Summary
DevOps is a Department of Horizontal Digital, within which we have 3 different practices.
- Cloud Engineering
- Build and Release
- Managed Services
This opportunity is for a Cloud Engineering role for candidates who also have some experience with infrastructure migrations. It is a completely hands-on job focused on migrating client workloads to the cloud, reporting to the Solution Architect/Team Lead; you are also expected to work on different projects building out Sitecore infrastructure from scratch.
We are a Sitecore Platinum Partner, and the majority of the infrastructure work we do is for Sitecore.
Sitecore is a .NET-based, enterprise-level web CMS, which can be deployed on-premises or on IaaS, PaaS and containers.
So most of our DevOps work currently involves planning, architecting and deploying infrastructure for Sitecore.
Key Responsibilities:
- This role includes ownership of technical, commercial and service elements related to cloud migration and infrastructure deployments.
- The person selected for this position will ensure high customer satisfaction while delivering infrastructure and migration projects.
- Candidates must expect to work across multiple projects in parallel, and must have a fully flexible approach to working hours.
- Candidates should keep themselves up to date with the rapid technological advancements and developments taking place in the industry.
- Candidates should also have know-how in Infrastructure as Code, Kubernetes, AKS/EKS, Terraform, Azure DevOps and CI/CD pipelines.
Requirements:
- Bachelor’s degree in computer science or equivalent qualification.
- Total work experience of 6 to 8 Years.
- Total migration experience of 4 to 6 Years.
- Multiple Cloud Background (Azure/AWS/GCP)
- Implementation knowledge of VMs and VNets
- Know-how of Cloud Readiness and Assessment
- Good Understanding of 6 R's of Migration.
- Detailed understanding of the cloud offerings
- Ability to Assess and perform discovery independently for any cloud migration.
- Working Exp. on Containers and Kubernetes.
- Good knowledge of Azure Site Recovery / Azure Migrate / CloudEndure
- Understanding on vSphere and Hyper-V Virtualization.
- Working experience with Active Directory.
- Working experience with AWS Cloud formation/Terraform templates.
- Working Experience of VPN/Express route/peering/Network Security Groups/Route Table/NAT Gateway, etc.
- Experience of working with CI/CD tools like Octopus, Teamcity, Code Build, Code Deploy, Azure DevOps, GitHub action.
- High availability and disaster recovery implementations, taking RTO and RPO requirements into consideration.
- Candidates with AWS/Azure/GCP Certifications will be preferred.
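The "6 R's of migration" named in the requirements (rehost, replatform, repurchase, refactor, retire, retain) are a decision framework for each workload in a portfolio. The sketch below is a deliberately simplified toy classifier to illustrate the framework; real cloud-readiness assessments weigh far more factors than these boolean flags.

```python
def classify_workload(in_use, cloud_blocked, has_saas_replacement,
                      needs_rearchitecture, needs_minor_changes):
    """Map simplified assessment answers to one of the 6 R's of migration.

    The ordering encodes the usual triage: drop unused workloads first,
    keep blocked ones on-prem, prefer buying over building, and only then
    decide how much change a move to the cloud requires.
    """
    if not in_use:
        return "retire"
    if cloud_blocked:
        return "retain"           # e.g. regulatory or latency constraints
    if has_saas_replacement:
        return "repurchase"       # replace with a SaaS offering
    if needs_rearchitecture:
        return "refactor"         # re-architect for cloud-native services
    if needs_minor_changes:
        return "replatform"       # lift-tinker-and-shift
    return "rehost"               # lift-and-shift as-is
```

Running a helper like this over a discovery inventory gives a first-cut migration wave plan that architects then refine by hand.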

Our client is a fast-growing European passenger car company.
We are looking for a System Engineer who can manage requirements and data management in Rational Doors and Siemens Polarion. You will be part of a global development team with resources in China, Sweden and the US.
Responsibilities and tasks
- Import of requirement specifications to DOORS module
- Create module structure according to written specification (e-mail, word, etc)
- Format: reqif, word, excel, pdf, csv
- Make adjustments to data required to be able to import to tool
- Review that the result is readable and possible to work with
- Import of information to new or existing modules in DOORS
- Feedback of Compliance status from an excel compliance matrix to a module in DOORS
- Import requirements from one module to another based on baseline/filter…
- Import lists of items: Test cases, documents, etc in excel or csv to a module
- Provide guidance on format to information holder at client
- Link information/attribute data from one module to others
- Status, test results, comment
- Link requirements according to information from the client in any given format
- Export data and reports
- Assemble report based on data from one or several modules according to filters/baseline/written requests in any given format
- Export statistics from data in DOORS modules
- Create filters in DOORS modules
Note: Polarion activities are the same as the DOORS activities, but the process, results and structure may vary.
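Several of the tasks above amount to taking a CSV export, validating it, and shaping it into records a DOORS module can import. The sketch below shows that preparation step in plain Python; the column names ("ID", "Text", "Status") are illustrative, since a real DOORS module defines its own attribute set.

```python
import csv
import io

def parse_requirements(csv_text):
    """Parse a CSV requirements export into records ready for import.

    Rows without an ID are returned separately so they can be fixed by the
    information holder before import, rather than silently dropped.
    """
    records, rejected = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        if not row.get("ID", "").strip():
            rejected.append(row)
            continue
        records.append({
            "id": row["ID"].strip(),
            "text": row["Text"].strip(),
            "status": row.get("Status", "").strip() or "Draft",
        })
    return records, rejected
```

The same validate-then-import pattern applies to the other listed formats (ReqIF, Word, Excel, PDF); only the parsing front-end changes.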
Requirements – Must list (short, and real must, no order)
- 10+ years of overall experience in the automotive industry
- Requirement management experience in the automotive industry
- 3+ years of experience as a Rational DOORS user
- Knowledge in Siemens Polarion, working knowledge is a plus
- Experience in offshore delivery for more than 7 years
- Able to lead a team of 3 to 5 people and manage temporary additions to team
- Having working knowledge in ASPICE and handling requirements according to ASPICE L2
- Experience in setting up offshore delivery that best fits the expectations of the customer
- Experience in setting up quality processes and ways of working
- Experience in metrics management – propose, capture and share metrics with internal/ external stakeholders
- Good Communication skills in English
Requirements - Good to have list, strictly sorted in falling priority order
- Experience in DevOps framework of delivery
- Interest in learning new languages
- Handling requirements according to ASPICE L3
- Willingness to travel; travel to Sweden may be needed (approx. 1-2 trips per year)
Soft skills
- Candidates must be driving, proactive people, able to work with minimum supervision, and will be asked to give example situations in upcoming interviews.
- Good team player with attention to detail, self-disciplined, able to manage their own time and workload, proactive and motivated.
- Strong sense of responsibility and commitment, innovative thinking.
Below is the Job details:
Role: DevOps Architect
Experience Level: 8-12 Years
Job Location: Hyderabad
Key Responsibilities :
- Look through the various DevOps tools/technologies, identify their strengths, and provide direction to the DevOps automation team
- Bring out-of-the-box thinking to the DevOps automation platform implementation
- Explore various tools and technologies and do POCs on integrating these tools
- Evaluate backend APIs for various DevOps tools
- Perform code reviews, keeping RASUI in context
- Mentor the team on the various E2E integrations
- Act as a liaison in evangelizing the automation solution currently implemented
- Bring in various DevOps best practices/principles and participate in adoption with various app teams
Must have:
- Should possess a Bachelor's/Master's in computer science with a minimum of 8+ years of experience
- Should possess a minimum of 3 years of strong experience in DevOps
- Should possess expertise in using various DevOps tools, libraries and APIs (Jenkins, JIRA, AWX, Nexus, GitHub, BitBucket, SonarQube)
- Should possess expertise in optimizing the DevOps stack (containers, Kubernetes, monitoring)
- 2+ years of experience creating solutions and translating them to the development team
- Should have a strong understanding of OOP and SDLC (Agile, SAFe standards)
- Proficient in Python, with good knowledge of its ecosystem (IDEs and frameworks)
- Proficient in various cloud platforms (Azure/AWS/Google Cloud Platform)
- Proficient in various DevOps offerings (Pivotal/OpenStack/Azure DevOps)
Regards,
Talent acquisition team
Tetrasoft India
Stay home and Stay safe



