11+ Heuristic evaluation Jobs in Chennai | Heuristic evaluation Job openings in Chennai
Apply to 11+ Heuristic evaluation Jobs in Chennai on CutShort.io. Explore the latest Heuristic evaluation Job opportunities across top companies like Google, Amazon & Adobe.
Job Title : Senior SAP PPDS Consultant
Experience : 6+ Years
Location : Open to USI locations (Hyderabad / Bangalore / Mumbai / Pune / Chennai / Gurgaon)
Job Type : Full-Time
Start Date : Immediate Joiners Preferred
Job Description :
We are urgently seeking a Senior SAP PPDS (Production Planning and Detailed Scheduling) Consultant with strong implementation experience.
The ideal candidate will be responsible for leading and supporting end-to-end project delivery for SAP PPDS, contributing to solution design, configuration, testing, and deployment in both Greenfield and Brownfield environments.
Mandatory Skills : SAP PPDS, CIF Integration, Heuristics, Pegging Strategies, Production Scheduling, S/4 HANA or ECC, Greenfield/Brownfield Implementation.
Key Responsibilities :
- Lead the implementation of SAP PPDS modules, including system configuration and integration with SAP ECC/S/4HANA.
- Collaborate with stakeholders to gather requirements and define functional specifications.
- Design, configure, and test SAP PPDS solutions to meet business needs.
- Provide support for system upgrades, patches, and enhancements.
- Participate in workshops, training sessions, and knowledge transfers.
- Troubleshoot and resolve issues during implementation and post-go-live.
- Ensure documentation of functional specifications, configuration, and user manuals.
Required Skills :
- Minimum of 6 years of SAP PPDS experience.
- At least 1-2 Greenfield or Brownfield implementation projects.
- Strong understanding of supply chain planning and production scheduling.
- Hands-on experience in CIF integration, heuristics, optimization, and pegging strategies.
- Excellent communication and client interaction skills.
Preferred Qualifications :
- Experience in S/4 HANA environment.
- SAP PPDS Certification is a plus.
- Experience working in large-scale global projects.
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 (a Python-based tool bundled with Git) or P4-Fusion to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
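The large-file handling described above can be pre-flighted before migration: scan the checked-out workspace for files over GitHub's documented 100 MB per-file push limit so they can be routed to Git LFS first. A minimal sketch (directory names to skip are assumptions, not part of any specific migration tool):

```python
import os

# GitHub rejects pushes containing individual files larger than 100 MB;
# such files must be moved to Git LFS before the migrated history is pushed.
GITHUB_FILE_LIMIT = 100 * 1024 * 1024

def find_lfs_candidates(root):
    """Walk a checked-out workspace and return paths of files over the limit."""
    candidates = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip version-control metadata directories (assumed names).
        dirnames[:] = [d for d in dirnames if d not in (".git", ".p4root")]
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) > GITHUB_FILE_LIMIT:
                candidates.append(path)
    return candidates
```

Files reported by a scan like this would then be tracked with `git lfs track` before the migrated history is pushed.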
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 and P4-Fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps Tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
Job Title : IBM Sterling Integrator Developer
Experience : 3 to 5 Years
Locations : Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, Pune
Employment Type : Full-Time
Job Description :
We are looking for a skilled IBM Sterling Integrator Developer with 3–5 years of experience to join our team across multiple locations.
The ideal candidate should have strong expertise in IBM Sterling and integration, along with scripting and database proficiency.
Key Responsibilities :
- Develop, configure, and maintain IBM Sterling Integrator solutions.
- Design and implement integration solutions using IBM Sterling.
- Collaborate with cross-functional teams to gather requirements and provide solutions.
- Work with custom languages and scripting to enhance and automate integration processes.
- Ensure optimal performance and security of integration systems.
Must-Have Skills :
- Hands-on experience with IBM Sterling Integrator and associated integration tools.
- Proficiency in at least one custom scripting language.
- Strong command over Shell scripting, Python, and SQL (mandatory).
- Good understanding of EDI standards and protocols is a plus.
Interview Process :
- 2 Rounds of Technical Interviews.
Additional Information :
- Open to candidates from Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, and Pune.
Key responsibilities
• Design, build, and maintain robust CI/CD pipelines using Azure DevOps Services (Azure Pipelines) and Git-based workflows.
• Implement and manage infrastructure as code (IaC) using ARM templates, Bicep, and/or Terraform for repeatable environment provisioning.
• Containerize applications (Docker) and manage container orchestration platforms such as AKS (Azure Kubernetes Service).
• Automate build, test, release, and rollback processes; integrate automated testing and quality gates into pipelines.
• Monitor and improve platform reliability and observability using logging and monitoring tools (e.g., Azure Monitor, Application Insights, Prometheus, Grafana).
• Drive platform security and compliance through pipeline controls, secrets management (Key Vault / Vault), and secure configuration practices.
• Implement cost-optimization and governance for Azure resources (tags, policies, budgets).
• Troubleshoot build/release failures, production incidents, and performance bottlenecks; perform root-cause analysis and implement permanent fixes.
• Mentor developers in Git workflows, pipeline authoring, best practices for IaC, and cloud-native design.
• Maintain clear documentation: runbooks, deployment playbooks, architecture diagrams, and pipeline templates.
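The quality gates mentioned in the responsibilities above are, at their core, threshold checks that fail a pipeline stage when a metric regresses. A minimal sketch (the metric names and thresholds are hypothetical and not tied to any particular Azure DevOps task):

```python
# Minimal quality-gate check: compare build metrics against thresholds
# and report violations; a CI stage would fail when any are reported.

# Hypothetical thresholds; real values would come from team policy.
GATES = {
    "test_pass_rate": ("min", 0.95),  # at least 95% of tests pass
    "line_coverage":  ("min", 0.80),  # at least 80% line coverage
    "critical_vulns": ("max", 0),     # zero critical security findings
}

def evaluate_gates(metrics, gates=GATES):
    """Return human-readable violations; an empty list means the gate passed."""
    violations = []
    for name, (kind, bound) in gates.items():
        value = metrics.get(name)
        if value is None:
            violations.append(f"{name}: metric missing")
        elif kind == "min" and value < bound:
            violations.append(f"{name}: {value} below required {bound}")
        elif kind == "max" and value > bound:
            violations.append(f"{name}: {value} above allowed {bound}")
    return violations

# Example: coverage below threshold trips exactly one gate.
print(evaluate_gates({"test_pass_rate": 0.97, "line_coverage": 0.74, "critical_vulns": 0}))
```

In a real pipeline the metrics would be parsed from test and scan reports, and a non-empty violation list would fail the stage.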
Required skills & experience
• 4+ years hands-on experience working with Azure and cloud-native application delivery.
• Deep experience with Azure DevOps (Repos, Pipelines, Artifacts, Boards).
• Strong IaC skills with Terraform, ARM templates, or Bicep.
• Solid experience with CI/CD design and YAML pipeline authoring.
• Practical knowledge of containerization (Docker) and Kubernetes — preferably AKS.
• Scripting skills: PowerShell, Bash, and/or Python for automation.
• Experience with Git workflows (branching strategies, PRs, code reviews).
• Familiarity with configuration management and secrets management (Azure Key Vault, HashiCorp Vault).
• Understanding of networking, identity (Azure AD), and security fundamentals in Azure.
• Strong troubleshooting, debugging, and incident response skills.
• Good collaboration and communication skills; ability to work across teams.
Certification
AZ-400 (Microsoft Certified: DevOps Engineer Expert), AZ-104, AZ-305, or HashiCorp Terraform Associate.
The candidates should have:
· Strong knowledge of Windows and Linux OS
· Experience working with version control systems like Git
· Hands-on experience with tools such as Docker, SonarQube, Ansible, Kubernetes, and ELK
· Basic understanding of SQL commands
· Experience working on Azure Cloud DevOps
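The "basic SQL commands" requirement above can be illustrated with Python's built-in sqlite3 module; the same statements carry over to any SQL backend (the table and column names here are illustrative only):

```python
import sqlite3

# In-memory database; no server or file needed.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: create a table.
cur.execute("CREATE TABLE deployments (id INTEGER PRIMARY KEY, env TEXT, status TEXT)")

# DML: insert and update rows.
cur.executemany(
    "INSERT INTO deployments (env, status) VALUES (?, ?)",
    [("dev", "ok"), ("prod", "failed"), ("prod", "ok")],
)
cur.execute("UPDATE deployments SET status = 'ok' WHERE status = 'failed'")

# Query: filter and aggregate.
cur.execute("SELECT env, COUNT(*) FROM deployments GROUP BY env ORDER BY env")
rows = cur.fetchall()
print(rows)  # [('dev', 1), ('prod', 2)]
conn.close()
```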
Laravel Developer Responsibilities:
- Discussing project aims with the client and development team.
- Designing and building web applications using Laravel.
- Troubleshooting issues in the implementation and debug builds.
- Working with front-end and back-end developers on projects.
- Testing functionality for users and the backend.
- Ensuring that integrations run smoothly.
- Scaling projects based on client feedback.
- Recording and reporting on work done in Laravel.
- Maintaining web-based applications.
- Presenting work in meetings with clients and management.
Laravel Developer Requirements:
- A degree in programming, computer science, or a related field.
- Experience working with PHP, performing unit testing, and managing APIs such as REST.
- A solid understanding of application design using Laravel.
- Knowledge of database design and querying using SQL.
- Proficiency in HTML and JavaScript.
- Practical experience using the MVC architecture.
- A portfolio of applications and programs to your name.
- Problem-solving skills and critical mindset.
- Great communication skills.
- The desire and ability to learn.
Note: We are hiring Tamil Nadu candidates only.

at Altimetrik
Java with cloud
- Core Java, Spring Boot, Microservices
- DB2 or any RDBMS database application development
- Linux OS, shell scripting, batch processing
- Troubleshooting large-scale applications
- Experience in automation and unit test frameworks is a must
- AWS Cloud experience desirable
- Agile development experience
- Complete development cycle (Dev, QA, UAT, Staging)
- Good oral and written communication skills
Job description
The ideal candidate is a self-motivated multi-tasker and a demonstrated team player. You will be a lead developer responsible for developing new software security policies and enhancing security on existing products. You should excel at working with large-scale applications and frameworks and have outstanding communication and leadership skills.
Responsibilities
- Consulting with management on the operational requirements of software solutions.
- Contributing expertise on information system options, risk, and operational impact.
- Mentoring junior software developers in gaining experience and assuming DevOps responsibilities.
- Managing the installation and configuration of solutions.
- Collaborating with developers on software requirements, as well as interpreting test stage data.
- Developing interface simulators and designing automated module deployments.
- Completing code and script updates, as well as resolving product implementation errors.
- Overseeing routine maintenance procedures and performing diagnostic tests.
- Documenting processes and monitoring performance metrics.
- Conforming to best practices in network administration and cybersecurity.
Qualifications
- Minimum of 2 years of hands-on experience in software development and DevOps, specifically managing AWS infrastructure such as EC2, RDS, ElastiCache, S3, IAM, CloudTrail, and other AWS services.
- Experience building multi-region, highly available, auto-scaling infrastructure that optimises performance and cost; planning for future infrastructure as well as maintaining and optimising existing infrastructure.
- Conceptualise, architect and build automated deployment pipelines in a CI/CD environment like Jenkins.
- Conceptualise, architect and build a containerised infrastructure using Docker, Mesosphere or similar SaaS platforms.
- Conceptualise, architect and build a secured network utilising VPCs with inputs from the security team.
- Work with developers and QA to institute a policy of Continuous Integration with automated testing.
- Architect, build, and manage dashboards to provide visibility into delivery and into production application functional and performance status.
- Work with developers to institute systems, policies, and workflows that allow for rollback of deployments.
- Triage the release of applications to the production environment on a daily basis.
- Interface with developers and triage SQL queries that need to be executed in production environments.
- Assist the developers and on calls for other teams with post mortem, follow up and review of issues affecting production availability.
- Minimum 2 years’ experience in Ansible.
- Must have written playbook to automate provisioning of AWS infrastructure as well as automation of routine maintenance tasks.
- Must have had prior experience automating deployments to production and lower environments.
- Experience with APM tools like New Relic and log management tools.
- Our entire platform is hosted on AWS, comprising web applications, web services, RDS, Redis and Elasticsearch clusters, and several other AWS resources such as EC2, S3, CloudFront, Route53, and SNS.
- Essential functions: system architecture, process design, and implementation.
- Minimum of 2 years of scripting experience in Ruby/Python (preferable) and Shell.
- Experience with web application deployment systems and Continuous Integration tools (Ansible).
- Establishing and enforcing network security policy (AWS VPC, Security Groups) and ACLs.
- Establishing and enforcing systems monitoring tools and standards
- Establishing and enforcing Risk Assessment policies and standards
- Establishing and enforcing Escalation policies and standards
The ideal person for the role will:
Possess a keen mind for solving tough problems by partnering effectively with various teams and stakeholders
Be comfortable working in a fast-paced, dynamic, and agile framework
Focus on implementing an end-to-end automated chain
Responsibilities
_____________________________________________________
Strengthen the application and environment security by applying standards and best practices and providing tooling to make development workflows more secure
Identify systems that can benefit from automation, monitoring and infrastructure-as-code and develop and scale products and services accordingly.
Implement sophisticated alerts and escalation mechanisms using automated processes
Help increase production system performance with a focus on high availability and scalability
Continue to keep the lights on (day-to-day administration)
Programmatically create infrastructure in AWS, leveraging Autoscaling Groups, Security Groups, Route53, S3 and IAM with Terraform and Ansible.
Enable our product development team to deliver new code daily through Continuous Integration and Deployment Pipelines.
Create a secure production infrastructure and protect our customer data with continuous security practices and monitoring. Design, develop and scale infrastructure-as-code
Establish SLAs for service uptime, and build the necessary telemetry and alerting platforms to enforce them
Architect and build continuous data pipelines for data lakes, Business Intelligence and AI practices of the company
Remain up to date on industry trends, share knowledge among teams and abide by industry best practices for configuration management and automation.
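The alerting and escalation responsibilities above follow a common pattern: retry a health check with exponential backoff, and escalate through successively wider channels when it keeps failing. A minimal sketch (the tier names, retry counts, and delays are hypothetical):

```python
import time

# Hypothetical escalation tiers, from least to most disruptive.
ESCALATION_TIERS = ["slack-channel", "on-call-engineer", "engineering-manager"]

def check_with_escalation(check, retries=3, base_delay=1.0, notify=print):
    """Run a health check with exponential backoff between attempts;
    escalate one tier per exhausted retry cycle.
    Returns True if the check recovered, False if all tiers were exhausted."""
    for tier in ESCALATION_TIERS:
        for attempt in range(retries):
            if check():
                return True
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
        notify(f"escalating to {tier}: check still failing after {retries} attempts")
    return False
```

In practice `check` would probe a service endpoint and `notify` would post to a paging or chat system; the structure stays the same.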
Qualifications and Background
_______________________________________________________
Graduate degree in Computer Science and Engineering or related technologies
Work or research project experience of 5-7 years, with a minimum of 3 years of experience directly related to the job description
Prior experience working in HIPAA / Hi-Trust frameworks will be given preference
About Witmer Health
_________________________________________________________
We exist to make mental healthcare more accessible, affordable, and effective. At Witmer, we are on a mission to build a research-driven, global mental healthcare company to work on developing novel solutions - by harnessing the power of AI/ML and data science - for a range of mental illnesses like depression, anxiety, OCD, and schizophrenia, among others. Our first foray will be in the space of workspace wellness, where we are building tools to help individual employees and companies improve their mental wellness and raise productivity levels.
Minimum 4 years exp
Skillsets:
- Build automation/CI: Jenkins
- Secure repositories: Artifactory, Nexus
- Build technologies: Maven, Gradle
- Development Languages: Python, Java, C#, Node, Angular, React/Redux
- SCM systems: Git, GitHub, Bitbucket
- Code Quality: Fisheye, Crucible, SonarQube
- Configuration Management: Packer, Ansible, Puppet, Chef
- Deployment: uDeploy, XLDeploy
- Containerization: Kubernetes, Docker, PCF, OpenShift
- Automation frameworks: Selenium, TestNG, Robot
- Work Management: JAMA, Jira
- Strong problem solving skills, Good verbal and written communication skills
- Good knowledge of Linux environment: RedHat etc.
- Good in shell scripting
- Good to have cloud technologies: AWS, GCP, and Azure



