
Greetings!!
We are looking for an Oracle HCM Functional consultant for one of our premium clients for their Chennai location.
Requirement:
• Provide Oracle HCM Cloud Fusion functional consulting services by acting as subject matter expert and leading clients through the entire cloud application services implementation lifecycle for Oracle HCM Cloud Fusion projects.
• Experience in at least three of the following modules: Core HR, Time and Labour, Talent Management, Recruiting, Payroll, and Absence.
• Identify business requirements and map them to the Oracle HCM Cloud Fusion functionality.
• Identify functionality gaps in Oracle HCM Cloud Fusion, and build extensions for them.
• Advise the client on options, risks, and any impacts on other processes or systems.
• Configure the Oracle HCM Cloud Fusion Applications to meet client requirements and document application set-ups.
• Write business requirement documents for reports, interfaces, data conversions and application extensions for Oracle HCM Cloud Fusion projects.
• Assist client in preparing validation scripts, testing scenarios and developing test scripts for Oracle HCM Cloud Fusion projects.
• Support clients with the execution of test scripts.
• Effectively communicate and drive project deliverables for Oracle HCM Cloud Fusion projects.
• Complete tasks efficiently and in a timely manner.
• Interact with the project team members responsible for developing reports, interfaces, data conversion programs, and application extensions.
• Provide status and issue reports to the project manager/client on a regular basis.
• Share knowledge to continually improve implementation methodology for Oracle HCM Cloud Fusion projects.

Similar jobs
JOB DETAILS:
* Job Title: DevOps Engineer (Azure)
* Industry: Technology
* Salary: Best in Industry
* Experience: 2-5 years
* Location: Bengaluru, Koramangala
Review Criteria
- Strong Azure DevOps Engineer Profiles.
- Must have minimum 2+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
- Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
- Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell
- Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database
- Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis
Preferred
- Knowledge of containerisation and orchestration tools such as Docker and Kubernetes.
- Knowledge of Infrastructure as Code and configuration management tools such as Terraform and Ansible.
- Preferred (Education) – BE/BTech / ME/MTech in Computer Science or related discipline
Role & Responsibilities
- Build and maintain Azure DevOps YAML-based CI/CD pipelines for build, test, and deployments.
- Manage Azure DevOps Repos, Pipelines, Boards, and Artifacts.
- Implement Git branching strategies and automate release workflows.
- Develop scripts using Bash, Python, or PowerShell for DevOps automation (a Python sketch follows this list).
- Monitor systems using Grafana, Prometheus, Datadog, or CloudWatch and handle incidents.
- Collaborate with dev and QA teams in an Agile/Scrum environment.
- Maintain documentation, runbooks, and participate in root cause analysis.
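To illustrate the kind of scripting this role involves, here is a minimal Python sketch that lists recent runs of an Azure DevOps pipeline over the REST API. The organization, project, pipeline ID, and AZDO_PAT environment variable are placeholders, and the exact api-version may vary by organization; treat this as a sketch rather than a prescribed implementation.

```python
"""Minimal sketch: list recent runs of an Azure DevOps pipeline via the REST API.

Assumes a personal access token (PAT) with Build (read) scope in the AZDO_PAT
environment variable; the org/project/pipeline values below are placeholders.
"""
import os
import requests

ORG = "my-org"          # placeholder Azure DevOps organization
PROJECT = "my-project"  # placeholder project
PIPELINE_ID = 42        # placeholder pipeline id

url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines/"
       f"{PIPELINE_ID}/runs?api-version=7.0")

# Azure DevOps accepts basic auth with an empty username and the PAT as the password.
resp = requests.get(url, auth=("", os.environ["AZDO_PAT"]), timeout=30)
resp.raise_for_status()

# Print the five most recent runs with their state and result.
for run in resp.json().get("value", [])[:5]:
    print(run["id"], run["state"], run.get("result", "n/a"), run["createdDate"])
```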
Ideal Candidate
- 2–5 years of experience as an Azure DevOps Engineer.
- Strong hands-on experience with Azure DevOps CI/CD (YAML) and Git.
- Experience with Microsoft Azure (OCI/AWS exposure is a plus).
- Working knowledge of SQL Server, PostgreSQL, or Oracle.
- Good scripting, troubleshooting, and communication skills.
- Bonus: Docker, Kubernetes, Terraform, Ansible experience.
- Comfortable with WFO (Koramangala, Bangalore).
We are looking for a DevOps Engineer with hands-on experience in managing production infrastructure using AWS, Kubernetes, and Terraform. The ideal candidate will have exposure to CI/CD tools and queueing systems, along with a strong ability to automate and optimize workflows.
Responsibilities:
* Manage and optimize production infrastructure on AWS, ensuring scalability and reliability.
* Deploy and orchestrate containerized applications using Kubernetes.
* Implement and maintain infrastructure as code (IaC) using Terraform.
* Set up and manage CI/CD pipelines using tools like Jenkins or Chef to streamline deployment processes.
* Troubleshoot and resolve infrastructure issues to ensure high availability and performance.
* Collaborate with cross-functional teams to define technical requirements and deliver solutions.
* Nice-to-have: Manage queueing systems like Amazon SQS, Kafka, or RabbitMQ.
Requirements:
* 2+ years of experience with AWS, including practical exposure to its services in production environments.
* Demonstrated expertise in Kubernetes for container orchestration.
* Proficiency in using Terraform for managing infrastructure as code.
* Exposure to at least one CI/CD tool, such as Jenkins or Chef.
* Nice-to-have: Experience managing queueing systems like SQS, Kafka, or RabbitMQ.
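As a small illustration of the queueing-systems exposure mentioned above, the following Python sketch checks the depth of an SQS queue with boto3 and flags a backlog. The queue name and threshold are placeholders, and AWS credentials are assumed to be configured already (profile, environment variables, or IAM role).

```python
"""Minimal sketch: check the depth of an SQS queue with boto3 and flag a backlog."""
import boto3

QUEUE_NAME = "orders-queue"   # placeholder queue name
BACKLOG_THRESHOLD = 1000      # placeholder alerting threshold

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName=QUEUE_NAME)["QueueUrl"]
attrs = sqs.get_queue_attributes(
    QueueUrl=queue_url,
    AttributeNames=["ApproximateNumberOfMessages"],
)["Attributes"]

depth = int(attrs["ApproximateNumberOfMessages"])
print(f"{QUEUE_NAME}: {depth} messages waiting")
if depth > BACKLOG_THRESHOLD:
    # In a real setup this would page on-call or push a metric/alert instead of printing.
    print("WARNING: backlog above threshold")
```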
NOTE: This is a contract role for a period of 3-6 months.
Responsibilities:
● Set up and maintain CI/CD pipelines across services and environments
● Monitor system health and set up alerts/logs for performance & errors
● Work closely with backend/frontend teams to improve deployment velocity
● Manage cloud environments (staging, production) with cost and reliability in mind
● Ensure secure access, role policies, and audit logging
● Contribute to internal tooling, CLI automation, and dev workflow improvements
Must-Haves:
● 2–3 years of hands-on experience in DevOps, SRE, or Platform Engineering
● Experience with Docker, CI/CD (especially GitHub Actions), and cloud providers (AWS/GCP)
● Proficiency in writing scripts (Bash, Python) for automation (see the sketch after this list)
● Good understanding of system monitoring, logs, and alerting
● Strong debugging skills, ownership mindset, and clear documentation habits
● Infra monitoring tools like Grafana dashboards
● Good understanding of how the web works
● Experience with at least one programming language, such as Java or Python
● Good with Shell scripting
● Experience with *nix-based operating systems
● Experience with Kubernetes (k8s) and containers
● Fairly good understanding of AWS/GCP/Azure
● Troubleshoot and fix outages and performance issues in infrastructure stack
● Identify gaps and design automation tooling for all feasible infrastructure functions
● Good verbal and written communication skills
● Drive the team's SLAs/SLOs
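For the scripting and alerting must-haves above, a minimal Python sketch might look like the following: it polls a service health endpoint and posts to a chat webhook when the service is unhealthy. Both URLs are placeholders; swap in a real endpoint and an incoming-webhook URL (Slack, Teams, etc.) as appropriate.

```python
"""Minimal sketch: poll a health endpoint and post an alert to a webhook when it fails."""
import requests

HEALTH_URL = "https://staging.example.com/healthz"   # placeholder health endpoint
WEBHOOK_URL = "https://hooks.example.com/alerts"     # placeholder incoming webhook

def check_and_alert() -> bool:
    """Return True if the service responded with HTTP 200, alerting otherwise."""
    try:
        resp = requests.get(HEALTH_URL, timeout=5)
        healthy = resp.status_code == 200
    except requests.RequestException:
        healthy = False

    if not healthy:
        # Most chat webhooks accept a simple JSON payload along these lines.
        requests.post(WEBHOOK_URL, json={"text": f"{HEALTH_URL} is unhealthy"}, timeout=5)
    return healthy

if __name__ == "__main__":
    print("healthy" if check_and_alert() else "unhealthy")
```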
Benefits
This is an opportunity to work on a fairly complex set of systems and improve them. You will get a chance to learn things like “how to think about code simplicity”, “how to write for maintainability” and several other things.
● Comprehensive health insurance policy.
● Flexible working hours and a very friendly work environment.
● Flexibility to work either in the office (post Covid) or remotely.
About The Role:
The products and services of Eclat Engineering Pvt. Ltd. are used by some of the leading institutions in India and abroad, and demand for them is growing rapidly. We are looking for a capable and dynamic Senior DevOps Engineer to help set up, maintain, and scale our infrastructure operations. This individual will have the challenging responsibility of running our IT infrastructure and delivering customer services to stringent international standards of service quality, leveraging the latest IT tools to automate and streamline service delivery while implementing industry-standard processes and knowledge management.
Roles & Responsibilities:
- Infrastructure and Deployment Automation: Design, implement, and maintain automation for infrastructure provisioning and application deployment. Own the CI/CD pipelines and ensure they are efficient, reliable, and scalable.
- System Monitoring and Performance: Take ownership of monitoring systems and ensure the health and performance of the infrastructure. Proactively identify and address performance bottlenecks and system issues.
- Cloud Infrastructure Management: Manage cloud infrastructure (e.g., AWS, Azure, GCP) and optimize resource usage. Implement cost-saving measures while maintaining scalability and reliability.
- Configuration Management: Manage configuration management tools (e.g., Ansible, Puppet, Chef) to ensure consistency across environments. Automate configuration changes and updates.
- Security and Compliance: Own security policies, implement best practices, and ensure compliance with industry standards. Lead efforts to secure infrastructure and applications, including patch management and access controls.
- Collaboration with Development and Operations Teams: Foster collaboration between development and operations teams, promoting a DevOps culture. Be the go-to person for resolving cross-functional infrastructure issues and improving the development process.
- Disaster Recovery and Business Continuity: Develop and maintain disaster recovery plans and procedures. Ensure business continuity in the event of system failures or other disruptions.
- Documentation and Knowledge Sharing: Create and maintain comprehensive documentation for configurations, processes, and best practices. Share knowledge and mentor junior team members.
- Technical Leadership and Innovation: Stay up-to-date with industry trends and emerging technologies. Lead efforts to introduce new tools and technologies that enhance DevOps practices.
- Problem Resolution and Troubleshooting: Be responsible for diagnosing and resolving complex issues related to infrastructure and deployments. Implement preventive measures to reduce recurring problems.
Requirements:
● B.E / B.Tech / M.E / M.Tech / MCA / M.Sc.IT (if not, the candidate should be able to demonstrate the required skills)
● Overall 3+ years of experience in DevOps and cloud operations, specifically in AWS.
● Experience with Linux administration
● Experience with microservice architecture, containers, Kubernetes, and Helm is a must
● Experience in Configuration Management preferably Ansible
● Experience in Shell Scripting is a must
● Experience in developing and maintaining CI/CD processes using tools like GitLab and Jenkins
● Experience in logging, monitoring and analytics
● An understanding of writing Infrastructure as Code using tools like Terraform
● Preferences - AWS, Kubernetes, Ansible
Must Have:
● Knowledge of AWS Cloud Platform.
● Good experience with microservice architecture, Kubernetes, helm and container-based technologies
● Hands-on experience with Ansible.
● Should have experience in working and maintaining CI/CD Processes.
● Hands-on experience in version control tools like Git.
● Experience with monitoring tools such as CloudWatch/Sysdig etc. (see the sketch after this list)
● Sound experience in administering Linux servers and Shell Scripting.
● Should have a good understanding of IT security and have the knowledge to secure production environments (OS and server software).
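As a small example of the CloudWatch monitoring experience listed above, here is a minimal boto3 sketch that lists alarms currently in the ALARM state. It assumes AWS credentials and a default region are already configured for the target account; treat it as an illustration, not a prescribed workflow.

```python
"""Minimal sketch: list CloudWatch alarms currently in the ALARM state with boto3."""
import boto3

cloudwatch = boto3.client("cloudwatch")

# Paginate in case the account has more alarms than fit in one API response.
paginator = cloudwatch.get_paginator("describe_alarms")
for page in paginator.paginate(StateValue="ALARM"):
    for alarm in page["MetricAlarms"]:
        print(alarm["AlarmName"], "-", alarm["StateReason"])
```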
MLOps Engineer
Required Candidate profile :
- 3+ years’ experience in developing continuous integration and deployment (CI/CD) pipelines (e.g. Jenkins, GitHub Actions) and bringing ML models into CI/CD pipelines
- Candidate with strong Azure expertise
- Exposure to productionizing models
- Candidate should have complete knowledge of the Azure ecosystem, especially in the area of DE
- Candidate should have prior experience in designing, building, testing, and maintaining machine learning infrastructure to empower data scientists to rapidly iterate on model development
- Develop continuous integration and deployment (CI/CD) pipelines on top of Azure that include Azure ML, MLflow and Azure DevOps (see the sketch after this list)
- Proficient knowledge of Git, Docker and containers, and Kubernetes
- Familiarity with Terraform
- E2E production experience with Azure ML, Azure ML Pipelines
- Experience with the Azure ML extension for Azure DevOps
- Worked on model drift (concept drift and data drift), preferably on Azure ML
- Candidate will be part of a cross-functional team that builds and delivers production-ready data science projects. You will work with team members and stakeholders to creatively identify, design, and implement solutions that reduce operational burden, increase reliability and resiliency, ensure disaster recovery and business continuity, enable CI/CD, optimize ML and AI services, and maintain it all in an infrastructure-as-code, everything-in-version-control manner.
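To illustrate the MLflow piece of the CI/CD requirement above, here is a minimal Python sketch that logs a run's parameters and a metric to an MLflow tracking server. The tracking URI, experiment name, and values are placeholders; in an Azure ML setup the tracking URI would typically point at the workspace's MLflow endpoint instead.

```python
"""Minimal sketch: log a training run to MLflow from a CI/CD job."""
import mlflow

mlflow.set_tracking_uri("http://mlflow.example.com")  # placeholder tracking server
mlflow.set_experiment("churn-model")                   # placeholder experiment name

with mlflow.start_run(run_name="ci-build-123"):        # placeholder run name
    params = {"max_depth": 6, "learning_rate": 0.1}    # example hyperparameters
    mlflow.log_params(params)

    # In a real pipeline this metric would come from evaluating the trained model.
    mlflow.log_metric("validation_auc", 0.91)
```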
- Design, develop, deploy, and run operations of infrastructure services in the Acqueon AWS cloud environment
- Manage uptime of infrastructure and the SaaS application
- Implement application performance monitoring to ensure platform uptime and performance
- Building scripts for operational automation and incident response
- Handle schedule and processes surrounding cloud application deployment
- Define, measure, and meet key operational metrics including performance, incidents and chronic problems, capacity, and availability
- Lead the deployment, monitoring, maintenance, and support of operating systems (Windows, Linux)
- Build out lifecycle processes to mitigate risk and ensure platforms remain current, in accordance with industry standard methodologies
- Run incident resolution within the environment, facilitating teamwork with other departments as required
- Automate the deployment of new software to cloud environment in coordination with DevOps engineers
- Work closely with Presales to understand customer requirements for production deployment
- Lead and mentor a team of operations engineers
- Drive the strategy to evolve and modernize existing tools and processes to enable highly secure and scalable operations
- AWS infrastructure management, provisioning, cost management and planning
- Prepare RCA incident reports for internal and external customers
- Participate in product engineering meetings to ensure product features and patches comply with cloud deployment standards
- Troubleshoot and analyse performance issues and customer-reported incidents, working to restore services within the SLA
- Prepare monthly SLA performance reports
As a Cloud Operations Manager at Acqueon you will need:
- 8 years’ progressive experience managing IT infrastructure and global cloud environments such as AWS, GCP (must)
- 3-5 years management experience leading a Cloud Operations / Site Reliability / Production Engineering team working with globally distributed teams in a fast-paced environment
- 3-5 years’ experience in IaC (Terraform, K8s)
- 3+ years end-to-end incident management experience
- Experience with communicating and presenting to all stakeholders
- Experience with Cloud Security compliance and audits
- Detail-oriented. The ideal candidate is one who naturally digs as deep as they need to understand the why
- Knowledge on GCP will be added advantage
- Manage and monitor customer instances for uptime and reliability
- Staff scheduling and planning to ensure 24x7x365 coverage for cloud operations
- Customer-facing role requiring excellent communication, team management, and troubleshooting skills
Your skills and experience should cover:
- 5+ years of experience with developing, deploying, and debugging solutions on the AWS platform using AWS services such as S3, IAM, Lambda, API Gateway, RDS, Cognito, CloudTrail, CodePipeline, CloudFormation, CloudWatch and WAF (Web Application Firewall).
- Amazon Web Services (AWS) Certified Developer - Associate is required; Amazon Web Services (AWS) Certified DevOps Engineer - Professional is preferred.
- 5+ years of experience using one or more modern programming languages (Python, Node.js).
- Hands-on experience migrating data to the AWS cloud platform.
- Experience with Scrum/Agile methodology.
- Good understanding of core AWS services, uses, and basic AWS architecture best practices (including security and scalability).
- Experience with AWS data storage tools.
- Experience configuring and implementing AWS tools such as CloudWatch, CloudTrail and direct system logs for monitoring (see the sketch after this list).
- Experience working with Git or similar tools.
- Ability to communicate and represent AWS recommendations and standards.
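As a small illustration of the Lambda and monitoring-oriented AWS work described above, here is a minimal Python sketch of a Lambda handler that archives its incoming event to S3 with boto3. The bucket name is a placeholder, and the function's IAM role would need s3:PutObject on that bucket.

```python
"""Minimal sketch: an AWS Lambda handler that writes its event payload to S3."""
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-audit-bucket"   # placeholder bucket name

def lambda_handler(event, context):
    # Use the unique request id so each invocation writes a distinct object.
    key = f"events/{context.aws_request_id}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(event).encode("utf-8"))
    return {"statusCode": 200, "body": json.dumps({"stored": key})}
```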
The following areas are highly advantageous:
- Experience with Docker
- Experience with PostgreSQL database








