
Greetings!
We are looking for an Oracle HCM Functional consultant for one of our premium clients for their Chennai location.
Requirement:
• Provide Oracle HCM Cloud Fusion functional consulting services by acting as subject matter expert and leading clients through the entire cloud application services implementation lifecycle for Oracle HCM Cloud Fusion projects.
• Experience in Core HR, Time and Labor, Talent Management, Recruiting, Payroll, and Absence Management (any 3 modules).
• Identify business requirements and map them to the Oracle HCM Cloud Fusion functionality.
• Identify functionality gaps in Oracle HCM Cloud Fusion, and build extensions for them.
• Advise the client on options, risks, and any impacts on other processes or systems.
• Configure the Oracle HCM Cloud Fusion Applications to meet client requirements and document application set-ups.
• Write business requirement documents for reports, interfaces, data conversions and application extensions for Oracle HCM Cloud Fusion projects.
• Assist client in preparing validation scripts, testing scenarios and developing test scripts for Oracle HCM Cloud Fusion projects.
• Support clients with the execution of test scripts.
• Effectively communicate and drive project deliverables for Oracle HCM Cloud Fusion projects.
• Complete tasks efficiently and in a timely manner.
• Interact with the project team members responsible for developing reports, interfaces, data conversion programs, and application extensions.
• Provide status and issue reports to the project manager/client on a regular basis.
• Share knowledge to continually improve implementation methodology for Oracle HCM Cloud Fusion projects.

Hi,
We are looking for a candidate with experience in DevSecOps.
Please find the JD below for your reference.
Responsibilities:
Execute shell scripts for seamless automation and system management.
Implement infrastructure as code using Terraform for AWS, Kubernetes, Helm, kustomize, and kubectl.
Oversee AWS security groups and VPC configurations, and utilize Aviatrix for efficient network orchestration (a minimal audit sketch in Python follows this list).
Contribute to the OpenTelemetry Collector for enhanced observability.
Implement microsegmentation using AWS native resources and Aviatrix for commercial routes.
Enforce policies through Open Policy Agent (OPA) integration.
Develop and maintain comprehensive runbooks for standard operating procedures.
Utilize packet tracing for network analysis and security optimization.
Apply OWASP tools and practices for robust web application security.
Integrate container vulnerability scanning tools seamlessly within CI/CD pipelines.
Define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
Collaborate with software and platform engineers to infuse security principles into DevOps teams.
Regularly monitor and report project status to the management team.
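For illustration, a minimal sketch of the kind of security-group audit this role involves, assuming Python with boto3 and configured AWS credentials; the region and output format are illustrative only:

import boto3

def find_open_ingress(region="us-east-1"):
    """Flag security groups that allow inbound traffic from anywhere (0.0.0.0/0)."""
    ec2 = boto3.client("ec2", region_name=region)
    offenders = []
    for page in ec2.get_paginator("describe_security_groups").paginate():
        for sg in page["SecurityGroups"]:
            for rule in sg.get("IpPermissions", []):
                if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
                    offenders.append((sg["GroupId"], rule.get("FromPort")))
    return offenders

if __name__ == "__main__":
    for group_id, port in find_open_ingress():
        print(f"{group_id} allows world-open ingress on port {port}")

A real microsegmentation review would also cover IPv6 ranges and egress rules; this only shows the shape of the automation.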
Qualifications:
Proficient in shell scripting and automation.
Strong command of Terraform, AWS, Kubernetes, Helm, kustomize, and kubectl.
Deep understanding of AWS security practices, VPC configurations, and Aviatrix.
Familiarity with OpenTelemetry for observability and OPA for policy enforcement.
Experience in packet tracing for network analysis.
Practical application of OWASP tools and web application security.
Integration of container vulnerability scanning tools within CI/CD pipelines (a CI-gate sketch follows this list).
Proven ability to define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
Collaboration expertise with DevOps teams for security integration.
Regular monitoring and reporting capabilities.
Site Reliability Engineering experience.
Hands-on proficiency with source code management tools, especially Git.
Cloud platform expertise (AWS, Azure, or GCP) with hands-on experience in deploying and managing applications.
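As an example of wiring vulnerability scanning into CI, here is a minimal Python gate, assuming the open-source Trivy scanner is installed on the build agent; the image name is illustrative:

import subprocess
import sys

def scan_image(image: str) -> int:
    """Return Trivy's exit code: non-zero when HIGH/CRITICAL findings exist."""
    result = subprocess.run(
        ["trivy", "image", "--severity", "HIGH,CRITICAL", "--exit-code", "1", image],
        capture_output=True, text=True,
    )
    print(result.stdout)
    return result.returncode

if __name__ == "__main__":
    sys.exit(scan_image(sys.argv[1] if len(sys.argv) > 1 else "alpine:latest"))

Failing the build on the scanner's exit code is what makes the scan a pipeline gate rather than a report.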
Please send across your updated profile.
Job Description:
Acceldata is creating the data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate data pipelines at petabyte scale. Our customers include a Fortune 500 company, one of Asia's largest telecom companies, and a unicorn fintech startup. We are lean, hungry, customer-obsessed, and growing fast. Our Engineering team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.
Roles & responsibilities:
- Champion engineering and operational excellence.
- Establish a solid infrastructure framework and excellent development and deployment processes.
- Provide technical guidance to both your team members and your peers from the development team.
- Work closely with the development teams to gather system requirements, new service proposals, and large system improvements, and come up with infrastructure architecture leading to stable, well-monitored, performant, and secure systems.
- Be part of and help create a positive work environment based on accountability.
- Communicate across functions and drive engineering initiatives.
- Initiate cross-team collaboration with product development teams to develop high-quality, polished products and services.
Must haves:
- 5+ years of professional experience developing and launching software products on the cloud.
- Basic understanding of Java/Go programming.
- Good understanding of container technologies/orchestration platforms (e.g., Docker, Kubernetes).
- Deep understanding of AWS or any major cloud platform.
- Good understanding of data stores like Postgres, Redis, Kafka, and Elasticsearch.
- Good understanding of operating systems.
- Strong technical background with a track record of individual technical accomplishments.
- Ability to handle multiple competing priorities in a fast-paced environment.
- Ability to establish credibility with smart engineers quickly.
- Most importantly, the ability and urge to learn new things.
- B.Tech/M.Tech in Computer Science or a related technical field.
Good to Have:
- Hands-on knowledge of configuration management and deployment tools like Ansible, Terraform, etc.
- Proficient in scripting, Git, and Git workflows.
- Experience in developing Continuous Integration/ Continuous Delivery pipelines
- Knowledge of Big Data systems.
Responsibilities:
- Write and maintain the automation for deployments across various clouds (AWS/Azure/GCP)
- Bring a passion to stay on top of DevOps trends, and experiment with and learn new CI/CD technologies
- Create architecture diagrams and documentation for the various components
- Build tools and automation to improve the system's observability, availability, reliability, performance/latency, monitoring, and emergency response (a minimal probe sketch follows this list)
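As a sketch of the observability automation described above, here is a minimal availability probe in Python, assuming the requests library; the endpoint URL is illustrative only:

import time
import requests

ENDPOINTS = ["https://api.example.com/health"]  # hypothetical service endpoints

def probe(url: str, timeout: float = 5.0) -> dict:
    """Return reachability and latency for one endpoint."""
    start = time.monotonic()
    try:
        resp = requests.get(url, timeout=timeout)
        return {"url": url, "ok": resp.status_code == 200,
                "latency_s": round(time.monotonic() - start, 3)}
    except requests.RequestException as exc:
        return {"url": url, "ok": False, "error": str(exc)}

if __name__ == "__main__":
    for url in ENDPOINTS:
        print(probe(url))

In practice the results would be shipped to a metrics backend (e.g., Prometheus) instead of printed.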
Requirements:
- 3 - 5 years of professional experience as a DevOps / System Engineer.
- Strong knowledge of systems administration and troubleshooting skills with Linux.
- Experience with CI/CD best practices and tooling, preferably Jenkins or CircleCI.
- Hands-on experience with Cloud platforms such as AWS/Azure/GCP or private cloud environments.
- Experience with and understanding of modern container orchestration; well-versed in containerized applications (Docker, Docker Compose, Docker Swarm, Kubernetes).
- Experience in Infrastructure as code development using Terraform.
- Basic networking knowledge (VLANs, subnets, VPCs) and web servers such as Nginx and Apache.
- Experience handling different SQL and NoSQL databases (PostgreSQL, MySQL, Mongo).
- Experience with the Git version control software.
- Proficiency in any programming or scripting language such as Shell Script, Python, Golang.
- Strong interpersonal and communication skills; ability to work in a team environment.
- AWS / Kubernetes Certifications: AWS Certified Solutions Architect / CKA.
- Setup and management of a Kubernetes cluster, including writing Dockerfiles.
- Experience working in and advocating for agile environments.
- Knowledge of microservice architecture.
JOB RESPONSIBILITIES:
- Responsible for design, implementation, and continuous improvement on automated CI/CD infrastructure
- Displays technical leadership and oversight of implementation and deployment planning, system integration, ongoing data validation processes, quality assurance, delivery, operations, and sustainability of technical solutions
- Responsible for designing topology to meet requirements for uptime, availability, scalability, robustness, fault tolerance & security
- Implement proactive measures for automated detection and resolution of recurring operational issues (a minimal self-healing sketch follows this list)
- Lead the operational support team to manage incidents, document root causes, and track preventive measures
- Identify and deploy cybersecurity measures by continuously validating and remediating findings from vulnerability assessments and risk management reviews
- Responsible for the design and development of tools, installation procedures
- Develop and maintain accurate estimates, timelines, project plans, and status reports
- Organize and maintain packaging and deployment of various internal modules and third-party vendor libraries
- Responsible for the employment, timely performance evaluation, counselling, employee development, and discipline of assigned employees.
- Participate in calls and meetings with customers, vendors, and internal teams on a regular basis
- Perform infrastructure cost analysis and optimization
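For illustration, a minimal self-healing sketch for the automated detection and resolution mentioned above, assuming a systemd host; the unit name is hypothetical:

import subprocess

SERVICE = "myapp.service"  # hypothetical unit name

def ensure_running(service: str) -> None:
    """Restart the unit if systemd reports it is not active."""
    check = subprocess.run(["systemctl", "is-active", "--quiet", service])
    if check.returncode != 0:
        print(f"{service} is down; restarting")
        subprocess.run(["systemctl", "restart", service], check=True)

if __name__ == "__main__":
    ensure_running(SERVICE)

Run from a timer or cron job, a check like this turns a recurring manual fix into an automated one; root-cause tracking still belongs in the incident process.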
SKILLS & ABILITIES
Experience: Minimum of 10 years of experience with good technical knowledge regarding build, release, and systems engineering
Technical Skills:
- Experience with DevOps toolchains such as Docker, Rancher, Kubernetes, Bitbucket
- Experience with Apache, Nginx, Tomcat, Prometheus, Grafana
- Ability to learn/use a wide variety of open-source technologies and tools
- Sound understanding of cloud technologies preferably AWS technologies
- Linux, Windows, Scripting, Configuration Management, Build and Release Engineering
- 6 years of experience in DevOps practices, with a good understanding of DevOps and Agile principles
- Good scripting skills (Python/Perl/Ruby/Bash)
- Experience with standard continuous integration tools such as Jenkins/Bitbucket Pipelines
- Work on software configuration management systems (Puppet/Chef/Salt/Ansible)
- Microsoft Office Suite (Word, Excel, PowerPoint, Visio, Outlook) and other business productivity tools
- Working knowledge of HSM and PKI (good to have)
Location:
- Bangalore
Experience:
- 10 + Years.
Job description
The role requires you to design development pipelines from the ground up, create Dockerfiles, and design and operate highly available systems in AWS Cloud environments. It also involves configuration management, web services architectures, DevOps implementation, database management, backups, and monitoring.
Key responsibility area
- Ensure reliable operation of CI/CD pipelines
- Orchestrate the provisioning, load balancing, configuration, monitoring and billing of resources in the cloud environment in a highly automated manner
- Logging, metrics and alerting management.
- Creation of Bash/Python scripts for automation
- Performing root cause analysis for production errors
Requirements:
- Proficient in the Linux command line and troubleshooting.
- Proficient in AWS services: deploying, monitoring, and troubleshooting applications in AWS.
- Hands-on experience with CI tooling, preferably Jenkins.
- Proficient in deployment using Ansible.
- Knowledge of infrastructure management tools (Infrastructure as Code) such as Terraform, AWS CloudFormation, etc.
- Proficient in deploying applications behind load balancers and proxy servers such as Nginx and Apache.
- Scripting languages: Bash, Python, Groovy.
- Experience with logging, monitoring, and alerting tools; ELK (Elasticsearch, Logstash, Kibana), Nagios, Graylog, Splunk, Prometheus, or Grafana is a plus (a minimal log-check sketch follows this list).
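As a sketch of the log-based alerting mentioned in the last item, here is a minimal Python check in the spirit of an ELK/Nagios-style probe; the log path and threshold are illustrative only:

from collections import Counter

LOG_PATH = "/var/log/myapp/app.log"   # hypothetical log file
THRESHOLD = 10                        # alert when ERROR count exceeds this

def count_levels(path: str) -> Counter:
    """Tally log lines by severity keyword."""
    levels = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for level in ("ERROR", "WARN", "INFO"):
                if level in line:
                    levels[level] += 1
                    break
    return levels

if __name__ == "__main__":
    counts = count_levels(LOG_PATH)
    if counts["ERROR"] > THRESHOLD:
        print(f"ALERT: {counts['ERROR']} errors in {LOG_PATH}")  # hook alerting here

A production setup would ship logs to ELK or Graylog and alert from there; this only shows the underlying idea.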
Must have:
Linux, CI/CD (Jenkins), AWS, scripting (Bash/Shell, Python, Go), Nginx, Docker.
Good to have:
Configuration management (Ansible or a similar tool), logging (ELK or similar), monitoring (Nagios or similar), IaC (Terraform, CloudFormation).
DevOps Engineer
Notice Period: 45 days / Immediate Joining
Banyan Data Services (BDS) is a US-based infrastructure services company headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure. We provide data solutions and services on bare metal, on-prem, and all cloud platforms. Our engagement service is built on standard DevOps practice and the SRE model.
We are looking for a DevOps Engineer to help us build functional systems that improve the customer experience. We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer, addressing next-gen data evolution challenges, and who are willing to use their experience in areas directly related to infrastructure services, Software as a Service, and cloud services to create a niche in the market.
Key Qualifications
· 4+ years of experience as a DevOps Engineer with monitoring, troubleshooting, and diagnosing infrastructure systems.
· Experience in implementation of continuous integration and deployment pipelines using Jenkins, JIRA, JFrog, etc
· Strong experience in Linux/Unix administration.
· Experience with automation/configuration management using Puppet, Chef, Ansible, Terraform, or other similar tools.
· Expertise in multiple coding and scripting languages including Shell, Python, and Perl
· Hands-on exposure to modern IT infrastructure (e.g., Docker Swarm, Mesos, Kubernetes, OpenStack)
· Exposure to relational database technologies (MySQL/Postgres/Oracle) or any NoSQL database
· Worked on open-source tools for logging, monitoring, search engines, caching, etc.
· Professional certification in AWS or any other cloud is preferable
· Excellent problem solving and troubleshooting skills
· Must have good written and verbal communication skills
Key Responsibilities
Ambitious individuals who can work under their own direction towards agreed targets/goals.
Must be flexible with office timings to accommodate multinational clients' time zones.
Will be involved in solution design from the conceptual stages through the development cycle and deployment.
Engage in development operations and support internal teams.
Improve infrastructure uptime, performance, resilience, and reliability through automation.
Willing to learn new technologies and work on research-oriented projects.
Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
Scope and deliver solutions with the ability to design solutions independently based on high-level architecture.
Independent thinking and the ability to work in a fast-paced environment with creativity and brainstorming.
www.banyandata.com

Objectives of this Role
Improve reliability, quality, and time-to-market of our suite of software solutions
- Run the production environment by monitoring availability and taking a holistic view of system health
- Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating to continually improve
- Provide primary operational support and engineering for multiple large distributed software applications
- Participate in system design consulting, platform management, and capacity planning
- Languages: Python, Java, Ruby DSL, Bash
- Databases: MySQL, Cassandra, Elasticsearch
- Deployment: AWS CloudFormation (a minimal boto3 sketch follows this list)
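As a sketch of the CloudFormation-based deployment listed above, here is a minimal boto3 driver; the stack name and template file are illustrative only:

import boto3

def deploy_stack(name: str, template_body: str) -> None:
    """Create a CloudFormation stack and block until rollout completes."""
    cfn = boto3.client("cloudformation")
    cfn.create_stack(StackName=name, TemplateBody=template_body)
    cfn.get_waiter("stack_create_complete").wait(StackName=name)
    print(f"stack {name} created")

if __name__ == "__main__":
    with open("template.yaml") as fh:  # hypothetical template file
        deploy_stack("demo-stack", fh.read())

Real pipelines would use change sets and update_stack for existing stacks; creation is shown for brevity.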
Essential Criteria:
- 8 or more years administering production Linux systems in a 24x7 environment
- 3 or more years' experience in a DevOps/SRE role as an engineer or technical lead
- At least 1 year of team leadership experience
- Significant knowledge of Amazon Web Services (CLI/APIs, EC2, EBS, S3, VPCs, IAM, AWS Lambda)
- Experience deploying services into containerized orchestration environments such as Kubernetes
- Experience with infrastructure automation tools like CloudFormation, Terraform, etc.
- Experience with at least one of Python, Bash, Ruby, or equivalent
- Experience creating and managing CI/CD pipelines with tools like Jenkins or Spinnaker
- Familiar with version control using Git
- Solid understanding of common security principles
Nice to Have:
- Preference for hands-on experience with serverless architecture, Kubernetes, and Docker
- Strong experience with open-source configuration management tools
- Managing distributed systems spanning multiple AWS regions / data-centers
- Experience with bootstrapping solutions
- Open source contributor
- We’re committed to client success: There are over 6,200 brand and retail websites in the Bazaarvoice network. Our clients represent some of the world’s leading companies across a wide range of industries including retail, apparel, automotive, consumer electronics and travel.
- We’re leaders in consumer-generated content: Each month, more than one billion consumers view and share authentic consumer-generated content, such as ratings and reviews, curated photos, social posts, and videos, about products in our network. Thousands upon thousands of reviews are added to the Bazaarvoice network every day.
- Our network delivers: Network analytics provide insights that help marketers and advertisers provide more engaging experiences that drive brand awareness, consideration, sales, and loyalty.
- We’re a great place to work: We pride ourselves on our unique culture. Join a company that values passion, innovation, authenticity, generosity, respect, teamwork, and performance.
At Karza Technologies, we take pride in building one of the most comprehensive digital onboarding & due-diligence platforms by profiling millions of entities and trillions of associations amongst them, using data collated from more than 700 publicly available government sources. Primarily in the B2B fintech enterprise space, we are headquartered in Lower Parel, Mumbai, with a 100+ strong workforce. We are truly furthering the cause of Digital India by providing the entire BFSI ecosystem with tech products and services that aid in onboarding customers, automating processes, and mitigating risks seamlessly, in real time and at a fraction of the current cost.
A few recognitions:
- Recognized among LinkedIn's Top 25 startups in India to work with (2019)
- Winner of HDFC Bank's Digital Innovation Summit 2020
- Super Winners (Won every category) at Tecnoviti 2020 by Banking Frontiers
- Winner of Amazon AI Award 2019 for Fintech
- Winner of FinTech Spot Pitches at Fintegrate Zone 2018 held at BSE
- Winner of FinShare 2018 challenge held by ShareKhan
- Only startup in Yes Bank Global Fintech Accelerator to win the account during the Cohort
- 2nd place Citi India FinTech Challenge 2018 by Citibank
- Top 3 in Viacom18's Startup Engagement Programme VStEP
What your average day would look like:
- Deploy and maintain mission-critical information extraction, analysis, and management systems
- Manage low-cost, scalable streaming data pipelines (a minimal consumer sketch follows this list)
- Provide direct and responsive support for urgent production issues
- Contribute ideas towards secure and reliable Cloud architecture
- Use open source technologies and tools to accomplish specific use cases encountered within the project
- Use coding languages or scripting methodologies to solve automation problems
- Collaborate with others on the project to brainstorm about the best way to tackle a complex infrastructure, security, or deployment problem
- Identify processes and practices to streamline development & deployment to minimize downtime and maximize turnaround time
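For illustration, a minimal streaming-pipeline consumer sketch, assuming the kafka-python client; the topic, broker, and group names are hypothetical:

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "entity-updates",                      # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="extraction-workers",
    value_deserializer=lambda v: v.decode("utf-8"),
)

for message in consumer:
    # Downstream extraction/analysis would hang off this loop.
    print(f"partition={message.partition} offset={message.offset}: {message.value}")

Consumer groups give the horizontal scalability the role calls for: adding workers with the same group_id spreads partitions across them.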
What you need to work with us:
- Proficiency in at least one of the general-purpose programming languages like Python, Java, etc.
- Experience in managing IaaS and PaaS components on popular public cloud service providers like AWS, Azure, GCP, etc.
- Proficiency in Unix operating systems and comfort with networking concepts
- Experience with developing/deploying a scalable system
- Experience with the Distributed Database & Message Queues (like Cassandra, ElasticSearch, MongoDB, Kafka, etc.)
- Experience in managing Hadoop clusters
- Understanding of containers, with experience managing them in production using container orchestration services.
- Solid understanding of data structures and algorithms.
- Applied exposure to continuous delivery pipelines (CI/CD).
- Keen interest and proven track record in automation and cost optimization.
Experience:
- 1-4 years of relevant experience
- BE in Computer Science / Information Technology

Requirements:-
- Must have a good understanding of Python and Shell scripting with industry-standard coding conventions
- Must possess good coding and debugging skills
- Experience in the design and development of test frameworks (a minimal pytest-style sketch follows this list)
- Experience in Automation testing
- Good to have experience in Jenkins framework tool
- Good to have exposure to Continuous Integration process
- Experience in Linux and Windows OS
- Desirable to have Build & Release Process knowledge
- Experience in Automating Manual test cases
- Experienced in automating OS/FW-related tasks
- Understanding of BIOS / FW QA is a strong plus
- OpenCV experience is a plus
- Good to have platform exposure
- Must have good Communication skills
- Good leadership and collaboration capabilities, as the individual will have to work with multiple teams, single-handedly maintain the automation framework, and enable the manual validation team
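For illustration, a minimal pytest-style sketch of automating an OS-level manual check, as implied by the framework item above; the command under test is a stand-in for real OS/FW tasks:

import subprocess
import pytest

def run_cmd(args: list[str]) -> str:
    """Run a command and return its stripped stdout."""
    return subprocess.run(args, capture_output=True, text=True,
                          check=True).stdout.strip()

def test_kernel_release_nonempty():
    # Manual step "verify kernel version is reported" becomes an assertion.
    assert run_cmd(["uname", "-r"]) != ""

@pytest.mark.parametrize("flag", ["-r", "-m"])
def test_uname_flags_produce_output(flag):
    assert run_cmd(["uname", flag]) != ""

Collected under a CI tool such as Jenkins, tests like these replace manual validation passes; the same pattern extends to BIOS/FW tooling.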

