Position: Oracle EBS Technical Lead - SCM
Experience: 6-8 Years
Location: Remote
Technical & Professional requirement:
Minimum 6 years of professional Oracle EBS experience with Oracle E-Business Suite release 12.2.x or higher, with
emphasis on the Oracle Order Management and Oracle Advanced Pricing modules and the O2C (Order-to-Cash) cycle.
Must have the ability to support customizations, develop process documents, and share implementation plans and
best-practices advice.
Experience with Web services, Alerts, PLLs, DFFs, and module-related APIs is a must.
Extensive experience with Oracle database and development technologies such as Oracle Forms,
Oracle Reports, the OAF/ADF framework, Workflow, and BI Publisher.
Oracle Database 11g or higher, with strong experience in SQL and PL/SQL.
Exposure to the SOA stack is a plus.
Good knowledge of basic Unix shell scripting is an added advantage.
Strong technical knowledge of database design and architecture, with the ability to design and implement tables, views,
procedures, constraints, and relationships.
Solve complex issues using methodical troubleshooting based on expert knowledge of Oracle
EBS application functionality and technology.
Ability to identify technical risks, present solutions to non-technical personnel, and influence technical decisions.
Design, development, testing, migration, documentation.
Adhere to and follow the Organization and Client processes.
Support milestone events, defect resolutions, status updates, etc.
Coordinate and participate in interactions with functional counterparts, users, and the infrastructure team on
issues/configurations.
To qualify for the role, you must have
A minimum of 6 years of experience as an EBS Technical Consultant.
Oracle EBS R12 technical expertise: sound knowledge of EBS, PL/SQL, and Oracle Application Framework. Good understanding
of business flows – Oracle Order Management, Oracle Advanced Pricing, and the O2C cycle.
Bachelor's degree in any engineering discipline.
Basic:
Excellent ability to convince multiple stakeholders - internal and external
Good communication skills
Good presentation skills
Experience throughout the software development life cycle
Familiarity with Agile methodologies
Ability to interface directly with client

Similar jobs
Springer Capital is a cross-border asset management firm focused on real estate investment banking in China and the USA. We are offering a remote internship for individuals passionate about automation, cloud infrastructure, and CI/CD pipelines. Start and end dates are flexible, and applicants may be asked to complete a short technical quiz or assignment as part of the application process.
Responsibilities:
▪ Assist in building and maintaining CI/CD pipelines to automate development workflows
▪ Monitor and improve system performance, reliability, and scalability
▪ Manage cloud-based infrastructure (e.g., AWS, Azure, or GCP)
▪ Support containerization and orchestration using Docker and Kubernetes
▪ Implement infrastructure as code using tools like Terraform or CloudFormation
▪ Collaborate with software engineering and data teams to streamline deployments
▪ Troubleshoot system and deployment issues across development and production environments
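The CI/CD pipeline work described above often reduces to a sequence of gated stages, where each stage must succeed before the next runs. A toy bash sketch of that gating pattern follows; the stage names and the echo placeholders are invented for illustration and are not any particular firm's workflow:

```shell
#!/usr/bin/env bash
# Minimal CI/CD gate sketch: each stage must succeed before the next runs.
# Stage commands are placeholders (echo); a real pipeline would invoke
# build tools, test runners, and deploy scripts here.
set -euo pipefail

run_stage() {
  local name="$1"; shift
  echo "--- stage: ${name} ---"
  "$@" || { echo "stage ${name} failed; aborting pipeline" >&2; exit 1; }
}

run_stage build  echo "compiling artifacts"
run_stage test   echo "running unit tests"
run_stage deploy echo "pushing to staging"
pipeline_status=ok
echo "pipeline finished"
```

Real pipeline managers (Jenkins, GitHub Actions, etc.) formalize the same idea declaratively, but the fail-fast gating logic is identical.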
2. Kubernetes Engineer
DevOps Systems Engineer with experience in Docker Containers, Docker Swarm, Docker Compose, Ansible, Jenkins, and other tools. In this role, he/she must apply container best practices in design, development, and implementation.
At Least 4 Years of experience in DevOps with knowledge of:
- Docker
- Docker Cloud / Containerization
- DevOps Best Practices
- Distributed Applications
- Deployment Architecture
- At least some AWS experience
- Exposure to Kubernetes / serverless architecture
Skills:
- 3-7+ years of experience in DevOps Engineering
- Strong experience with Docker Containers, Implementing Docker Containers, Container Clustering
- Experience with Docker Swarm, Docker Compose, Docker Engine
- Experience provisioning and managing VMs (virtual machines)
- Experience with / strong knowledge of network topologies and network research
- Jenkins, BitBucket, Jira
- Ansible or other Automation Configuration Management System tools
- Scripting & Programming using languages such as BASH, Perl, Python, AWK, SED, PHP, Shell
- Linux systems administration: Red Hat
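A minimal sketch of the Bash/AWK/SED scripting the list above asks for, run against a made-up log format (timestamps, severities, and messages are all invented):

```shell
#!/usr/bin/env bash
# Sketch of awk/sed-style text processing: tally severities in a
# made-up log format and normalise one of them.
set -euo pipefail

log=$(mktemp)
cat > "$log" <<'EOF'
2024-05-01 10:00:01 INFO service started
2024-05-01 10:00:05 ERROR connection refused
2024-05-01 10:00:09 WARN retrying in 5s
2024-05-01 10:00:12 ERROR connection refused
EOF

# awk: the third whitespace-separated field is the severity
error_count=$(awk '$3 == "ERROR" {n++} END {print n+0}' "$log")

# sed: rewrite WARN as WARNING in the output stream
normalised=$(sed 's/ WARN / WARNING /' "$log")

echo "errors: ${error_count}"
rm -f "$log"
```

This is the bread and butter of the "Scripting & Programming using BASH, AWK, SED" bullet: fields in, counts and rewrites out.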
Additional Preference:
- Security, SSL configuration, best practices
- Candidate should be able to write sample programs using the tools above (Bash, PowerShell, Python, or shell scripting)
- Analytical/logical reasoning
- GitHub Actions
- Should have good working experience with GitHub Actions
- Repository/workflow dispatch, writing reusable workflows, etc.
- AZ CLI commands
- Hands-on experience with AZ CLI commands
Head Kubernetes
Docker, Kubernetes Engineer – Remote (Pan India)
ABOUT US
Established in 2009, Ashnik is a leading open-source solutions and consulting company in South East Asia and India, headquartered in Singapore. We enable digital transformation for large enterprises through our design, architecting, and solution skills. Over 100 large enterprises in the region have acknowledged our expertise in delivering solutions using key open-source technologies. Our offerings form a critical part of digital transformation, Big Data platforms, Cloud and Web acceleration, and IT modernization. We represent EDB, Pentaho, Docker, Couchbase, MongoDB, Elastic, NGINX, Sysdig, Redis Labs, Confluent, and HashiCorp as their key partners in the region. Our team members bring decades of experience in delivering confidence to enterprises in adopting open source software and are known for their thought leadership.
THE POSITION
Ashnik is looking for a talented and passionate Technical Consultant to be part of the training team and work with customers on DevOps solutions. You will be responsible for implementation and consultation work for customers across SEA and India. We are looking for candidates with qualities such as:
- Passion for working for different customers and different environments.
- Excellent communication and articulation skills
- Aptitude for learning new technology and willingness to understand technologies which he/she is not directly working on.
- Willingness to travel within and outside the country.
- Ability to independently work at the customer site and navigate through different teams.
SKILLS AND EXPERIENCE
ESSENTIAL SKILLS
- 3+ years of experience with a B.E/B.Tech, MCA, or graduate degree with higher education in a technical field.
- Must have prior experience with docker containers, swarm and/or Kubernetes.
- Understanding of Operating System, processes, networking, and containers.
- Experience/exposure in writing and managing Dockerfile or creating container images.
- Experience of working on Linux platform.
- Ability to perform installations and system/software configuration on Linux.
- Should be aware of networking and SSL/TLS basics.
- Should be aware of tools used in complex enterprise IT infrastructure, e.g., LDAP, AD, centralized logging solutions, etc.
A preferred candidate will have
- Prior experience with open source Kubernetes or another container management platform like OpenShift, EKS, ECS etc.
- CNCF Certified Kubernetes Administrator/Developer certification
- Experience in container monitoring using Prometheus, Datadog etc.
- Experience with CI/CD tools and their usage
- Knowledge of scripting language e.g., shell scripting
RESPONSIBILITIES
First 2 months:
- Get an in-depth understanding of containers, Docker Engine and runtime, Swarm and Kubernetes.
- Get hands-on experience with various features of Mirantis Kubernetes Engine, and Mirantis Secure Registry
After 2 months:
- Work with Ashnik’s pre-sales and solution architects to deploy Mirantis Kubernetes Engine for customers
- Work with customer teams and provide them guidance on how Kubernetes Platform can be integrated with other tools in their environment (CI/CD, identity management, storage etc)
- Work with customers to containerize their applications.
- Write Dockerfile for customers during the implementation phase.
- Help customers design their network and security policy for effective management of applications deployed using Swarm and/or Kubernetes.
- Help customer design their deployments either with Swarm services or Kubernetes Deployments.
- Work with pre-sales and sales team to help customers during their evaluation of Mirantis Kubernetes platform
- Conduct workshops for customers as needed for technical hand-holding and technical handover.
Package: up to 30 L
Office: Mumbai
Implementation Engineer
Implementation Engineer Duties and Responsibilities
- Understand requirements from internal consumers about program functionality.
- Perform UAT on applications using test cases, prepare the corresponding documents, coordinate with the team to resolve all issues within the required timeframe, and inform management of any delays.
- Collaborate with the development team to design new programs for all client implementation activities, manage all communication with the department to resolve issues, and assist the implementation analyst in managing production data.
- Research all client issues, document all findings, and implement all technical activities with the help of JIRA.
- Assist internal teams in monitoring the software implementation lifecycle and help track appropriate customizations to software for clients.
- Train technical staff on all OS and software issues, identify issues in processes, and provide solutions for them. Train other team members on processes, procedures, API functionality, and development specifications.
- Supervise/support cross-functional teams to design, test, and deploy to achieve on-time project completion.
- Implement, configure, and debug MySQL, JAVA, Redis, PHP, Node, ActiveMQ setups.
- Monitor and troubleshoot infrastructure utilizing SYSLOG, SNMP and other monitoring software.
- Install, configure, monitor and upgrade applications during installation/upgrade activities.
- Assist the team in identifying network issues and help them with the respective resolutions.
- Utilize JIRA for issue reporting, status, activity planning, tracking and updating project defects and tasks.
- Manage JIRA, track tickets to closure, and follow up with team members.
- Troubleshoot software issues
- Provide on-call support as necessary
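As a small illustration of the monitoring/troubleshooting duties above, here is a minimal health-check sketch; the host name and syslog-style line format are made up, and to keep the example self-contained it probes this script's own process rather than a real service:

```shell
#!/usr/bin/env bash
# Toy health-check sketch: probe a process and print one syslog-style
# line. The process probed is this script itself ($$) so the example
# is self-contained; real checks would target monitored services.
set -euo pipefail

check_pid() {
  # kill -0 sends no signal; it only tests whether the PID exists
  if kill -0 "$1" 2>/dev/null; then
    status="UP"
  else
    status="DOWN"
  fi
  printf '%s myhost healthcheck: pid %s is %s\n' \
    "$(date '+%b %d %H:%M:%S')" "$1" "$status"
}

check_pid $$
```

In practice the output line would be shipped to a syslog collector or SNMP trap receiver rather than printed to stdout.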
Implementation Engineer Requirements and Qualifications
- Bachelor’s degree in computer science, software engineering, or a related field
- Experience working with
- Linux & Windows operating systems
- Shell and bat scripts
- SIP/ISUP based solutions
- Deploying/debugging Java and C++ based solutions
- MySQL: install, back up, update, and retrieve data
- Front-end or back-end software development for Linux
- Database management and security (a plus)
- Very good debugging and analytical skills
- Good Communication skills
- 4+ years of experience in IT and infrastructure
- 2+ years of experience in Azure DevOps
- Experience with Azure DevOps both as a CI/CD tool and as an Agile framework
- Practical experience building and maintaining automated operational infrastructure
- Experience building React or Angular applications; .NET is a must
- Practical experience using version control systems with Azure Repos
- Developed and maintained scripts using PowerShell and ARM templates/Terraform for Infrastructure as Code
- Experience in Linux shell scripting (Ubuntu) is a must
- Hands-on experience with release automation, configuration, and debugging
- Good knowledge of branching and merging
- Integration of static code analysis tools such as SonarQube and Snyk is a must
A.P.T Portfolio is a high-frequency trading firm that specialises in quantitative trading & investment strategies. Founded in November 2009, it has been a major liquidity provider in global stock markets.
As a manager, you would be in charge of the DevOps team, and your remit shall include the following:
- Private Cloud - Design & maintain a high performance and reliable network architecture to support HPC applications
- Scheduling Tool - Implement and maintain an HPC scheduling technology like Kubernetes, Hadoop YARN, Mesos, HTCondor, or Nomad for processing & scheduling analytical jobs. Implement controls which allow analytical jobs to seamlessly utilize idle capacity on the private cloud.
- Security - Implement best security practices and data isolation policies between different divisions internally.
- Capacity Sizing - Monitor private cloud usage and share details with different teams. Plan capacity enhancements on a quarterly basis.
- Storage solution - Optimize storage solutions like NetApp, EMC, Quobyte for analytical jobs. Monitor their performance on a daily basis to identify issues early.
- NFS - Implement and optimize latest version of NFS for our use case.
- Public Cloud - Drive AWS/Google-Cloud utilization in the firm for increasing efficiency, improving collaboration and for reducing cost. Maintain the environment for our existing use cases. Further explore potential areas of using public cloud within the firm.
- Backups - Identify and automate backup of all crucial data/binaries/code etc. in a secure manner, at intervals warranted by the use case. Ensure that recovery from backup is tested and seamless.
- Access Control - Maintain passwordless access control and improve security over time. Minimize failures of automated jobs due to unsuccessful logins.
- Operating System - Plan, test, and roll out new operating systems for all production, simulation, and desktop environments. Work closely with developers to highlight the new performance-enhancement capabilities of new versions.
- Configuration Management - Work closely with the DevOps/development team to freeze configurations/playbooks for various teams & internal applications. Deploy and maintain standard tools such as Ansible, Puppet, Chef, etc. for the same.
- Data Storage & Security Planning - Maintain tight control of root access on various devices. Ensure root access is rolled back as soon as the desired objective is achieved.
- Audit access logs on devices. Use third-party tools to put in place a monitoring mechanism for early detection of any suspicious activity.
- Maintaining all third-party tools used for development and collaboration - this includes maintaining a fault-tolerant environment for Git/Perforce, productivity tools such as Slack/Microsoft Teams, and build tools like Jenkins/Bamboo, etc.
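The backup duty above (automate the backup, then prove recovery works) can be sketched minimally in bash; every path here is a temp-dir placeholder standing in for real data directories:

```shell
#!/usr/bin/env bash
# Sketch of dated, verified backup automation. A temp dir stands in
# for the real data directory; all paths are placeholders.
set -euo pipefail

src=$(mktemp -d)
dest=$(mktemp -d)
echo "important" > "$src/config.txt"

# Timestamped archive name so successive backups never collide
stamp=$(date +%Y%m%d-%H%M%S)
archive="$dest/backup-${stamp}.tar.gz"
tar -czf "$archive" -C "$src" .

# "Recovery from backup is tested": restore and read back before trusting it
restore=$(mktemp -d)
tar -xzf "$archive" -C "$restore"
restored=$(cat "$restore/config.txt")
echo "backup ${archive} verified"
rm -rf "$src" "$dest" "$restore"
```

A production version would add retention pruning, encryption, and off-site replication, but the restore-and-verify step is the part most often skipped and most often regretted.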
Qualifications
- Bachelors or Masters Level Degree, preferably in CSE/IT
- 10+ years of relevant experience in sys-admin function
- Must have strong knowledge of IT Infrastructure, Linux, Networking and grid.
- Must have strong grasp of automation & Data management tools.
- Proficient in scripting languages, particularly Python
Desirables
- Professional attitude, co-operative and mature approach to work, must be focused, structured and well considered, troubleshooting skills.
- Exhibit a high level of individual initiative and ownership, effectively collaborate with other team members.
APT Portfolio is an equal opportunity employer
Hammoq is an exponentially growing startup in the US and UK.
- Design and implement secure automation solutions for development, testing, and production environments
- Build and deploy automation, monitoring, and analysis solutions
- Manage our continuous integration and delivery pipeline to maximize efficiency
- Implement industry best practices for system hardening and configuration management
- Secure, scale, and manage Linux virtual environments
- Develop and maintain solutions for operational administration, system/data backup, disaster recovery, and security/performance monitoring
- Continuously evaluate existing systems against industry standards, and make recommendations for improvement
Desired Skills & Experiences
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Understanding of system administration in Linux environments
- Strong knowledge of configuration management tools
- Familiarity with continuous integration tools such as Jenkins, Travis CI, Circle CI
- Proficiency in scripting languages including Bash, Python, and JavaScript
- Strong communication and documentation skills
- An ability to drive to goals and milestones while valuing and maintaining a strong attention to detail
- Excellent judgment, analytical thinking, and problem-solving skills
- Full understanding of software development lifecycle best practices
- Self-motivated individual who possesses excellent time management and organizational skills
In PM's Words
Bash scripting, containerd (or Docker), Linux operating system basics, Kubernetes, Git, Jenkins (or any pipeline management tool), GCP (or familiarity with any cloud technology).
Linux is the major one; most people are coming from Windows, and we need Linux. If Windows is also there, it will be an added advantage.
There is utmost certainty that you will be working with an amazing team.
About Us
Zupee is India’s fastest-growing innovator in real money gaming with a focus on predominant skill-focused games. Started by 2 IIT-Kanpur alumni in 2018, we are backed by marquee global investors such as WestCap Group, Tomales Bay Capital, Matrix Partners, Falcon Edge, Orios Ventures, and Smile Group.
Know more about our recent funding: https://bit.ly/3AHmSL3
Our focus has been on innovating in the board, strategy, and casual games sub-genres. We innovate to ensure our games provide an intersection between skill and entertainment, enabling our users to earn while they play.
Location: We are location agnostic & our teams work from anywhere. Physically, we are based out of Gurugram.
Core Responsibilities:
- Handling all Devops activities
- Engage with development teams to document and implement best practices for our existing and new products
- Ensure reliability of services by making systems and applications stable
- Implement DevOps technologies and processes i.e. containerization, CI/CD, infrastructure as code, metrics, monitoring, etc.
- Good understanding of Terraform (or similar ‘Infrastructure as Code’ technologies)
- Good understanding of modern CI/CD methods and approaches
- Troubleshoot and resolve issues related to application development, deployment, and operations
- Expertise in the AWS ecosystem (EC2, ECS, S3, VPC, ALB, SG)
- Experience in monitoring production systems and being proactive to ensure that outages are limited
- Develop and contribute to several infrastructure improvement projects around performance, security, autoscaling, and cost-optimization
What are we looking for:
- AWS cloud services – Compute, Networking, Security and Database services
- Strong with Linux fundamentals
- Experience with Docker & Kubernetes in a Production Environment
- Working knowledge of Terraform/Ansible
- Working Knowledge of Prometheus and Grafana or any other monitoring system
- Good at scripting with Bash/Python/Shell
- Experience with CI/CD pipelines, preferably AWS DevOps
- Strong troubleshooting and problem-solving skills
- A top-notch DevOps Engineer will demonstrate excellent leadership skills and the capacity to mentor subordinates
- Good communication and collaboration skills
- Openness to learning new tools and technologies
- Startup Experience would be a plus
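The scripting-plus-monitoring combination in the list above often reduces to small glue scripts that evaluate a metric against a threshold. The following sketch uses invented CPU samples and an invented 80% threshold purely for illustration:

```shell
#!/usr/bin/env bash
# Toy alert rule: average a list of CPU samples and flag a breach.
# Sample values and the 80% threshold are invented for illustration;
# a real check would pull metrics from Prometheus or CloudWatch.
set -euo pipefail

samples=(72 85 91 78 88)
threshold=80

total=0
for s in "${samples[@]}"; do
  total=$((total + s))
done
# Integer average is fine for a coarse threshold check
avg=$((total / ${#samples[@]}))

if [ "$avg" -gt "$threshold" ]; then
  alert="FIRING"
else
  alert="OK"
fi
echo "avg cpu ${avg}% -> ${alert}"
```

Monitoring systems like Prometheus express the same rule declaratively (`avg_over_time(...) > 80`), but being able to write and debug the imperative version is exactly the Bash fluency the posting asks for.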
Senior Devops Engineer
Who are we?
Searce is a niche cloud consulting business with futuristic tech DNA. We do new-age tech to realise the “Next” in the “Now” for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning, and advanced cloud infrastructure tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.
What do we believe?
- Best practices are overrated
- Implementing best practices can only make one ‘average’.
- Honesty and Transparency
- We believe in naked truth. We do what we tell and tell what we do.
- Client Partnership
- Client - Vendor relationship: No. We partner with clients instead.
- And our sales team comprises 100% of our clients.
How do we work?
It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.
- Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
- Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
- Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
- Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
- Innovative: Innovate or Die. We love to challenge the status quo.
- Experimental: We encourage curiosity & making mistakes.
- Responsible: Driven. Self motivated. Self governing teams. We own it.
Are you the one? Quick self-discovery test:
- Love for cloud: When was the last time your dinner entailed an act on “How would ‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect” and your friend did the ‘Sheldon’ version of the same thing.
- Passion for sales: When was the last time you stopped at a remote gas station while on vacation and ended up helping the owner SaaSify his 7 gas stations across other geographies?
- Compassion for customers: You listen more than you speak. When you do speak, people feel the need to listen.
- Humor for life: When was the last time you told a concerned CEO, ‘If Elon Musk can attempt to take humanity to Mars, why can’t we take your business to run on the cloud?’
Introduction
When was the last time you thought about rebuilding your smart phone charger using solar panels on your backpack OR changed the sequencing of switches in your bedroom (on your own, of course) to make it more meaningful OR pointed out an engineering flaw in the sequencing of traffic signal lights to a fellow passenger, while he gave you a blank look? If the last time this happened was more than 6 months ago, you are a dinosaur for our needs. If it was less than 6 months ago, did you act on it? If yes, then let’s talk.
We are quite keen to meet you if:
- You eat, dream, sleep and play with Cloud Data Store & engineering your processes on cloud architecture
- You have an insatiable thirst for exploring improvements, optimizing processes, and motivating people.
- You like experimenting, taking risks and thinking big.
3 things this position is NOT about:
- This is NOT just a job; this is a passionate hobby for the right kind.
- This is NOT a boxed position. You will code, clean, test, build and recruit & energize.
- This is NOT a position for someone who likes to be told what needs to be done.
3 things this position IS about:
- Attention to detail matters.
- Roles, titles, ego does not matter; getting things done matters; getting things done quicker & better matters the most.
- Are you passionate about learning new domains & architecting solutions that could save a company millions of dollars?
Roles and Responsibilities
This is an entrepreneurial Cloud/DevOps Lead position that evolves into the Director - Cloud Engineering role. This position requires a fanatical ability to iterate: architect a solution, code, research, understand customer needs, research more, rebuild and re-architect; you get the drift. We are seeking hard-core-geeks-turned-successful-techies who are interested in seeing their work used by millions of users the world over.
Responsibilities:
- Consistently strive to acquire new skills on Cloud, DevOps, Big Data, AI and ML technologies
- Design, deploy and maintain Cloud infrastructure for Clients – Domestic & International
- Develop tools and automation to make platform operations more efficient, reliable and reproducible
- Create Container Orchestration (Kubernetes, Docker), strive for full automated solutions, ensure the up-time and security of all cloud platform systems and infrastructure
- Stay up to date on relevant technologies, plug into user groups, and ensure our clients are using the best techniques and tools
- Provide business, application, and technology consulting in feasibility discussions with technology team members, customers, and business partners
- Take the initiative to lead, drive, and solve problems during challenging scenarios
Requirements:
- 3 + Years of experience in Cloud Infrastructure and Operations domains
- Experience with Linux systems, RHEL/CentOS preferred
- Specialize in one or two cloud deployment platforms: AWS, GCP, Azure
- Hands on experience with AWS services (EC2, VPC, RDS, DynamoDB, Lambda)
- Experience with one or more programming languages (Python, JavaScript, Ruby, Java, .Net)
- Good understanding of Apache Web Server, Nginx, MySQL, MongoDB, Nagios
- Knowledge on Configuration Management tools such as Ansible, Terraform, Puppet, Chef
- Experience working with deployment and orchestration technologies (such as Docker, Kubernetes, Mesos)
- Deep experience in customer facing roles with a proven track record of effective verbal and written communications
- Dependable and good team player
- Desire to learn and work with new technologies
Key Success Factors
- Are you
- Likely to forget to eat, drink or pee when you are coding?
- Willing to learn, re-learn, research, break, fix, build, re-build and deliver awesome code to solve real business/consumer needs?
- An open source enthusiast?
- Absolutely technology agnostic and believe that business processes define and dictate which technology to use?
- Ability to think on your feet, and follow-up with multiple stakeholders to get things done
- Excellent interpersonal communication skills
- Superior project management and organizational skills
- Logical thought process; ability to grasp customer requirements rapidly and translate the same into technical as well as layperson terms
- Ability to anticipate potential problems, determine and implement solutions
- Energetic, disciplined, with a results-oriented approach
- Strong ethics and transparency in dealings with clients, vendors, colleagues and partners
- An attitude of ‘give me 5 sharp freshers and 6 months and I will rebuild the way people communicate over the internet.’
- You are customer-centric, and feel strongly about building scalable, secure, quality software. You thrive and succeed in delivering high quality technology products in a growth environment where priorities shift fast.