11+ Ariba Jobs in India
1) Strong end-to-end knowledge of Procure-to-Pay (P2P) processes
2) Purchase requisition creation, processing, and approval workflows
3) Purchase order creation, processing, and approval workflows
4) Sending POs to vendors
5) Order confirmation by vendors
6) PO approval by the buyer based on the vendor's order confirmation
7) Goods receipt/service entry sheet by the buyer
8) Two-way and three-way match (see the sketch after this list)
9) Invoice receipt from the vendor
10) Experience configuring the SAP Ariba Cloud Integration Gateway (CIG)
11) Invoice approval workflows
12) Payment process (desirable)
13) IT consulting experience with procurement applications such as SAP Ariba or SAP ECC is required
14) Working knowledge of SAP ECC procurement applications is an advantage
15) Ability to understand and map procurement processes to L1-L5 levels across multiple countries and business categories
16) Ability to map processes to the tool's (Ariba) functionality, conduct requirement-elicitation workshops, and confirm solutions against available tool functionality versus business requirements
17) Strong communication skills are a must-have
18) Ability to manage and govern geographically and functionally distributed teams and ensure program delivery is a must
19) Previous experience managing large programs is required
20) Project planning and delivery management are must-have skills
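For context on item 8, here is a minimal, illustrative Python sketch of two-way and three-way matching. The data structures and tolerance are hypothetical; this is not Ariba's implementation, only the underlying idea.

```python
from dataclasses import dataclass

@dataclass
class Document:
    quantity: float
    unit_price: float

def two_way_match(po: Document, invoice: Document, tol: float = 0.01) -> bool:
    # Two-way match: the invoice agrees with the purchase order on
    # unit price (within tolerance) and does not exceed the ordered quantity.
    return (abs(po.unit_price - invoice.unit_price) <= tol
            and invoice.quantity <= po.quantity)

def three_way_match(po: Document, receipt: Document, invoice: Document,
                    tol: float = 0.01) -> bool:
    # Three-way match: the invoice must also agree with the goods receipt,
    # so the buyer only pays for what was actually received.
    return (two_way_match(po, invoice, tol)
            and invoice.quantity <= receipt.quantity)

po = Document(quantity=100, unit_price=9.50)
receipt = Document(quantity=98, unit_price=9.50)
invoice = Document(quantity=98, unit_price=9.50)
print(three_way_match(po, receipt, invoice))  # True: invoice matches PO and receipt
```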
Budget: 35 LPA to 45 LPA
Work schedule is Mon to Fri, 3:30am to 12:30pm IST
Key Responsibilities:
- Design, develop, and deploy computer vision and machine learning models for analyzing visual and document-based data.
- Build pipelines that convert unstructured visual inputs into structured and usable information.
- Develop and evaluate models for tasks such as object detection, segmentation, document parsing, and image understanding.
- Apply OCR and related techniques to extract meaningful information from complex documents and imagery (see the sketch after this list).
- Work with large datasets and build efficient training and evaluation pipelines.
- Handle real-world visual datasets that may contain noise, inconsistencies, incomplete information, or varying formats.
- Experiment with different approaches to solve challenging computer vision problems and evaluate tradeoffs between accuracy, performance, and complexity.
- Collaborate with product and engineering teams to integrate machine learning models into scalable production systems.
- Continuously improve model performance, accuracy, and robustness in real-world environments.
- Stay up to date with the latest developments in AI and computer vision and apply relevant techniques where appropriate.
- Actively leverage modern AI tools and frameworks to accelerate experimentation, development, and engineering workflows.
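To illustrate the kind of OCR and document-parsing work involved, here is a minimal sketch using OpenCV and pytesseract. It assumes Tesseract is installed locally, and "invoice.png" is a hypothetical input file, not part of this role's actual pipeline.

```python
import cv2
import pytesseract

# Load a scanned document and convert to grayscale for more robust OCR.
image = cv2.imread("invoice.png")  # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Binarize with Otsu's threshold to suppress background noise.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Extract raw text; downstream code would parse this into structured fields.
text = pytesseract.image_to_string(binary)
print(text)
```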
Requirements:
- 5+ years of hands-on experience building and deploying machine learning models, particularly in Computer Vision or document understanding.
- Strong proficiency in Python for machine learning and data processing.
- Hands-on experience with modern ML frameworks such as PyTorch and libraries in the Hugging Face ecosystem.
- Experience with computer vision tooling such as OpenCV.
- Experience with common ML and data science libraries such as scikit-learn, NumPy, and Pandas.
- Experience developing models for tasks such as segmentation, object detection, or document analysis.
- Experience working with large image datasets and building training pipelines.
- Solid understanding of model evaluation, data preprocessing, and performance optimization.
- Strong problem-solving skills and ability to work in a fast-paced product environment.
- Ability to collaborate effectively with cross-functional engineering and product teams.
- Must be based in India and willing to work remotely full-time
- Work schedule is Mon to Fri, 3:30am to 12:30pm IST
Preferred Qualifications:
- Experience with TensorFlow or other deep learning frameworks.
- Experience working with OCR pipelines or document analysis systems.
- Experience deploying machine learning models in production environments.
- Experience with containerized deployments such as Docker or Kubernetes.
- Experience working with complex technical documents, diagrams, or structured visual data.
- Familiarity with spatial or geometry-related data problems.
- Experience with libraries such as Detectron2, MMDetection, or similar.
- Familiarity with frameworks used to integrate modern AI models into applications (e.g., LangChain or similar tooling).
- Contributions to open-source ML or computer vision projects are a plus.
Additional Information:
- The problems we work on involve complex visual and document-based data, so we value engineers who enjoy tackling challenging technical problems and experimenting with different approaches to reach practical solutions.
- Candidates are required to include links to relevant projects, GitHub repositories, research work, or examples of machine learning systems they have built.
Benefits:
- Flexible remote work with career development opportunities
- Engagement with a supportive and collaborative global team
- Competitive, market-based salary
The Sr. AWS/Azure/GCP Databricks Data Engineer at Koantek applies modern data engineering techniques and methods, together with Advanced Analytics, to support business decisions for our clients. Your goal is to support the use of data-driven insights to help our clients achieve business outcomes and objectives. You will collect, aggregate, and analyze structured and unstructured data from multiple internal and external sources, and communicate patterns, insights, and trends to decision-makers. You will help design and build data pipelines, data streams, reporting tools, information dashboards, data service APIs, data generators, and other end-user information portals and insight tools. You will be a critical part of the data supply chain, ensuring that stakeholders can access and manipulate data for routine and ad hoc analysis to drive business outcomes using Advanced Analytics. You are expected to function as a productive member of a team, working and communicating proactively with engineering peers, technical leads, project managers, product owners, and resource managers.
Requirements:
- Strong experience as an AWS/Azure/GCP Data Engineer; AWS/Azure/GCP Databricks experience is a must
- Expert proficiency in Spark (Scala and Python)
- Must have data migration experience from on-prem to cloud
- Hands-on experience with Kinesis to process and analyze streaming data, Event/IoT Hubs, and Cosmos DB
- In-depth understanding of Azure/AWS/GCP cloud, data lake, and analytics solutions
- Expert-level, hands-on experience designing and developing applications on Databricks
- Extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services
- In-depth understanding of Spark architecture, including Spark Streaming, Spark Core, Spark SQL, DataFrames, RDD caching, and Spark MLlib
- Hands-on experience with the industry technology stack for data management, data ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc.
- Hands-on knowledge of data frameworks, data lakes, and open-source projects such as Apache Spark, MLflow, and Delta Lake
- Good working knowledge of code-versioning tools such as Git, Bitbucket, or SVN
- Hands-on experience using Spark SQL with various data sources such as JSON, Parquet, and key-value pairs (see the PySpark sketch after this list)
- Experience preparing data for data science and machine learning, with exposure to model selection, model lifecycle, hyperparameter tuning, model serving, deep learning, etc.
- Demonstrated experience preparing data and automating and building data pipelines for AI use cases (text, voice, image, IoT data, etc.)
- Good to have: programming experience with .NET or Spark/Scala
- Experience creating tables and partitioning, bucketing, loading, and aggregating data using Spark Scala and Spark SQL/PySpark
- Knowledge of AWS/Azure/GCP DevOps processes such as CI/CD, as well as Agile tools and processes including Git, Jenkins, Jira, and Confluence
- Working experience with Visual Studio, PowerShell scripting, and ARM templates; able to build ingestion to ADLS and enable a BI layer for analytics
- Strong understanding of data modeling and defining conceptual, logical, and physical data models
- Big data/analytics/information analysis/database management in the cloud
- IoT/event-driven/microservices in the cloud; experience with private and public cloud architectures, their pros/cons, and migration considerations
- Ability to stay up to date with industry standards and technological advancements that will enhance data quality and reliability to advance strategic initiatives
- Working knowledge of RESTful APIs, the OAuth2 authorization framework, and security best practices for API gateways
- Guide customers in transforming big data projects, including development and deployment of big data and AI applications
- Guide customers on data engineering best practices; provide proofs of concept, architect solutions, and collaborate when needed
- 2+ years of hands-on experience designing and implementing multi-tenant solutions using AWS/Azure/GCP Databricks for data governance, data pipelines for near-real-time data warehousing, and machine learning solutions
- Overall 5+ years of experience in software development, data engineering, or data analytics using Python, PySpark, Scala, Spark, Java, or equivalent technologies, with hands-on expertise in Apache Spark (Scala or Python)
- 3+ years of experience in query tuning, performance tuning, troubleshooting, and debugging Spark and other big data solutions
- Bachelor's or Master's degree in Big Data, Computer Science, Engineering, Mathematics, or a similar area of study, or equivalent work experience
- Ability to manage competing priorities in a fast-paced environment
- Ability to resolve issues
- Basic experience with or knowledge of Agile methodologies
Certifications:
- AWS Certified Solutions Architect – Professional
- Databricks Certified Associate Developer for Apache Spark
- Microsoft Certified: Azure Data Engineer Associate
- Google Cloud Certified (Professional level)
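As a minimal sketch of the Spark SQL and table-management skills listed above: the paths, schema, and table names below are hypothetical, and the "delta" format assumes a Databricks/Delta Lake environment.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("p2p-demo").getOrCreate()

# Read semi-structured and columnar sources (paths are hypothetical).
orders = spark.read.json("s3://bucket/orders/")           # JSON source
customers = spark.read.parquet("s3://bucket/customers/")  # Parquet source

# Expose both as SQL views and join them with Spark SQL.
orders.createOrReplaceTempView("orders")
customers.createOrReplaceTempView("customers")
daily = spark.sql("""
    SELECT c.country, o.order_date, SUM(o.amount) AS revenue
    FROM orders o JOIN customers c ON o.customer_id = c.id
    GROUP BY c.country, o.order_date
""")

# Write out as a partitioned Delta table (assumes the target database exists).
daily.write.mode("overwrite").partitionBy("country").format("delta") \
     .saveAsTable("analytics.daily_revenue")
```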
Responsibilities:
- Software development in C++ for an autonomous drive project.
- Qt library (no GUI features)
- Object Oriented Analysis / Object Oriented Design
- C++ Template implementation
- C++17 specifics like “std::optional”
- Macro implementation
- Implementation of Clean Code
- Static Code Analysis
- CMake
Qualifications:
- Excellent Git knowledge, especially merging and rebasing
- University degree in Electrical/Electronic engineering, Computer Science or similar
- Minimum 1 to 5 years of embedded software development experience on Yocto Linux-based projects in the automotive domain
- Expert in C++ programming
- Strong debugging skills
- Good communication skills
Responsibilities:
- Design, develop, and maintain responsive web applications using Node.js, Next.js, and React.js.
- Implement robust APIs and services using Node.js.
- Ensure the technical feasibility of UI/UX designs and optimize applications for maximum speed and scalability.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Integrate data from various back-end services and databases.
- Manage and deploy applications on cloud platforms such as AWS, GCP, or Azure.
- Utilize version control tools such as Git to manage codebase changes.
- Implement continuous integration and continuous deployment (CI/CD) pipelines to streamline development and deployment processes.
- Stay updated with emerging trends and technologies in the software development industry.
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 2-4 years of proven experience as a Full Stack developer.
- Strong proficiency in JavaScript/TypeScript.
- Experience with cloud services (AWS, GCP, or Azure) and managing scalable applications in the cloud.
- Solid understanding of version control tools, preferably Git.
- Knowledge of CI/CD tools and methodologies.
- Excellent problem-solving skills and critical thinking abilities.
- Strong communication and teamwork skills.
- Ability to handle multiple projects and meet deadlines.
To be successful in this role, you should have a proven track record in sales, excellent communication and negotiation skills, and the ability to work in a fast-paced environment. You should also be highly organized and able to close deals with corporate clients.
Additional responsibilities may include:
- Setting and achieving sales targets
- Developing and implementing sales plans and programs
- Analyzing market trends and identifying new business opportunities
- Building a new client base, including corporates and colleges
- Training and coaching team members to ensure their success
If you are a results-driven individual with a passion for sales and a desire to succeed, we encourage you to apply for this exciting opportunity.
Experience:
The candidate should have about 5+ years of experience in design and development with Java/Scala. Experience with algorithms, data structures, databases, and distributed-systems architecture is mandatory.
Required Skills:
- In-depth knowledge of Hadoop and Spark architecture and their components, such as HDFS, YARN, and executor cores and memory parameters
- Knowledge of both Scala and Java
- Extensive experience developing Spark jobs; should possess good OOP knowledge and be aware of enterprise application design patterns
- Good knowledge of Unix/Linux
- Experience working on large-scale software projects
- Understanding the big picture and the various use cases involved while crafting the solution, and documenting them in the Unified Modeling Language (UML)
- Own and maintain the architecture document
- Keep an eye out for technological trends and open-source projects that can be used
- Knowledge of common programming languages and frameworks
- Real-time streaming data consumption (see the sketch after the "Good to have" list)
Good to have :
- Azure/AWS cloud knowledge on the data storage and compute side
- Knowledge of multi-tenant architecture
- Basic familiarity with data science
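A minimal PySpark Structured Streaming sketch of real-time streaming consumption (the role asks for Scala as well; the Scala API is analogous). The broker address and topic name are hypothetical, and the Kafka connector package is assumed to be on the classpath.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

# Subscribe to a Kafka topic (broker/topic names are hypothetical).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load())

# Kafka values arrive as bytes; cast to string before parsing downstream.
decoded = events.selectExpr("CAST(value AS STRING) AS json")

# Write to the console for demonstration; a production job would target
# a durable sink (e.g., Delta Lake) with checkpointing enabled.
query = (decoded.writeStream
         .format("console")
         .outputMode("append")
         .start())
query.awaitTermination()
```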
Mandatory:
- 4-7 years of experience in React JS (at least 2.5 years of ReactJS is compulsory)
Optional:
- Knowledge of UX/UI design, Azure DevOps, and Test-Driven and Domain-Driven Development
- Development of complex IT systems with various system integrations and configurations
- Data security/GDPR
Specialism: Advanced Analytics, Data Science, regression, forecasting, analytics, SQL, R, Python, decision trees, random forests, SAS, clustering, classification
Senior Analytics Consultant - Responsibilities:
- Understand the business problem and requirements by building domain knowledge, and translate it into a data science problem
- Conceptualize and design cutting-edge data science solutions to solve the problem, applying design-thinking concepts
- Identify the right algorithms, tech stack, and sample outputs required to efficiently address the end need (a minimal sketch follows this section)
- Prototype and experiment with the solution to successfully demonstrate its value
- Independently, or with support from the team, execute the conceptualized solution as planned, following project management guidelines
- Present the results to internal and client stakeholders in an easy-to-understand manner, with strong storytelling, storyboarding, insights, and visualization
- Help build overall data science capability for eClerx through support in pilots, pre-sales pitches, product development, and practice-development initiatives
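As a minimal illustration of the random-forest classification work named above, here is a scikit-learn sketch on synthetic data; it stands in for a client dataset and is not a real solution.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a client dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Fit a random forest and report held-out accuracy.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```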
- Develop and manage e-commerce websites and web applications.
- Analyze, design, code, debug, test, document, and deploy applications.
- Participate in project and deployment planning.
- Must be a self-starter and able to work with minimum supervision.
- Experience in module/extension development and customization.
- Experience in theme integration and customization.
- Experience in API creation and integration.
- Experience in migration from Magento 1 to Magento 2.
- Extensive experience with PHP and MySQL.
- Exposure to Magento 2, CMS, and JavaScript frameworks such as jQuery.
- Demonstrable knowledge of XML, XHTML, CSS, and modules (i.e., API integrations, payment gateways), with a focus on standards.
- Demonstrable source control experience.
- Two or more published e-commerce websites.