11+ SAP GL Jobs in Bangalore (Bengaluru)

Job Title: Oracle Fusion Developer
- Exp: 7 to 10 Years
- CTC: Competitive Hike
- Work Mode: Contract, 6 months, extendable; 1 month onsite in Riyadh
- Work Type: Hybrid
- Work Location: Mumbai, Chennai and Bangalore
Required Skills & Qualifications:
- Minimum 7 years of experience in Oracle Fusion Applications.
- Strong expertise in at least two core modules:
  - HRMS – Core HR, Payroll, Talent, Absence
  - Finance – GL, AP, AR, Fixed Assets, Cash Management
  - Procurement – Self-Service Procurement, Supplier Portal, Purchasing
- Deep understanding of Oracle Fusion Cloud architecture and security model.
- Strong knowledge of Fusion BI (OTBI, BIP) for reporting and analytics.
- Proficiency in Oracle Integration Cloud (OIC) or middleware tools is a plus.
- Excellent problem-solving, analytical, and communication skills.
- Oracle Cloud certifications in relevant modules are highly desirable.
Job Title: Informatica MDM Developer
Experience: 7 to 10 Years
Location: Bangalore (3 Days Work From Office – ITPL Main Road, Mahadevapura)
Job Type: Full-time / Contract
Job Overview:
We are hiring an experienced Informatica MDM Developer to join our team in Bangalore. The ideal candidate will play a key role in implementing and customizing Master Data Management (MDM) solutions using Informatica MDM (Multi-Domain Edition), ensuring a trusted, unified view of enterprise data.
Mandatory Skills:
Informatica MDM (Multi-Domain Edition), ActiveVOS workflows, Java (User Exits), Services Integration Framework (SIF) APIs, SQL, PL/SQL, data modeling, Informatica Data Quality (IDQ), MDM concepts (golden record, survivorship, trust, hierarchy).
Key Responsibilities:
- Configure the Informatica MDM Hub: subject area models, base objects, relationships.
- Develop match/merge rules and trust/survivorship logic to create golden records (see the sketch after this list).
- Design workflows using ActiveVOS for data stewardship and exception handling.
- Integrate with source/target systems (ERP, CRM, Data Lakes, APIs).
- Customize user exits (Java), SIF APIs, and business entity services.
- Implement and maintain data quality validations using IDQ.
- Collaborate with cross-functional teams for governance alignment.
- Support MDM jobs, synchronization, batch groups, and performance tuning.
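For context on the golden record and survivorship concepts above, here is a minimal sketch in pandas (not Informatica's actual engine or SIF API) of how per-source trust scores could drive survivorship; the table and column names are hypothetical.

```python
import pandas as pd

# Toy source records for one customer; "trust_score" is a hypothetical
# per-source trust ranking (these names are not Informatica APIs).
records = pd.DataFrame({
    "customer_id": [101, 101, 101],
    "source":      ["CRM", "ERP", "WEB"],
    "email":       ["a@x.com", None, "a@old.com"],
    "phone":       [None, "555-0101", "555-0199"],
    "trust_score": [0.9, 0.7, 0.4],
})

# Survivorship: sort by trust, then keep the first non-null value per
# attribute within each customer (groupby().first() skips nulls).
ranked = records.sort_values("trust_score", ascending=False)
golden = ranked.groupby("customer_id").first()
print(golden[["email", "phone"]])  # the resulting "golden record"
```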
Must-Have Skills:
- 7 to 10 years of experience in Data Engineering or MDM.
- 5+ years hands-on with Informatica MDM (Multi-Domain Edition).
- Strong in MDM concepts: golden record, trust, survivorship, hierarchy.
Proficient in:
- Informatica MDM Hub Console, Provisioning Tool, SIF.
- ActiveVOS workflows, Java-based user exits.
- SQL, PL/SQL, and data modeling.
- Experience with system integration and Informatica Data Quality (IDQ).
Nice-to-Have:
- Knowledge of Informatica EDC, Axon, and cloud MDM (AWS/GCP/Azure).
- Understanding of data lineage, GDPR/HIPAA compliance, and DevOps tools.
Bangalore / Chennai
- Hands-on data modelling for OLTP and OLAP systems
- In-depth knowledge of conceptual, logical, and physical data modelling
- Strong understanding of indexing, partitioning, and data sharding, with practical experience implementing them (see the sketch after this list)
- Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction
- Working experience with at least one data modelling tool, preferably DBSchema or Erwin
- Good understanding of GCP databases such as AlloyDB, Cloud SQL, and BigQuery
- Functional knowledge of the mutual fund industry is a plus
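To make the sharding point concrete, here is a minimal Python sketch of hash-based shard routing, assuming a fixed shard count; the key format and the `orders_shard_N` table names are invented for illustration.

```python
import hashlib

NUM_SHARDS = 4  # assumed fixed shard count

def shard_for(key: str) -> int:
    # Use a stable hash (unlike Python's built-in hash()) so that
    # routing decisions survive process restarts.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

for customer in ["C-1001", "C-1002", "C-1003"]:
    print(customer, "->", f"orders_shard_{shard_for(customer)}")
```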
Role & Responsibilities:
● Work with business users and other stakeholders to understand business processes.
● Ability to design and implement dimension and fact tables (see the star-schema sketch after this list)
● Identify and implement data transformation/cleansing requirements
● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform and load data from various systems to the Enterprise Data Warehouse
● Develop conceptual, logical, and physical data models with associated metadata including data lineage and technical data definitions
● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique
● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.
● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.
● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate, design, develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.
● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.
● Leverage transactional data from ERP, CRM, and HRIS applications to model, extract, and transform it into reporting and analytics.
● Define and document the use of BI through user experience/use cases and prototypes; test and deploy BI solutions.
● Develop and support data governance processes, analyze data to identify and articulate trends, patterns, outliers, quality issues, and continuously validate reports, dashboards and suggest improvements.
● Train business end-users, IT analysts, and developers.
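As a concrete anchor for the dimension/fact item above, a toy star-schema split in pandas; the raw transaction columns and the surrogate-key scheme are illustrative, not a prescribed warehouse design.

```python
import pandas as pd

# Hypothetical raw transactions, standing in for a source extract.
raw = pd.DataFrame({
    "order_id": [1, 2, 3],
    "product":  ["pen", "book", "pen"],
    "category": ["stationery", "media", "stationery"],
    "amount":   [20.0, 150.0, 40.0],
})

# Dimension: one row per product, with a surrogate key.
dim_product = (raw[["product", "category"]]
               .drop_duplicates()
               .reset_index(drop=True))
dim_product["product_key"] = dim_product.index + 1

# Fact: measures plus a foreign key into the dimension.
fact_sales = (raw.merge(dim_product, on=["product", "category"])
              [["order_id", "product_key", "amount"]])
print(dim_product, fact_sales, sep="\n\n")
```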
Required Skills:
● Bachelor’s degree in Computer Science or similar field or equivalent work experience.
● 5+ years of experience with Data Warehousing, Data Engineering, or Data Integration projects.
● Expertise in data warehousing concepts, strategies, and tools.
● Strong SQL background.
● Strong knowledge of relational databases like SQL Server, PostgreSQL, MySQL.
● Strong experience with GCP: BigQuery, Cloud SQL, Composer (Airflow), Dataflow, Dataproc, Cloud Functions, and GCS (see the DAG sketch after this list).
● Good-to-have: knowledge of SQL Server Reporting Services (SSRS) and SQL Server Integration Services (SSIS).
● Knowledge of AWS and Azure Cloud is a plus.
● Experience with Informatica PowerExchange for Mainframe, Salesforce, and other new-age data sources.
● Experience in integration using APIs, XML, JSON, etc.
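Since Composer (Airflow) appears in the stack above, here is a skeletal DAG sketch of a daily extract/load flow; the dag_id, schedule, and task bodies are placeholders, and the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source systems")        # placeholder task body

def load():
    print("load staged data into BigQuery")  # placeholder task body

with DAG(
    dag_id="edw_daily_load",                 # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                       # Airflow 2.4+; older: schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```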
- Experience: Freshers or any experience
- Education: Graduates only
- Shift: Rotational Day shift
- Week Off: 1 rotational week off
- Languages: English, Hindi, Marathi, Tamil, Telugu, Malayalam
- Salary: ₹21,000 per month
- Rounds: HR, Ops, Versant



Responsibilities:
- Develop and maintain high-quality mobile applications using React Native.
- Collaborate with cross-functional teams to gather requirements and translate them into high-level designs.
- Write clean, reusable, and well-structured code following industry best practices and coding standards.
- Conduct code reviews and provide constructive feedback to ensure code quality and adherence to standards.
- Mentor and guide junior developers, providing technical expertise and promoting professional growth.
- Collaborate with backend developers to integrate APIs and ensure smooth data flow between the app and server.
- Stay updated with the latest trends and advancements in React Native and mobile app development.
- Work a six-day, 10 AM to 6 PM office schedule, maintaining regular attendance and punctuality.
Required Skills and Qualifications:
- Strong proficiency in React Native development.
- Experience with Redux or similar state management libraries.
- Proficiency in integrating APIs and working with backend services.
- Sound knowledge of JavaScript, ES6+, and modern web technologies.
- Familiarity with Git version control system and agile development methodologies.
- Good problem-solving and debugging skills.
- Excellent communication and teamwork abilities.
- Bachelor's degree in Computer Science or a related field (preferred).
Join Arroz Technology Private Limited as a React Native App Developer and be part of an innovative team driving the development of cutting-edge mobile applications. This role offers competitive compensation and growth opportunities within a dynamic work environment.

Experience with Linux
Experience with Python or Shell scripting for automation (see the sketch after this list)
Hands-on experience implementing CI/CD processes
Experience working with one of the cloud platforms (AWS, Azure, or Google Cloud)
Experience working with configuration management tools such as Ansible and Chef
Experience working with the containerization tool Docker
Experience working with the container orchestration tool Kubernetes
Experience in source control management, including SVN, Bitbucket, and/or GitHub
Experience with setup and management of monitoring tools like Nagios, Sensu, and Prometheus, or other popular tools
Hands-on experience in Linux, a scripting language, and AWS is mandatory
Troubleshoot and triage development and production issues
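In the spirit of the Python-for-automation line above, a small sketch of a health and disk check that exits non-zero so a cron job or monitoring wrapper can alert; the URL and the 90% threshold are placeholders.

```python
import shutil
import sys
import urllib.request

def healthy(url: str, timeout: float = 5.0) -> bool:
    # Returns True only if the endpoint answers 200 within the timeout.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

disk = shutil.disk_usage("/")
disk_ok = disk.used / disk.total < 0.90  # alert above 90% used

if not (healthy("http://localhost:8080/health") and disk_ok):
    sys.exit(1)  # non-zero exit -> a Nagios/Sensu-style check fails
```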
Company Overview:
Rakuten, Inc. (TSE first section: 4755) is the largest e-commerce company in Japan and the third-largest e-commerce marketplace company worldwide. Rakuten provides a variety of consumer- and business-focused services including e-commerce, e-reading, travel, banking, securities, credit cards, e-money, portal and media, online marketing, and professional sports. The company is expanding globally and currently has operations throughout Asia, Western Europe, and the Americas. Founded in 1997, Rakuten is headquartered in Tokyo, with over 17,000 employees and partner staff worldwide. Rakuten's 2018 revenues were 1,101.48 billion yen. In Japanese, Rakuten stands for 'optimism': we believe that, with the right mindset, we can make the future better by what we do today. Today, our 70+ businesses span e-commerce, digital content, communications, and FinTech, bringing the joy of discovery to more than 1.2 billion members across the world.
Website: https://www.rakuten.com/
Crunchbase: Rakuten has raised a total of $42.4M in funding over 2 rounds (https://www.crunchbase.com/organization/rakuten)
Company size: 10,001+ employees
Founded: 1997
Headquarters: Tokyo, Japan
Work location: Bangalore (M.G. Road)
Please find the job description below.
Role Description – Data Engineer for AN group (Location - India)
Key responsibilities include:
We are looking for an engineering candidate for our Autonomous Networking team. The ideal candidate must have the following abilities:
- Hands-on experience in big data computation technologies (at least one, and potentially several, of the following: Spark and Spark Streaming, Hadoop, Storm, Kafka Streams, Flink, etc.); see the streaming sketch after this list
- Familiarity with other related big data technologies, such as big data storage technologies (e.g., Phoenix/HBase, Redshift, Presto/Athena, Hive, Spark SQL, Bigtable, BigQuery, ClickHouse, etc.), messaging layers (Kafka, Kinesis, etc.), cloud and container-based deployments (Docker, Kubernetes, etc.), Scala, Akka, Socket.IO, Elasticsearch, RabbitMQ, Redis, Couchbase, Java, and Go
- Partner with product management and delivery teams to align and prioritize current and future new product development initiatives in support of our business objectives
- Work with cross functional engineering teams including QA, Platform Delivery and DevOps
- Evaluate current state solutions to identify areas to improve standards, simplify, and enhance functionality and/or transition to effective solutions to improve supportability and time to market
- Not afraid of refactoring existing systems and guiding the team through the process.
- Experience with event-driven architecture and complex event processing.
- Extensive experience building and owning large-scale distributed backend systems.
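To ground the streaming requirement in the list above, a skeletal PySpark Structured Streaming read from Kafka; the broker address and topic are placeholders, and the spark-sql-kafka connector package is assumed to be available.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("an-stream-sketch").getOrCreate()

# Read a Kafka topic as an unbounded stream of string events.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
          .option("subscribe", "network-events")             # placeholder
          .load()
          .select(col("value").cast("string").alias("event")))

query = (events.writeStream
         .format("console")   # swap for a real sink (HBase, ES, etc.)
         .outputMode("append")
         .start())
query.awaitTermination()
```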

About GlowRoad:
GlowRoad is building India's most profitable social e-commerce platform, where resellers share catalogs of products through their networks on Facebook, WhatsApp, Instagram, etc., and convert them to sales. GlowRoad is on a mission to create micro-entrepreneurs (resellers) who can set up their web-store, market their products, and track all transactions through its platform.
The GlowRoad app has ~15M downloads and 1 million+ MAUs.
GlowRoad has been funded by global VCs like Accel Partners, CDH, KIP, and Vertex Ventures, and recently raised Series C funding. We are scaling our operations across India.
GlowRoad is looking for team members passionate about building platforms for the next billion users and reimagining e-commerce for mobile-first users. We offer a fun, open, energetic, and creative environment with approachable leadership, passionate people, open communication, and high growth for employees.
Role:
● Gather, process/analyze, and report business data across departments
● Report key business data/metrics on a regular basis (daily, weekly, and monthly as relevant); see the rollup sketch after this list
● Structure concise reports to share with management
● Work closely with Senior Analysts to create data pipelines for analytical databases for the Category, Operations, Marketing, and Support teams
● Assist Senior Analysts in projects by learning new reporting tools like Power BI and advanced analytics with R
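As an illustration of the daily metric reporting described above, a toy pandas rollup; the `orders` DataFrame stands in for a SQL extract and its columns are invented.

```python
import pandas as pd

# Hypothetical order-level extract (in practice, pulled via SQL).
orders = pd.DataFrame({
    "order_date": pd.to_datetime(["2024-05-01", "2024-05-01", "2024-05-02"]),
    "category":   ["apparel", "home", "apparel"],
    "gmv":        [499.0, 1299.0, 799.0],
})

# Daily metrics per category: order count and GMV.
daily = (orders.groupby(["order_date", "category"], as_index=False)
         .agg(orders=("gmv", "size"), gmv=("gmv", "sum")))
print(daily)  # share as a CSV/Sheet with management
```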
Basic Qualifications
● Engineering graduate
● 6 to 24 months of hands-on experience with SQL, Excel, and Google Spreadsheets
● Experience in creating MIS reports/dashboards in Excel/Google Spreadsheets
● Strong in mathematics
● Ability to take full ownership of timelines and data sanity with respect to reports
● Basic verbal and written English communication



- Use data to develop machine learning models that optimize decision making in Credit Risk, Fraud, Marketing, and Operations
- Implement data pipelines, new features, and algorithms that are critical to our production models
- Create scalable strategies to deploy and execute your models
- Write well designed, testable, efficient code
- Identify valuable data sources and automate collection processes.
- Preprocess structured and unstructured data.
- Analyze large amounts of information to discover trends and patterns.
Requirements:
- 1+ years of experience in applied data science or engineering with a focus on machine learning
- Python expertise with good knowledge of machine learning libraries, tools, techniques, and frameworks (e.g., pandas, sklearn, xgboost, lightgbm; logistic regression, random forest classifiers, gradient boosting regressors, etc.); see the sketch below
- Strong quantitative and programming skills with a product-driven sensibility
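To make the library list above concrete, a minimal scikit-learn sketch of a credit-risk-style classifier on synthetic data; a real model would use actual application features and careful validation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for application features and default labels.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient boosting is one of the techniques named in the requirements.
model = GradientBoostingClassifier().fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probs))
```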