
11+ Oracle Warehouse Builder Jobs in India

Apply to 11+ Oracle Warehouse Builder Jobs on CutShort.io. Find your next job, effortlessly. Browse Oracle Warehouse Builder Jobs and apply today!

Bengaluru (Bangalore), Pune, Delhi, Gurugram, Nashik, Vizag
3 - 5 yrs
₹8L - ₹12L / yr
Oracle
Business Intelligence (BI)
PowerBI
Oracle Warehouse Builder
Informatica
+3 more
Oracle BI developer with 6+ years of experience in Oracle data warehouse design, development, and testing
Good knowledge of Informatica ETL and Oracle Analytics Server
Analytical ability to design warehouses to user requirements, mainly in the Finance and HR domains
Ability to analyze existing ETL jobs and dashboards, understand their logic, and implement enhancements as per requirements
Good verbal and written communication skills
Qualifications
Master's or Bachelor's degree in Engineering, Computer Science, or Information Technology
Additional information
Excellent verbal and written communication skills
Cognitive Clouds Software Pvt Ltd

Posted by Talent Acquisition
Bengaluru (Bangalore)
4 - 6 yrs
Best in industry
Snowflake schema
ETL
Data modeling

We are seeking a skilled Data Engineer with strong proficiency in SQL and extensive experience in data modeling. The ideal candidate will be adept at designing and implementing robust data architectures, including snowflake schemas and ER diagrams, and fluent in the building blocks of dimensional models: transaction (fact) and dimension tables, surrogate keys, primary keys, and foreign keys. (A minimal schema sketch follows the list below.)


  • Over 4 years of experience as a data engineer or in a similar role.
  • Technical expertise with data models, data mining, and segmentation techniques
  • Knowledge of programming languages (e.g. Java and Python)
  • Hands-on experience with SQL database design
  • Develop and maintain efficient SQL queries for data extraction, transformation, and loading (ETL) processes.
  • Design and implement data models, including snowflake schemas and ER diagrams, to support business requirements.
  • Collaborate with cross-functional teams to understand data needs and requirements, and translate them into scalable database solutions.
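To ground the modeling vocabulary above, here is a minimal snowflake-schema sketch in SQL. All table and column names are hypothetical illustrations, not part of the role:

```sql
-- Hypothetical snowflake schema: a transaction (fact) table referencing
-- dimension tables, which are themselves normalized into sub-dimensions.
CREATE TABLE dim_region (
    region_key  INTEGER PRIMARY KEY,   -- surrogate key
    region_name VARCHAR(100) NOT NULL
);

CREATE TABLE dim_store (
    store_key   INTEGER PRIMARY KEY,   -- surrogate key
    store_code  VARCHAR(20) NOT NULL,  -- natural/business key
    region_key  INTEGER NOT NULL
        REFERENCES dim_region (region_key)  -- "snowflaked" sub-dimension
);

CREATE TABLE fact_sales (
    sales_key   INTEGER PRIMARY KEY,   -- surrogate key
    store_key   INTEGER NOT NULL
        REFERENCES dim_store (store_key),   -- foreign key to a dimension
    sale_date   DATE NOT NULL,
    quantity    INTEGER NOT NULL,      -- additive measures
    net_amount  NUMERIC(12, 2) NOT NULL
);
```

The store dimension pointing to a separate, normalized region dimension is what distinguishes a snowflake schema from a plain star schema.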
bitsCrunch technology pvt ltd
Remote only
3 - 7 yrs
₹5L - ₹10L / yr
SQL
Python
JavaScript
NoSQL Databases
Web3js
+2 more

Job Description: 

We are looking for an experienced SQL Developer to become a valued member of our dynamic team. In the role of SQL Developer, you will be tasked with creating top-notch database solutions, fine-tuning SQL databases, and providing support for our applications and systems. Your proficiency in SQL database design, development, and optimization will be instrumental in delivering efficient and dependable solutions to fulfil our business requirements.


Responsibilities:

 ● Create high-quality database solutions that align with the organization's requirements and standards.

● Design, manage, and fine-tune SQL databases, queries, and procedures to achieve optimal performance and scalability.

● Collaborate on the development of DBT pipelines to facilitate data transformation and modelling within our data warehouse.

● Evaluate and interpret ongoing business report requirements, gaining a clear understanding of the data necessary for insightful reporting.

● Conduct research to gather the essential data for constructing relevant and valuable reporting materials for stakeholders.

● Analyse existing SQL queries to identify areas for performance enhancements, implementing optimizations for greater efficiency.

● Propose new queries to extract meaningful insights from the data and enhance reporting capabilities.

● Develop procedures and scripts to ensure smooth data migration between systems, safeguarding data integrity.

● Deliver timely management reports on a scheduled basis to support decision-making processes.

● Investigate exceptions related to asset movements to maintain accurate and dependable data records.


Requirements:

● A minimum of 3 years of hands-on experience in SQL development and administration, showcasing a strong proficiency in database management.

● A solid grasp of SQL database design, development, and optimization techniques.

● A Bachelor's degree in Computer Science, Information Technology, or a related field.

● An excellent understanding of DBT (Data Build Tool) and its practical application in data transformation and modelling (a minimal model sketch follows this list).

● Proficiency in either Python or JavaScript, as these are commonly utilized for data-related tasks.

● Familiarity with NoSQL databases and their practical application in specific scenarios.

● Demonstrated commitment and pride in your work, with a focus on contributing to the company's overall success.

● Strong problem-solving skills and the ability to collaborate effectively within a team environment.

● Excellent interpersonal and communication skills that facilitate productive collaboration with colleagues and stakeholders.

● Familiarity with Agile development methodologies and tools that promote efficient project management and teamwork.
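For flavor, a minimal sketch of the DBT work mentioned in this listing; a dbt model is just a templated SELECT that dbt materializes in the warehouse. The model, source, and column names are hypothetical and assume a raw.payments source declared in sources.yml:

```sql
-- models/staging/stg_payments.sql (hypothetical dbt model)
-- dbt compiles the Jinja and materializes the SELECT as a view.
{{ config(materialized='view') }}

select
    payment_id,
    customer_id,
    cast(amount as numeric(12, 2)) as amount,  -- normalize the amount type
    created_at
from {{ source('raw', 'payments') }}           -- declared in sources.yml
where amount is not null                       -- drop unusable rows
```

Running dbt run builds the view; dbt test exercises any schema tests defined against it.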

Career Forge

Posted by Mohammad Faiz
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 7 yrs
₹12L - ₹15L / yr
Python
Apache Spark
PySpark
Data engineering
ETL
+10 more

🚀 Exciting Opportunity: Data Engineer Position in Gurugram 🌐


Hello 


We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!


Position: Data Engineer  

Location: Gurugram (Gurgaon)  

Experience: 5+ years 


Key Skills:

- Python

- Spark, PySpark

- Data Governance

- Cloud (AWS/Azure/GCP)


Main Responsibilities:

- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.

- Implement ETL processes for telemetry-based and stationary test data.

- Support in defining data governance, including data lifecycle management.

- Develop large-scale data processing engines and real-time search and analytics based on time series data.

- Ensure technical, methodological, and quality aspects.

- Support CI/CD processes.

- Foster know-how development and transfer, and the continuous improvement of leading technologies within Data Engineering.

- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.


Qualification Requirements:

- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.

- Proficiency in Python and the PyData stack (Pandas/NumPy).

- Experience in high-level programming languages (C#/C++/Java).

- Familiarity with scalable processing environments like Dask (or Spark).

- Proficient in Linux and scripting languages (Bash Scripts).

- Experience in containerization and orchestration of containerized services (Kubernetes).

- Education in database technologies (SQL/OLAP and NoSQL).

- Interest in Big Data storage technologies (Elastic, ClickHouse).

- Familiarity with Cloud technologies (Azure, AWS, GCP).

- Fluent English communication skills (speaking and writing).

- Ability to work constructively with a global team.

- Willingness to travel for business trips during development projects.


Preferable:

- Working knowledge of vehicle architectures, communication, and components.

- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).

- Experience in time-series processing.


How to Apply:

Interested candidates, please share your updated CV/resume with me.


Thank you for considering this exciting opportunity.

Gurugram
5 - 8 yrs
₹15L - ₹23L / yr
SQL
Python
Amazon Web Services (AWS)
ETL

About the company: Our client is an agency of the world's largest media investment company, which is part of WPP. It is a global digital transformation agency with 1,200 employees across 21 nations. Its team of experts supports clients in programmatic, social, paid search, analytics, technology, organic search, affiliate marketing, and e-commerce, as well as across traditional channels.


Job Location: Gurgaon


Reporting: This role reports to the Technology Architect.


Key Accountabilities: This role is for a Technical Specialist who understands and solves complex functional, technical, and architectural problems involving data, understands end-to-end data lifecycle capabilities and technologies, and provides architectural guidance in the selection, articulation, and use of the chosen solutions.


The successful candidate will be expected to interact with all levels of the business and technical community, seeking strong engagement with all stakeholders. The candidate should be an expert in functional data architecture and design, including strong data modelling, and should be self-disciplined, with a keenness to master, suggest, and work with different technologies and toolsets. The role also involves intensive interaction with the business and other technology functions, so good communication skills and the ability to work under pressure are essential.


What you’ll bring:


● 5-7 years of strong experience working with SQL, Python, ETL development, and AWS.

● Strong experience in writing complex SQL queries (an illustrative query follows this list).

● Good communication skills.

● Work with senior staff and business leaders to identify requirements for data/information across the core business domains.

● Work with Project Managers, Business Analysts, and other subject-matter experts to identify and understand requirements.

● Good experience working with any BI tool, such as Tableau or Power BI.

● Familiarity with various cloud technologies and their offerings within the data specialization and data warehousing.

● Snowflake is good to have.
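As a hedged illustration of the "complex SQL" such a role involves (table and column names are invented; DATE_TRUNC as in Postgres, Redshift, or Snowflake), the query below picks each customer's highest-spend month with a CTE and a window function:

```sql
-- Rank each customer's monthly spend and keep the top month per customer.
WITH monthly_spend AS (
    SELECT
        customer_id,
        DATE_TRUNC('month', order_date) AS month,
        SUM(order_amount)               AS spend
    FROM orders
    GROUP BY customer_id, DATE_TRUNC('month', order_date)
)
SELECT customer_id, month, spend
FROM (
    SELECT
        ms.*,
        ROW_NUMBER() OVER (PARTITION BY customer_id
                           ORDER BY spend DESC) AS rn
    FROM monthly_spend ms
) ranked
WHERE rn = 1;  -- one row per customer: their best month
```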


Minimum qualifications:

● B.Tech./MCA preferred.

● Excellent hands-on experience (5 years) in Big Data, ETL development, and data processing.


Regards

Team Merito

Tata Digital Pvt Ltd
Mumbai, Bengaluru (Bangalore)
10 - 15 yrs
₹20L - ₹37L / yr
Service Integration and Management
Environment Specialist
ETL
Test cases
  • Implementing Environment solutions for projects in a dynamic corporate environment
  • Communicating and collaborating with project and technical teams on Environment requirements, delivery and support
  • Delivering and Maintaining Environment Management Plans, Bookings, Access Details and Schedules for Projects
  • Working with Environment Team on Technical Environment Delivery Solutions
  • Troubleshooting, managing and tracking Environment Incidents & Service Requests in conjunction with technical teams and external partners via the service management tool
  • Leadership support in the North Sydney office
  • Mentoring, guiding, and leading other team members
  • Creation of new test environments
  • Provisioning infrastructure and platform
  • Test environment configuration (module, system, sub-module)
  • Test data provisioning (privatization, traceability, ETL, segregation)
  • Endpoint integration
  • Monitoring the test environment
  • Updating/deleting outdated test-environments and their details
  • Investigation of test environment issues and, at times, coordination until resolution
Magic9 Media and Consumer Knowledge Pvt. Ltd.
Mumbai
3 - 5 yrs
₹7L - ₹12L / yr
ETL
SQL
Python
Statistical Analysis
Machine Learning (ML)
+4 more

Job Description

This requirement is to service our client, a leading big data technology company that measures what viewers consume across platforms to enable marketers to make better advertising decisions. We are seeking a Senior Data Operations Analyst to mine large-scale datasets for our client. Your work will have a direct impact on driving business strategies for prominent industry leaders. Self-motivation and strong communication skills are both must-haves, and the ability to work in a fast-paced environment is desired.


Problems being solved by our client: 

Measure consumer usage of devices linked to the internet and home networks, including computers, mobile phones, tablets, streaming sticks, smart TVs, thermostats, and other appliances. There are more screens and other connected devices in homes than ever before, yet there have been major gaps in understanding how consumers interact with this technology. Our client uses a measurement technology to unravel the dynamics of consumers' interactions with multiple devices.


Duties and responsibilities:

  • The successful candidate will contribute to the development of novel audience measurement and demographic inference solutions. 
  • Develop, implement, and support statistical or machine learning methodologies and processes. 
  • Build and test new features and concepts and integrate them into the production process
  • Participate in ongoing research and evaluation of new technologies
  • Exercise your experience in the development lifecycle through analysis, design, development, testing and deployment of this system
  • Collaborate with teams in Software Engineering, Operations, and Product Management to deliver timely and quality data. You will be the knowledge expert, delivering quality data to our clients

Qualifications:

  • 3-5 years of relevant work experience in the areas outlined below
  • Experience in extracting data using SQL from large databases
  • Experience in writing complex ETL processes and frameworks for analytics and data management. Must have experience in working on ETL tools.
  • Master’s degree or PhD in Statistics, Data Science, Economics, Operations Research, Computer Science, or a similar degree with a focus on statistical methods. A Bachelor’s degree in the same fields with significant, demonstrated professional research experience will also be considered. 
  • Programming experience in scientific computing language (R, Python, Julia) and the ability to interact with relational data (SQL, Apache Pig, SparkSQL). General purpose programming (Python, Scala, Java) and familiarity with Hadoop is a plus.  
  • Excellent verbal and written communication skills. 
  • Experience with TV or digital audience measurement or market research data is a plus. 
  • Familiarity with systems analysis or systems thinking is a plus. 
  • Must be comfortable with analyzing complex, high-volume and high-dimension data from varying sources
  • Excellent verbal, written and computer communication skills
  • Ability to engage with Senior Leaders across all functional departments
  • Ability to take on new responsibilities and adapt to changes

 

EASEBUZZ

Posted by Amala Baby
Pune
2 - 4 yrs
₹2L - ₹20L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+12 more

Company Profile:

 

Easebuzz is a payment solutions company (a fintech organisation) that enables online merchants to accept, process, and disburse payments through developer-friendly APIs. We are focused on building plug-and-play products, including the payment infrastructure, to solve complete business problems. Definitely a wonderful place where all the action related to payments, lending, subscriptions, and eKYC is happening at the same time.

 

We have been consistently profitable and are constantly developing new, innovative products; as a result, we have grown 4x over the past year alone. We are well capitalised and closed a $4M fundraise in March 2021 from prominent VC firms and angel investors. The company is based out of Pune and has a total strength of 180 employees. Easebuzz's corporate culture is tied to the vision of building a workplace that breeds open communication and minimal bureaucracy. An equal-opportunity employer, we welcome and encourage diversity in the workplace. One thing you can be sure of is that you will be surrounded by colleagues who are committed to helping each other grow.

 

Easebuzz Pvt. Ltd. has a presence in Pune, Bangalore, and Gurugram.

 


Salary: As per company standards.

 

Designation: Data Engineer

 

Location: Pune

 

Experience with ETL, Data Modeling, and Data Architecture

Design, build, and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services (Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue) in combination with third-party tools.

Experience with AWS cloud data lake for development of real-time or near real-time use cases

Experience with messaging systems such as Kafka/Kinesis for real time data ingestion and processing

Build data pipeline frameworks to automate high-volume and real-time data delivery

Create prototypes and proof-of-concepts for iterative development.

Experience with NoSQL databases such as DynamoDB and MongoDB

Create and maintain optimal data pipeline architecture.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.


Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.

Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.

Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.

Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.

Evangelize a very high standard of quality, reliability and performance for data models and algorithms that can be streamlined into the engineering and sciences workflow

Build and enhance data pipeline architecture by designing and implementing data ingestion solutions.
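As one concrete flavor of the AWS "big data" loading described above, here is a hedged sketch under assumed names (not Easebuzz's actual stack): a Redshift COPY that bulk-loads JSON events from S3, where the bucket, IAM role, and target table are all hypothetical:

```sql
-- Hypothetical bulk load: Redshift reads JSON event files from S3 in
-- parallel; the IAM role must grant Redshift read access to the bucket.
COPY analytics.events
FROM 's3://example-bucket/events/2024/01/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS JSON 'auto'   -- map JSON attributes to columns by name
TIMEFORMAT 'auto';      -- let Redshift infer timestamp formats
```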

 

Employment Type

Full-time

 

UAE Client
Remote only
5 - 10 yrs
₹10L - ₹18L / yr
Informatica
Informatica PowerCenter
SQL
PL/SQL

Informatica PowerCenter (9.x, 10.2): minimum 2+ years of experience

SQL/PL/SQL: understanding of SQL procedures and the ability to convert procedures into Informatica mappings (a hypothetical sketch of such a conversion appears below)

Good to have: knowledge of Windows batch scripting is an advantage.
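To illustrate the procedure-to-mapping conversion named above, a hypothetical sketch (not a project artifact): a row-by-row PL/SQL load, followed by the equivalent set-based statement that an Informatica mapping (Source Qualifier, then an Expression transformation applying UPPER, then the target) would express:

```sql
-- Hypothetical row-by-row PL/SQL procedure...
CREATE OR REPLACE PROCEDURE load_customer_dim IS
BEGIN
    FOR rec IN (SELECT cust_id, UPPER(cust_name) AS cust_name
                FROM stg_customer) LOOP
        INSERT INTO dim_customer (cust_id, cust_name)
        VALUES (rec.cust_id, rec.cust_name);
    END LOOP;
END;
/

-- ...and the equivalent set-based logic a mapping would implement:
INSERT INTO dim_customer (cust_id, cust_name)
SELECT cust_id, UPPER(cust_name)
FROM stg_customer;
```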

Bengaluru (Bangalore)
5 - 8 yrs
₹15L - ₹20L / yr
TIBCO Spotfire
TIBCO
DXP
Spotfire
Dashboard
+2 more
  • 4+ years of extensive experience in TIBCO Spotfire dashboard development is a must
  • Design and create data visualizations in TIBCO Spotfire
  • Proven experience in delivering Spotfire solutions that advance business goals and needs
  • Detailed knowledge of TIBCO Spotfire report-developer configuration
  • Experience creating all chart types available in Spotfire (scatter, line, bar, combo, pie, etc.) and manipulating every property associated with a visualization (trellis, color, shape, size, etc.)
  • Experience writing efficient SQL queries and views in relational databases such as Oracle, SQL Server, Postgres, and BigQuery (optional)
  • Ability to incorporate multiple data sources into one Spotfire DXP and link that information via data table relations
  • Experience with Spotfire administrative tasks, load balancing, installation/configuration of servers and clients, and upgrades and patches would be a plus
  • Strong background in analytical visualizations and building executive dashboards
  • In-depth knowledge and understanding of BI and data warehouse concepts
Grand Hyper

Posted by Rahul Malani
Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹20L / yr
Data Warehouse (DWH)
Apache Hive
ETL
DWH Cloud
Hadoop
+3 more
The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include the development of Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key Responsibilities:
- Create a GRAND Data Lake and Warehouse which pools all the data from different regions and stores of GRAND in GCC
- Ensure source data quality measurement, enrichment, and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure

Skills Needed:
- Very strong in SQL; demonstrated experience with RDBMSs, Unix shell scripting preferred (e.g., SQL, Postgres, MongoDB)
- Experience with UNIX and comfort working with the shell (bash or KRON preferred)
- Good understanding of data warehousing concepts and Big Data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments
- Working with data delivery teams to set up new Hadoop users, including setting up Linux users and setting up and testing HDFS, Hive, Pig, and MapReduce access for the new users
- Cluster maintenance, including creation and removal of nodes, using tools like Ganglia, Nagios, and Cloudera Manager Enterprise
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screening Hadoop cluster job performance and capacity planning
- Monitoring Hadoop cluster connectivity and security
- File system management and monitoring; HDFS support and maintenance
- Collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades when required
- Defining, developing, documenting, and maintaining Hive-based ETL mappings and scripts (an illustrative sketch follows)
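A hedged sketch of the Hive-based ETL mapping work listed above; every database, table, and column name is invented for illustration:

```sql
-- Hypothetical HiveQL ETL step: aggregate raw POS transactions into a
-- date-partitioned warehouse table using dynamic partitioning.
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

INSERT OVERWRITE TABLE dwh.sales_daily PARTITION (sale_date)
SELECT
    store_id,
    product_id,
    SUM(amount) AS total_amount,
    COUNT(*)    AS txn_count,
    sale_date                -- dynamic partition column must come last
FROM raw.pos_transactions
GROUP BY store_id, product_id, sale_date;
```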