Oracle Designer Jobs in Pune


Apply to 11+ Oracle Designer Jobs in Pune on CutShort.io. Explore the latest Oracle Designer Job opportunities across top companies like Google, Amazon & Adobe.

With a global provider of Business Process Management.


Agency job
via Jobdost by Mamatha A
Bengaluru (Bangalore), Mumbai, Gurugram, Nashik, Pune, Visakhapatnam, Chennai, Noida
3 - 5 yrs
₹8L - ₹12L / yr
Oracle Analytics
OAS
OAC
Oracle OAS
Oracle
+8 more

Oracle OAS Developer

 

 

Senior OAS/OAC (Oracle Analytics) designer and developer with 3+ years of experience. Has worked on the new Oracle Analytics platform, used its latest features and custom plug-ins, and designed new plug-ins using Java. Has a good understanding of the various graph types and data points, and their appropriate use for displaying financial data. Has worked on performance tuning and built out complex data-security requirements.

Qualifications



Bachelor's degree in Engineering/Computer Science.

Additional information

Knowledge of Financial and HR dashboards.

 

Read more
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Pune, Hyderabad, Ahmedabad, Chennai
3 - 7 yrs
₹8L - ₹15L / yr
AWS Lambda
Amazon S3
Amazon VPC
Amazon EC2
Amazon Redshift
+3 more

Technical Skills:


  • Ability to understand and translate business requirements into design.
  • Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
  • Experience in creating ETL jobs using Python/PySpark.
  • Proficiency in creating AWS Lambda functions for event-based jobs.
  • Knowledge of automating ETL processes using AWS Step Functions.
  • Competence in building data warehouses and loading data into them.


Responsibilities:


  • Understand business requirements and translate them into design.
  • Assess AWS infrastructure needs for development work.
  • Develop ETL jobs using Python/PySpark to meet requirements.
  • Implement AWS Lambda for event-based tasks.
  • Automate ETL processes using AWS Step Functions.
  • Build data warehouses and manage data loading.
  • Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
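The "AWS Lambda for event-based jobs" requirement above can be sketched as a minimal handler; the bucket and object names are hypothetical, and a real deployment would attach this handler to an S3 trigger feeding a downstream ETL step:

```python
import json

def handler(event, context):
    # Extract bucket/key pairs from an S3 put event so a downstream
    # ETL step (e.g. a Step Functions task) knows which objects to load.
    objects = [
        {"bucket": r["s3"]["bucket"]["name"], "key": r["s3"]["object"]["key"]}
        for r in event.get("Records", [])
    ]
    return {"statusCode": 200, "body": json.dumps(objects)}

# Hypothetical S3 event payload for local testing.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "raw-data"},
                "object": {"key": "2024/01/sales.csv"}}}
    ]
}
result = handler(sample_event, None)
```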
Read more
Arahas Technologies
Nidhi Shivane
Posted by Nidhi Shivane
Pune
3 - 8 yrs
₹10L - ₹20L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+3 more


Role Description

This is a full-time hybrid role as a GCP Data Engineer. You will be responsible for managing large sets of structured and unstructured data and developing processes to convert data into insights, information, and knowledge.

Skill Name: GCP Data Engineer

Experience: 7-10 years

Notice Period: 0-15 days

Location: Pune

If you have a passion for data engineering and possess the following skills, we would love to hear from you:


🔹 7 to 10 years of experience working on Software Development Life Cycle (SDLC)

🔹 At least 4 years of experience on the Google Cloud Platform, with a focus on BigQuery

🔹 Proficiency in Java and Python, along with experience in Google Cloud SDK & API Scripting

🔹 Experience in the Finance/Revenue domain would be considered an added advantage

🔹 Familiarity with GCP Migration activities and the DBT Tool would also be beneficial


You will play a crucial role in developing and maintaining our data infrastructure on the Google Cloud platform.

Your expertise in SDLC, BigQuery, Java, Python, and Google Cloud SDK & API scripting will be instrumental in ensuring the smooth operation of our data systems.


Join our dynamic team and contribute to our mission of harnessing the power of data to make informed business decisions.
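As a small sketch of the BigQuery work described above: assembling a load-job configuration as a plain dict. The project, dataset, and bucket names are invented for illustration; in practice these fields would be handed to the google-cloud-bigquery client rather than built by hand:

```python
def build_bq_load_config(project: str, dataset: str, table: str,
                         source_uri: str) -> dict:
    # Assemble a BigQuery load-job configuration as a plain dict; a real
    # job would pass equivalent settings to the google-cloud-bigquery
    # client when loading files from Cloud Storage.
    return {
        "destinationTable": f"{project}.{dataset}.{table}",
        "sourceUris": [source_uri],
        "sourceFormat": "CSV",
        "writeDisposition": "WRITE_APPEND",
    }

# Hypothetical names, purely illustrative.
config = build_bq_load_config("finance-proj", "revenue", "daily_sales",
                              "gs://finance-bucket/daily/*.csv")
```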

Read more
AdElement


2 recruiters
Sachin Bhatevara
Posted by Sachin Bhatevara
Pune
3 - 7 yrs
₹25L - ₹40L / yr
skill iconMachine Learning (ML)
skill iconData Science
Artificial Intelligence (AI)
Neural networks
PyTorch
+2 more

Data-driven decision-making is core to advertising technology at AdElement. We are looking for sharp, disciplined, and highly quantitative machine learning/artificial intelligence engineers with big data experience and a passion for digital marketing to help drive informed decision-making. You will work with top talent and cutting-edge technology and have a unique opportunity to turn your insights into products influencing billions. The potential candidate will have an extensive background in distributed training frameworks, experience deploying related machine learning models end to end, and some experience in data-driven decision-making around machine learning infrastructure enhancement. This is your chance to leave your legacy and be part of a highly successful and growing company.


Required Skills

- 3+ years of industry experience with Java/ Python in a programming intensive role

- 3+ years of experience with one or more of the following machine learning topics: classification, clustering, optimization, recommendation system, graph mining, deep learning

- 3+ years of industry experience with distributed computing frameworks such as Hadoop/Spark, Kubernetes ecosystem, etc

- 3+ years of industry experience with popular deep learning frameworks such as Spark MLlib, Keras, Tensorflow, PyTorch, etc

- 3+ years of industry experience with major cloud computing services

- An effective communicator with the ability to explain technical concepts to a non-technical audience

- (Preferred) Prior experience with ads product development (e.g., DSP/ad-exchange/SSP)

- Able to lead a small team of AI/ML Engineers to achieve business objectives



Responsibilities

- Collaborate across multiple teams - Data Science, Operations & Engineering on unique machine learning system challenges at scale

- Leverage distributed training systems to build scalable machine learning pipelines including ETL, model training and deployments in Real-Time Bidding space. 

- Design and implement solutions to optimize distributed training execution in terms of model hyperparameter optimization, model training/inference latency and system-level bottlenecks  

- Research state-of-the-art machine learning infrastructures to improve data healthiness, model quality and state management during the lifecycle of ML models refresh.

- Optimize integration between popular machine learning libraries and cloud ML and data processing frameworks. 

- Build Deep Learning models and algorithms with optimal parallelism and performance on CPUs/ GPUs.

- Work with top management on defining team goals and objectives.
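The hyperparameter-optimization responsibility above can be illustrated with a minimal grid search; the parameter names and the toy score function are hypothetical stand-ins for a real training-and-validation run:

```python
import itertools

def grid_search(param_grid, score_fn):
    # Exhaustively score every combination in the grid and keep the best.
    best_params, best_score = None, float("-inf")
    keys = sorted(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective: pretend validation accuracy peaks at lr=0.01, batch=64.
def toy_score(p):
    return -abs(p["lr"] - 0.01) - abs(p["batch"] - 64) / 1000

best, _ = grid_search({"lr": [0.1, 0.01, 0.001],
                       "batch": [32, 64, 128]}, toy_score)
```

In a real pipeline the `score_fn` would launch a distributed training job and return a validation metric; the search loop itself would typically be replaced by a smarter strategy (random search, Bayesian optimization).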


Education

- MTech or Ph.D. in Computer Science, Software Engineering, Mathematics or related fields

Read more
Tredence
Rohit S
Posted by Rohit S
Chennai, Pune, Bengaluru (Bangalore), Gurugram
11 - 16 yrs
₹20L - ₹32L / yr
Data Warehouse (DWH)
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
Data engineering
Data migration
+1 more
• Engages with leadership of Tredence's clients to identify critical business problems, define the need for data engineering solutions, and build strategy and roadmap.
• Possesses wide exposure to the complete lifecycle of data, from creation to consumption.
• Has in the past built repeatable tools/data models to solve specific business problems.
• Should have hands-on experience of having worked on projects (either as a consultant or within a company) that needed them to:
o Provide consultation to senior client personnel.
o Implement and enhance data warehouses or data lakes.
o Work with business teams, or as part of the team, that implemented process re-engineering driven by data analytics/insights.
• Should have a deep appreciation of how data can be used in decision-making.
• Should have a perspective on newer ways of solving business problems, e.g. external data, innovative techniques, newer technology.
• Must have a solution-creation mindset, with the ability to design and enhance scalable data platforms to address the business need.
• Working experience with data engineering tools for one or more cloud platforms: Snowflake, AWS/Azure/GCP.
• Engages with technology teams from Tredence and clients to create last-mile connectivity for the solutions.
o Should have experience working with technology teams.
• Demonstrated ability in thought leadership: articles/white papers/interviews.
Mandatory Skills: Program Management, Data Warehouse, Data Lake, Analytics, Cloud Platform
Read more
Chennai, Bengaluru (Bangalore), Pune, Hyderabad
4 - 10 yrs
₹10L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
Position Overview:
Come work as a Senior Big Data Engineer at a growing company that offers great benefits and opportunities to advance and learn alongside accomplished leaders. Innova Solutions is a global information technology company combining global reach with a local touch. Headquartered in Santa Clara, California, Innova employs 17,000 technology professionals worldwide, with field offices in New York, Chennai, Bangalore, Hyderabad, Pune, and Taipei. From cloud transformation to data services to managed IT operations, Innova provides a broad array of proven, tested, cost-effective, enterprise-scale technologies and services that leverage the latest technology and delivery models to deliver high value in the cloud, in the data center, and across complex interconnected environments.

What you will be doing:

We are looking for a Spark developer who knows how to fully exploit the potential of our Spark cluster. You will clean, transform, and analyze vast amounts of raw data from various systems using Spark to provide ready-to-use data to our feature developers and business analysts. This involves both ad-hoc requests as well as data pipelines that are embedded in our production environment.

Requirements:

  • The candidate should be well-versed in the Scala programming language.
  • Should have experience with Spark architecture and Spark internals.
  • Experience in AWS is preferable.
  • Should have experience in the full lifecycle of at least one big data application.
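The clean/transform work described above can be sketched in plain Python as a stand-in (the listing targets Scala, and the real pipeline would operate on Spark DataFrames or RDDs, but the shape of the step is the same):

```python
# Pure-Python stand-in for a Spark clean/transform step: drop malformed
# rows, cast fields, and aggregate, mirroring a DataFrame pipeline.
raw_rows = [
    {"user": "a", "amount": "10.5"},
    {"user": "b", "amount": "not-a-number"},  # malformed, dropped
    {"user": "a", "amount": "4.5"},
]

def clean(rows):
    out = []
    for row in rows:
        try:
            out.append({"user": row["user"], "amount": float(row["amount"])})
        except ValueError:
            continue  # drop rows that fail the cast, as a Spark filter would
    return out

def total_by_user(rows):
    # Equivalent of groupBy("user").sum("amount") in Spark.
    totals = {}
    for row in rows:
        totals[row["user"]] = totals.get(row["user"], 0.0) + row["amount"]
    return totals

totals = total_by_user(clean(raw_rows))
```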
 
Interested candidates, share your updated profile
 
Current CTC
Expected CTC
Notice Period or LWD
Present & Preferred Location
Read more
InnovAccer


3 recruiters
Jyoti Kaushik
Posted by Jyoti Kaushik
Noida, Bengaluru (Bangalore), Pune, Hyderabad
4 - 7 yrs
₹4L - ₹16L / yr
ETL
SQL
Data Warehouse (DWH)
Informatica
Datawarehousing
+2 more

We are looking for a Senior Data Engineer to join the Customer Innovation team, who will be responsible for acquiring, transforming, and integrating customer data onto our Data Activation Platform from customers’ clinical, claims, and other data sources. You will work closely with customers to build data and analytics solutions to support their business needs, and be the engine that powers the partnership that we build with them by delivering high-fidelity data assets.

In this role, you will work closely with our Product Managers, Data Scientists, and Software Engineers to build the solution architecture that will support customer objectives. You'll work with some of the brightest minds in the industry, work with one of the richest healthcare data sets in the world, use cutting-edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate is someone that

  • Has healthcare experience and is passionate about helping heal people,
  • Loves working with data,
  • Has an obsessive focus on data quality,
  • Is comfortable with ambiguity and making decisions based on available data and reasonable assumptions,
  • Has strong data interrogation and analysis skills,
  • Defaults to written communication and delivers clean documentation, and,
  • Enjoys working with customers and problem solving for them.

A day in the life at Innovaccer:

  • Define the end-to-end solution architecture for projects by mapping customers’ business and technical requirements against the suite of Innovaccer products and Solutions.
  • Measure and communicate impact to our customers.
  • Enable customers to activate data themselves using SQL, BI tools, or APIs to answer their questions at speed.

What You Need:

  • 4+ years of experience in a Data Engineering role, and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
  • 4+ years of experience working with relational databases like Snowflake, Redshift, or Postgres.
  • Intermediate to advanced level SQL programming skills.
  • Data Analytics and Visualization (using tools like PowerBI)
  • The ability to engage with both the business and technical teams of a client - to document and explain technical problems or concepts in a clear and concise way.
  • Ability to work in a fast-paced and agile environment.
  • Easily adapt and learn new things whether it’s a new library, framework, process, or visual design concept.
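The SQL skill level expected above can be illustrated with a small self-contained example using Python's built-in sqlite3. The table and column names are invented for illustration; the listing's actual warehouses are Snowflake, Redshift, or Postgres, where the same query shape applies:

```python
import sqlite3

# In-memory database standing in for a warehouse table of patient claims.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE claims (patient_id TEXT, amount REAL, status TEXT);
    INSERT INTO claims VALUES
        ('p1', 120.0, 'paid'),
        ('p1',  80.0, 'denied'),
        ('p2', 200.0, 'paid');
""")

# Typical analytics query: paid totals per patient, highest first.
rows = conn.execute("""
    SELECT patient_id, SUM(amount) AS paid_total
    FROM claims
    WHERE status = 'paid'
    GROUP BY patient_id
    ORDER BY paid_total DESC
""").fetchall()
```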

What we offer:

  • Industry certifications: We want you to be a subject matter expert in what you do. So, whether it’s our product or our domain, we’ll help you dive in and get certified.
  • Quarterly rewards and recognition programs: We foster learning and encourage people to take risks. We recognize and reward your hard work.
  • Health benefits: We cover health insurance for you and your loved ones.
  • Sabbatical policy: We encourage people to take time off and rejuvenate, learn new skills, and pursue their interests so they can generate new ideas with Innovaccer.
  • Pet-friendly office and open floor plan: No boring cubicles.
Read more
IT Company

Agency job
via Volibits by Manasi D
Pune, Mumbai, Bengaluru (Bangalore)
5 - 8 yrs
₹1L - ₹10L / yr
JDE
JDE DSI
JDE DSI (integration tool), 4-8 yrs.
JD Edwards Data System Integration: data-source agnostic, modeling structure.
Read more
Intergral Add Science


Agency job
via Vipsa Talent Solutions by Prashma S R
Pune
5 - 8 yrs
₹9L - ₹25L / yr
skill iconJava
Hadoop
Apache Spark
skill iconScala
skill iconPython
+3 more
  • 6+ years of recent hands-on Java development
  • Developing data pipelines in AWS or Google Cloud
  • Java, Python, JavaScript programming languages
  • Great understanding of designing for performance, scalability, and reliability of data-intensive applications
  • Hadoop MapReduce, Spark, Pig. Understanding of database fundamentals and advanced SQL knowledge.
  • In-depth understanding of object-oriented programming concepts and design patterns
  • Ability to communicate clearly to technical and non-technical audiences, verbally and in writing
  • Understanding of full software development life cycle, agile development and continuous integration
  • Experience in Agile methodologies including Scrum and Kanban
Read more
comScore


1 recruiter
Jordy Licht
Posted by Jordy Licht
Pune
1 - 2 yrs
₹3L - ₹4L / yr
Tabulate
Survey design
Scripting
You will be part of a dynamic group tasked to deliver tabulated reports in a timely manner. Your duties will include reviewing all elements needed to execute a successful tabulations project: questionnaire, data file, Table & Banner guide, and other relevant documents. You will be expected to develop custom syntax files of average complexity using UNCLE software, apply standard weighting algorithms to balance data quotas, and generate data calculations. Your role will involve performing exhaustive quality-control checks to ensure all tabulated data reported is 100% accurate; this will entail logic checks, consulting with senior members, and cross-referencing syntax files.

What You'll Do:
  • Complete tabulation projects from beginning to end
  • Use the UNCLE command language to build syntax to accommodate specifications provided by researchers
  • Perform basic data calculations using UNCLE syntax
  • Provide support to the Survey Operations Tabs team as needed
  • Perform detailed quality checks using SPSS, online reporting, and UNCLE
  • Concurrently handle several projects to accommodate deadlines

What You'll Need:
  • A Bachelor's degree in Marketing, Business, Social Sciences, or a relevant quantitative/technical field
  • 2-3 years of experience, preferably as a tabulator, in market research
  • Solid programming logic and a full understanding of syntax writing to develop custom tables for market research
  • Ability to multitask and work independently with minimal supervision
  • Excellent written/oral communication skills; able to document processes and logs as needed
  • Able to work as a member of a cohesive team

What Would Be Good to Have:
  • Experience with the UNCLE tabulation package
  • Experience having worked independently on large projects with a medium degree of complexity
  • Able to write macros to automate repetitive functions
  • Solid understanding of data in multiple formats and how data is processed and presented in tabular format
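The cross-tabulation at the heart of this role can be sketched in plain Python (the real work uses the UNCLE package; the banner groups and answers below are invented survey data):

```python
from collections import Counter

# Each response: (banner group, answer). Hypothetical survey data.
responses = [
    ("18-34", "yes"), ("18-34", "no"), ("35-54", "yes"),
    ("18-34", "yes"), ("35-54", "no"), ("35-54", "yes"),
]

def crosstab(pairs):
    # Count answers within each banner column, as a tabulation table would.
    table = {}
    for group, answer in pairs:
        table.setdefault(group, Counter())[answer] += 1
    return table

table = crosstab(responses)
```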
Read more
Saama Technologies


6 recruiters
Sandeep Chaudhary
Posted by Sandeep Chaudhary
Pune
7 - 11 yrs
₹6L - ₹22L / yr
skill iconData Analytics
skill iconData Science
Product Management
skill iconMachine Learning (ML)
skill iconPython
+1 more
Description

Does solving complex business problems and real-world challenges interest you? Do you enjoy seeing the impact your contributions make on a daily basis? Are you passionate about using data analytics to provide game-changing solutions to Global 2000 clients? Do you thrive in a dynamic work environment that constantly pushes you to be the best you can be and more? Are you ready to work with smart colleagues who drive for excellence in everything they do? If you possess a solutions mindset, strong analytical skills, and commitment to be part of a tremendous journey, come join our growing, global team. See what Saama can do for your career and for your journey.

Position: Data Scientist (2255)
Location: Hinjewadi Phase 1, Pune
Type: Permanent Full time

Responsibilities:
  • Work on small and large data sets of structured, semi-structured, and unstructured data to discover hidden knowledge about the client's business, and develop methods to leverage that knowledge for their business.
  • Identify and solve business challenges working closely with cross-functional teams, such as Delivery, Business Consulting, Engineering, and Product Management.
  • Develop prescriptive and predictive statistical, behavioral, or other models via machine learning and/or traditional statistical modeling techniques, and understand which type of model applies best in a given business situation.
  • Drive the collection of new data and the refinement of existing data sources.
  • Analyze and interpret the results of product experiments.
  • Provide input for engineering and product teams as they develop and support our internal data platform to support ongoing statistical analyses.

Requirements: Candidates should demonstrate the following expertise:
  • Must have direct hands-on experience, 7 years, building complex systems using any statistical programming language (R/Python/SAS).
  • Must have fundamental knowledge of inferential statistics.
  • Should have worked on predictive modelling. Experience should include the following:
    o File I/O, data harmonization, data exploration
    o Multi-dimensional array processing
    o Simulation & optimization techniques
    o Machine learning techniques (supervised, unsupervised)
    o Artificial intelligence and deep learning
    o Natural language processing
    o Model ensembling techniques
    o Documenting reproducible research
    o Building interactive applications to demonstrate data science use cases
  • Prior experience in the Healthcare domain is a plus.
  • Experience using Big Data is a plus; exposure to SPARK is desirable.
  • Should have excellent analytical and problem-solving ability, and be able to grasp new concepts quickly.
  • Should be well familiar with Agile project management methodology.
  • Should have experience managing multiple simultaneous projects.
  • Should have played a team lead role.
  • Should have excellent written and verbal communication skills, and be a team player with an open mind.

Impact on the business: Plays an important role in making Saama's solutions game changers for our strategic partners by using data science to solve core, complex business challenges.

Key relationships:
  • Sales & pre-sales
  • Product management
  • Engineering
  • Client organization: account management & delivery

Saama Competencies:
  • INTEGRITY: we do the right things.
  • INNOVATION: we change the game.
  • TRANSPARENCY: we communicate openly.
  • COLLABORATION: we work as one team.
  • PROBLEM-SOLVING: we solve core, complex business challenges.
  • ENJOY & CELEBRATE: we have fun.

Competencies:
  • Self-starter who gets results with minimal support and direction in a fast-paced environment.
  • Takes initiative; challenges the status quo to drive change.
  • Learns quickly; takes smart risks to experiment and learn.
  • Works well with others; builds trust and maintains credibility.
  • Planful: identifies and confirms key requirements in dynamic environments; anticipates tasks and contingencies.
  • Communicates effectively: productive verbal and written communication with clients and all key stakeholders.
  • Stays the course despite challenges and setbacks; works well under pressure.
  • Strong analytical skills; able to apply inductive and deductive thinking to generate solutions for complex problems.
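The predictive-modelling requirement above can be illustrated with the smallest possible model: a one-variable least-squares fit in pure Python. The data points are invented; a real engagement would use R/Python/SAS statistical tooling on client data:

```python
def fit_line(xs, ys):
    # Closed-form simple linear regression: the slope and intercept
    # that minimize squared error over the points.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Invented data lying exactly on y = 2x + 1.
xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]
slope, intercept = fit_line(xs, ys)
```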
Read more