ODI Developer

at Technovert

Posted by Dushyant Waghmare
Hyderabad
5 - 8 yrs
₹12.5L - ₹24L / yr
Full time
Skills
Data Warehouse (DWH)
Informatica
ETL

Role: ODI Developer

Location: Hyderabad (Initially remote)

Experience: 5-8 Years

 

Technovert is not your typical IT services firm. We have two successful products to our credit, generating $2M+ in licensing/SaaS revenues, which is rare in the industry.

We are obsessed with technology and the infinite possibilities it creates for making this world a better place. Our clients find us at our best when we are challenged with their toughest problems, and we love chasing those problems: they thrill us and motivate us to deliver more. Our global delivery model has earned us the trust and reputation of being a partner of choice.

We have a strong heritage built on great people who put customers first and deliver exceptional results with no surprises, every time. We partner with you to understand the interconnection of user experience, business goals, and information technology. It's the optimal fusing of these three drivers that delivers.

 

Must have:

  • Experience with DWH implementation and with developing ETL processes: ETL control tables, error logging, auditing, data quality, etc. (a tool-agnostic sketch of the control-table pattern follows this list).
  • Responsible for creating ELT maps, migrating them across environments, maintaining and monitoring the infrastructure, working with DBAs, and creating new reports that help executives and managers analyze business needs and target customers.
  • Should be able to implement reusability, parameterization, workflow design, etc.
  • Expertise in the Oracle ODI toolset and OAC, with knowledge of the ODI master and work repositories, data modeling, and ETL design.
  • Experience using ODI Topology Manager to create connections to various technologies such as Oracle, SQL Server, flat files, XML, etc.
  • Proficiency with ODI mappings, error handling, automation using ODI, load plans, and migration of objects.
  • Ability to design ETL unit test cases and debug ETL mappings; expertise in developing load plans and scheduling jobs.
  • Ability to integrate ODI with multiple sources and targets.
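
For candidates less familiar with the control-table pattern named above, here is a minimal, tool-agnostic sketch of run auditing and error logging. ODI typically realizes this through Knowledge Modules; the table and column names below are illustrative assumptions, not Technovert's actual schema.

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical ETL control/audit tables (SQLite stands in for Oracle here).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE etl_control (
    run_id     INTEGER PRIMARY KEY AUTOINCREMENT,
    job_name   TEXT NOT NULL,
    started_at TEXT NOT NULL,
    ended_at   TEXT,
    status     TEXT NOT NULL DEFAULT 'RUNNING',
    rows_ok    INTEGER DEFAULT 0,
    rows_err   INTEGER DEFAULT 0
);
CREATE TABLE etl_errors (
    run_id  INTEGER REFERENCES etl_control(run_id),
    row_key TEXT,
    message TEXT
);
""")

def start_run(job_name):
    # Register the run so every load is auditable, even if it crashes mid-way.
    cur = conn.execute(
        "INSERT INTO etl_control (job_name, started_at) VALUES (?, ?)",
        (job_name, datetime.now(timezone.utc).isoformat()))
    return cur.lastrowid

def finish_run(run_id, rows_ok, rows_err):
    # Close out the run with a final status and row counts for data-quality checks.
    conn.execute(
        "UPDATE etl_control SET ended_at = ?, status = ?, rows_ok = ?, rows_err = ? "
        "WHERE run_id = ?",
        (datetime.now(timezone.utc).isoformat(),
         "ERROR" if rows_err else "DONE", rows_ok, rows_err, run_id))

run_id = start_run("load_customers")
finish_run(run_id, rows_ok=120, rows_err=0)
print(conn.execute("SELECT job_name, status, rows_ok FROM etl_control").fetchall())
```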

 

Nice to have:

  • Exposure to Oracle Cloud Infrastructure (OCI) is preferable.
  • Knowledge of Oracle Analytics Cloud: exploring data through visualizations, loading, and modeling data.
  • Hands-on experience with ODI 12c would be an added advantage.

 

Qualification:

  • Overall 3+ years of experience in Oracle Data Integrator (ODI) and Oracle Data Integrator Cloud Service (ODICS).
  • Experience in designing and implementing the E-LT architecture required to build a data warehouse, including source-to-staging and staging-to-target flows, data transformations, and E-LT process flows (a minimal staging-to-target sketch follows this list).
  • Must be well versed and hands-on in using and customizing Knowledge Modules (KMs), with experience in performance tuning of mappings.
  • Must be self-starting, have strong attention to detail and accuracy, and be able to fill multiple roles within the Oracle environment.
  • Should be strong with Oracle/SQL and have a good understanding of DDL deployments.
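
The point of E-LT (as opposed to ETL) is that transformation happens inside the target database with set-based SQL rather than in an external engine. Below is a rough sketch of the staging-to-target step, with hypothetical table names and SQLite standing in for Oracle.

```python
import sqlite3

# Land raw rows in staging, then transform and upsert entirely in SQL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stg_orders (order_id INTEGER, amount REAL);
CREATE TABLE dw_orders  (order_id INTEGER PRIMARY KEY, amount REAL);
INSERT INTO stg_orders VALUES (1, 10.0), (2, 25.5), (1, 12.0);
""")
conn.execute("""
    INSERT INTO dw_orders (order_id, amount)
    SELECT order_id, MAX(amount) FROM stg_orders
    WHERE true              -- SQLite's upsert grammar needs a WHERE after a SELECT
    GROUP BY order_id       -- the transformation step: deduplicate in the database
    ON CONFLICT(order_id) DO UPDATE SET amount = excluded.amount
""")
print(conn.execute("SELECT * FROM dw_orders ORDER BY order_id").fetchall())
# [(1, 12.0), (2, 25.5)]
```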

 

About Technovert

We are a team of problem solvers passionate about design and technology, delivering digital transformation and increased productivity.

Founded: 2012
Type: Products & Services
Size: 100-1000 employees
Stage: Profitable

Similar jobs

Data Warehousing Engineer - Big Data/ETL

at Marktine

Founded 2014  •  Products & Services  •  20-100 employees  •  Bootstrapped
Big Data
ETL
PySpark
SSIS
Microsoft Windows Azure
Data Warehouse (DWH)
Python
Amazon Web Services (AWS)
Informatica
Remote, Bengaluru (Bangalore)
3 - 10 yrs
₹5L - ₹15L / yr

Must Have Skills:

- Solid knowledge of DWH, ETL, and Big Data concepts

- Excellent SQL skills, including SQL analytic (window) functions (a small example follows this list)

- Working experience with an ETL tool, e.g. SSIS or Informatica

- Working experience with Azure or AWS big data tools

- Experience implementing data jobs (batch / real-time streaming)

- Excellent written and verbal communication skills in English; self-motivated, with a strong sense of ownership and a readiness to learn new tools and technologies
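
"SQL analytics functions" here means window functions such as RANK() or SUM(...) OVER. A tiny self-contained example, using SQLite and made-up sales data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, rep TEXT, amount REAL);
INSERT INTO sales VALUES
  ('North', 'alice', 100), ('North', 'bob', 150),
  ('South', 'carol', 90),  ('South', 'dave', 120);
""")
rows = conn.execute("""
    SELECT region, rep, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rank_in_region,
           SUM(amount) OVER (PARTITION BY region) AS region_total
    FROM sales
""").fetchall()
for row in rows:
    print(row)
```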

Preferred Skills:

- Experience with PySpark / Spark SQL (a minimal batch-job sketch follows this list)

- AWS data tools (AWS Glue, AWS Athena)

- Azure data tools (Azure Databricks, Azure Data Factory)
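
And a minimal PySpark batch job for orientation; the file paths and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

# Toy batch data job: read raw CSV, aggregate, write Parquet.
spark = SparkSession.builder.appName("sales_batch").getOrCreate()

df = spark.read.csv("sales.csv", header=True, inferSchema=True)
agg = df.groupBy("region").agg(F.sum("amount").alias("total_amount"))
agg.write.mode("overwrite").parquet("out/sales_by_region")

spark.stop()
```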

Other Skills:

- Knowledge of Azure Blob, Azure File Storage, AWS S3, and Elasticsearch / Redis search

- Knowledge of the domain/function (across pricing, promotions, and assortment)

- Implementation experience with schema and data validator frameworks (Python / Java / SQL)

- Knowledge of DQS and MDM

Key Responsibilities:

- Independently work on ETL / DWH / Big data Projects

- Gather and process raw data at scale.

- Design and develop data applications using selected tools and frameworks as required and requested.

- Read, extract, transform, stage and load data to selected tools and frameworks as required and requested.

- Perform tasks such as writing scripts, web scraping, calling APIs, writing SQL queries, etc.

- Work closely with the engineering team to integrate your work into our production systems.

- Process unstructured data into a form suitable for analysis.

- Analyse processed data.

- Support business decisions with ad hoc analysis as needed.

- Monitor data performance and modify infrastructure as needed.

Responsibility: a smart resource with excellent communication skills

 

 
Job posted by
Vishal Sharma

Computer Vision

at Quidich

Founded 2014  •  Products & Services  •  20-100 employees  •  Bootstrapped
Computer Vision
TensorFlow
C++
SLAM
EKF
Linear algebra
3D Geometry
Probability
3D rendering
Machine Learning (ML)
Deep Learning
Mumbai
0 - 9 yrs
₹2L - ₹14L / yr

About Quidich


Quidich Innovation Labs pioneers products and customized technology solutions for the Sports Broadcast & Film industry. With a mission to bring machines and machine learning to sports, we use camera technology to develop services using remote controlled systems like drones and buggies that add value to any broadcast or production. Quidich provides services to some of the biggest sports & broadcast clients in India and across the globe. A few recent projects include Indian Premier League, ICC World Cup for Men and Women, Kaun Banega Crorepati, Bigg Boss, Gully Boy & Sanju.

What’s Unique About Quidich?

  • Your work will be consumed by millions of people within months of your joining, shaping how live sport is viewed across the globe.
  • You work with passionate, talented, and diverse people who inspire and support you to achieve your goals.
  • You work in a culture of trust, care, and compassion.
  • You have the autonomy to shape your role, and drive your own learning and growth. 

Opportunity

  • You will be a part of world class sporting events
  • Your contribution to the software will help shape the final output seen on television
  • You will have an opportunity to work in live broadcast scenarios
  • You will work in a close knit team that is driven by innovation

Role

We are looking for a tech enthusiast who can work with us to help further the development of our Augmented Reality product, Spatio, to keep us ahead of the technology curve. We are one of the few companies in the world currently offering this product for live broadcast. We have a tight product roadmap that needs enthusiastic people to solve problems in the realm of software development and computer vision systems. Qualified candidates will be driven self-starters, robust thinkers, strong collaborators, and adept at operating in a highly dynamic environment. We look for candidates that are passionate about the product and embody our values.




Responsibilities

  • Working with the research team to develop, evaluate and optimize various state of the art algorithms.
  • Deploying high performance, readable, and reliable code on edge devices or any other target environments.
  • Continuously exploring new frameworks and identifying ways to incorporate those in the product.
  • Collaborating with the core team to bring ideas to life and keep pace with the latest research in Computer Vision, Deep Learning etc.

Minimum Qualifications, Skills and Competencies

  • B.E./B.Tech or Master's in Computer Science or Mathematics, or relevant experience
  • 3+ years of experience with computer vision algorithms such as SfM/SLAM, optical flow, and visual-inertial odometry
  • Experience in sensor fusion (camera, IMU, lidar) and probabilistic filters such as the EKF and UKF (a minimal EKF step is sketched after this list)
  • Proficiency in programming: C++ and algorithms
  • Strong mathematical understanding: linear algebra, 3D geometry, probability
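
For orientation, here is a minimal EKF predict/update step in NumPy. The constant-velocity motion model, range-style measurement, and noise values are illustrative assumptions, not Quidich's actual filter.

```python
import numpy as np

dt = 0.1
F_mat = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
Q = np.eye(2) * 1e-3                       # process noise covariance
R = np.array([[1e-2]])                     # measurement noise covariance

def h(x):
    # Nonlinear measurement: range from a sensor at height 1.0 to the target
    return np.array([np.sqrt(x[0] ** 2 + 1.0)])

def H_jac(x):
    # Jacobian of h, evaluated at the predicted state (the "E" in EKF)
    return np.array([[x[0] / np.sqrt(x[0] ** 2 + 1.0), 0.0]])

def ekf_step(x, P, z):
    # Predict
    x = F_mat @ x
    P = F_mat @ P @ F_mat.T + Q
    # Update (linearize the measurement model around the prediction)
    H = H_jac(x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - h(x))
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 1.0]), np.eye(2)     # state: [position, velocity]
x, P = ekf_step(x, P, z=np.array([1.05]))
print(x)
```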

Preferred Qualifications, Skills and Competencies

  • Proven experience in optical flow, multi-camera geometry, 3D reconstruction
  • Strong background in Machine Learning and Deep Learning frameworks.

Reporting To: Product Lead 

Joining Date: Immediate (Mumbai)

Job posted by
Parag Sule
ETL
Windows Azure
ADF
SQL Azure
SSIS
PowerBI
Communication Skills
azure synapse
Azure databricks
Bengaluru (Bangalore)
6 - 9 yrs
₹10L - ₹18L / yr

 

1) 6-9 years of industry experience, with at least 4 years in an architect role and 3-5 years' experience designing and building analytics/data solutions in Azure.

2) Demonstrated in-depth skills with Azure Data Factory (ADF), Azure SQL Server, Azure Synapse, and ADLS, with the ability to configure and administer all aspects of Azure SQL Server.

3) Demonstrated experience delivering multiple data solutions as an architect.

4) Demonstrated experience with ETL development both on-premises and in the cloud, using SSIS, Data Factory, and related Microsoft and other ETL technologies.

5) DP-200 and DP-201 certifications preferred.

6) Good to have: hands-on experience with Power BI and Azure Databricks.

7) Should have good communication and presentation skills.

Job posted by
Jerrin Thomas

Data Engineer

at StatusNeo

Founded 2020  •  Products & Services  •  100-1000 employees  •  Profitable
Scala
PySpark
Data engineering
Big Data
Hadoop
Spark
Python
SQL
Gurugram, Bengaluru (Bangalore), Pune
2 - 15 yrs
₹10L - ₹35L / yr

Data Engineer – SQL, RDBMS, pySpark/Scala, Python, Hive, Hadoop, Unix

 

Data engineering services required:

  • Build data products and processes alongside the core engineering and technology team;
  • Collaborate with senior data scientists to curate, wrangle, and prepare data for use in their advanced analytical models;
  • Integrate data from a variety of sources, assuring that it adheres to data quality and accessibility standards;
  • Modify and improve data engineering processes to handle ever larger, more complex, and more varied data sources and pipelines;
  • Use Hadoop architecture and HDFS commands to design and optimize data queries at scale (a partition-pruning sketch follows this list);
  • Evaluate and experiment with novel data engineering tools and advise information technology leads and partners about new capabilities, to determine optimal solutions for particular technical problems or designated use cases.
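
One common way to "optimize data queries at scale" on HDFS is partition pruning: lay the data out by a partition column so a filter reads only the matching directories. A brief sketch, with a hypothetical path and layout:

```python
from pyspark.sql import SparkSession, functions as F

# Assumed layout: hdfs:///warehouse/events/dt=YYYY-MM-DD/part-*.parquet
spark = SparkSession.builder.appName("events_query").getOrCreate()

events = spark.read.parquet("hdfs:///warehouse/events")
one_day = events.where(F.col("dt") == "2024-01-15")  # prunes to one dt= directory
one_day.groupBy("event_type").count().show()

spark.stop()
```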

 

Big data engineering skills:

  • Demonstrated ability to perform the engineering necessary to acquire, ingest, cleanse, integrate, and structure massive volumes of data from multiple sources and systems into enterprise analytics platforms;
  • Proven ability to design and optimize queries to build scalable, modular, efficient data pipelines;
  • Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets;
  • Proven experience delivering production-ready data engineering solutions, including requirements definition, architecture selection, prototype development, debugging, unit-testing, deployment, support, and maintenance;
  • Ability to work with a variety of data engineering tools and technologies.
Job posted by
Alex P
ADF
SSIS
Bengaluru (Bangalore), Hyderabad, Pune
3 - 8 yrs
₹8L - ₹16L / yr
Job Responsibilities/KRAs:

Responsibilities
  • Understand business requirements and actively provide inputs from a data perspective.
  • Experience in SSIS development.
  • Experience in migrating SSIS packages to the Azure-SSIS Integration Runtime.
  • Experience in data warehouse / data mart development and migration.
  • Good knowledge of and experience with Azure Data Factory.
  • Expert-level knowledge of SQL DB and data warehousing.
  • Should know at least one programming language (Python or PowerShell).
  • Should be able to analyse and understand complex data flows in SSIS.
  • Knowledge of Control-M.
  • Knowledge of Azure Data Lake is required.
  • Excellent interpersonal/communication skills (both oral and written), with the ability to communicate at various levels with clarity and precision.
  • Build simple to complex pipelines and dataflows.
  • Work with other Azure stack modules like Azure Data Lakes, SQL DW, etc.

Requirements
  • Bachelor's degree in Computer Science, Computer Engineering, or a relevant field.
  • A minimum of 5 years' experience in a similar role.
  • Strong knowledge of database structure systems and data mining.
  • Excellent organizational and analytical abilities.
  • Outstanding problem solver.
  • Good written and verbal communication skills.
Job posted by
Geeti Gaurav Mohanty

Data Engineer

at Aptus Data LAbs

Founded 2014  •  Products & Services  •  100-1000 employees  •  Profitable
Data engineering
Big Data
Hadoop
Data Engineer
Apache Kafka
Apache Spark
Python
Elastic Search
Kibana
Cisco Certified Network Associate (CCNA)
Bengaluru (Bangalore)
5 - 10 yrs
₹6L - ₹15L / yr

Roles & Responsibilities

  1. Proven experience deploying and tuning open-source components into enterprise-ready production tooling.
  2. Experience with datacentre (Metal as a Service, MAAS) and cloud deployment technologies (AWS or GCP; architect certificates required).
  3. Deep understanding of Linux, from kernel mechanisms through user-space management.
  4. Experience with CI/CD (continuous integration and deployment) solutions (Jenkins).
  5. Use of monitoring tools (local and on public cloud platforms) such as Nagios, Prometheus, Sensu, ELK, CloudWatch, Splunk, and New Relic to trigger instant alerts, reports, and dashboards. Work closely with the development and infrastructure teams to analyze and design solutions with four-nines (99.99%) uptime on globally distributed, clustered, production and non-production virtualized infrastructure.
  6. Wide understanding of IP networking as well as datacentre infrastructure.

Skills

  1. Expert with software development tools and source-code management: understanding and managing issues and code changes, and grouping them into deployment releases in a stable and measurable way to maximize production.
  2. Must be expert at developing and using Ansible roles and configuring deployment templates with Jinja2.
  3. Solid understanding of data collection tools like Flume, Filebeat, Metricbeat, and JMX Exporter agents.
  4. Extensive experience operating and tuning the Kafka streaming data platform, specifically as a message queue for big data processing.
  5. Strong understanding of, and hands-on experience with:
     • the Apache Spark framework, specifically Spark Core and Spark Streaming;
     • orchestration platforms: Mesos and Kubernetes;
     • data storage platforms: Elastic Stack, Carbon, ClickHouse, Cassandra, Ceph, HDFS;
     • core presentation technologies: Kibana and Grafana.
  6. Excellent scripting and programming skills (Bash, Python, Java, Go, Rust). Must have previous experience with Rust in order to support and improve in-house developed products.

Certification

Red Hat Certified Architect certificate or equivalent required. CCNA certificate required. 3-5 years of experience running open-source big data platforms.

Job posted by
Merlin Metilda

Sr Product Analyst

at High-Growth Fintech Startup

Agency job
via Unnati
Product Analyst
Product Management
Product Manager
SQL
Product Strategy
Google Analytics
Web Analytics
Business Analysis
Process automation
feature prioritization
Mumbai
3 - 5 yrs
₹7L - ₹11L / yr
Want to join a trailblazing fintech company that is leveraging software and technology to change the face of short-term financing in India?

Our client is an innovative Fintech company that is revolutionizing the business of short term finance. The company is an online lending startup that is driven by an app-enabled technology platform to solve the funding challenges of SMEs by offering quick-turnaround, paperless business loans without collateral. It counts over 2 million small businesses across 18 cities and towns as its customers.
 
Its founders are IIT and ISB alumni with deep experience in the fin-tech industry, from earlier working with organizations like Axis Bank, Aditya Birla Group, Fractal Analytics, and Housing.com. It has raised funds of Rs. 100 Crore from finance industry stalwarts and is growing by leaps and bounds.
 
As a Sr Product Analyst, you will partner with business & product teams to define goals and to perform extensive analysis.
 
What you will do:
  • Performing extensive analysis in SQL, Google Analytics, and Excel from a product standpoint to provide quick recommendations to management
  • Estimating impact and weighing in on feature prioritization, looking for insights and anomalies across the lending funnel
  • Defining key metrics and monitoring them on a day-to-day basis
  • Helping the marketing, product, and UX teams define segments by conducting user interviews and providing data-backed insights

 


Candidate Profile:

What you need to have:

  • B.Tech/B.E. or any graduate degree
  • Strong background in statistical concepts and calculations to perform analysis/modeling
  • Proficient in SQL
  • Good knowledge of Google Analytics and other web analytics platforms (preferred)
  • Strong analytical and problem-solving skills to analyze large datasets
  • Ability to work independently and bring innovative solutions to the team
  • Experience working with a start-up or a product organization (preferred)
Job posted by
Prabha Ramamurthy

Consulting Staff Engineer - Machine Learning

at Thinkdeeply

Founded 2014  •  Products & Services  •  20-100 employees  •  Raised funding
Machine Learning (ML)
R Programming
TensorFlow
Deep Learning
Python
Natural Language Processing (NLP)
PyTorch
Hyderabad
5 - 15 yrs
₹5L - ₹35L / yr

Job Description

Want to make every line of code count? Tired of being a small cog in a big machine? Like a fast-paced environment where stuff gets DONE? Wanna grow with a fast-growing company (both career and compensation)? Like to wear different hats? Join ThinkDeeply in our mission to create and apply Enterprise-Grade AI for all types of applications.

 

Seeking an ML engineer with a high aptitude for development. We will also consider coders with a high aptitude for ML. Years of experience are important, but we are also looking for interest and aptitude. As part of the early engineering team, you will have a chance to make a measurable impact on the future of Thinkdeeply, as well as take on a significant amount of responsibility.

 

Experience: 10+ Years

Location: Bozeman/Hyderabad

 

Skills

Required Skills:

Bachelor's/Master's or PhD in Computer Science, or related industry experience

3+ years of industry experience with deep learning frameworks (PyTorch or TensorFlow)

7+ years of industry experience with scripting languages such as Python and R

7+ years in software development, including at least some research/POCs, prototyping, productizing, process improvement, and large-data processing / performance computing

Familiarity with non-neural-network methods such as Bayesian methods, SVMs, AdaBoost, random forests, etc.

Some experience setting up large-scale training data pipelines (a minimal sketch follows this section)

Some experience using cloud services such as AWS, GCP, and Azure
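
As a point of reference, here is a minimal training data pipeline and loop in PyTorch; the data, model, and hyperparameters are synthetic placeholders, not Thinkdeeply's stack.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic data standing in for a real pipeline (in practice: sharded files,
# streaming readers, augmentation, etc.).
X, y = torch.randn(256, 10), torch.randn(256, 1)
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(3):
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```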

Desired Skills:

Experience building deep learning models for the computer vision and natural language processing domains

Experience productionizing/serving machine learning in industry settings

Understanding of the principles of developing cloud-native applications

 

Responsibilities

 

Collect, organize, and process data pipelines for developing ML models

Research and develop novel prototypes for customers

Train, implement, and evaluate shippable machine learning models

Deploy ML models and iterate on improvements through feedback

Job posted by
Aditya Kanchiraju

Predictive Modelling And Optimization Consultant (SCM)

at BRIDGEi2i Analytics Solutions

Founded 2011  •  Products & Services  •  100-1000 employees  •  Profitable
R Programming
Data Analytics
Predictive modelling
Supply Chain Management (SCM)
SQL
MySQL
Python
Statistical Modeling
Supply chain optimization
Bengaluru (Bangalore)
4 - 10 yrs
₹9L - ₹15L / yr

The person holding this position is responsible for leading the solution development and implementing advanced analytical approaches across a variety of industries in the supply chain domain.

In this position, you act as an interface between the delivery team and the supply chain team, developing an effective understanding of the client's business and supply chain.

Candidates will be expected to lead projects across several areas such as

  • Demand forecasting
  • Inventory management
  • Simulation & Mathematical optimization models.
  • Procurement analytics
  • Distribution/Logistics planning
  • Network planning and optimization

 

Qualification and Experience

  • 4+ years of analytics experience in supply chain, preferably in industries such as hi-tech, consumer technology, CPG, automobile, retail, or e-commerce.
  • Master's in Statistics/Economics, an MBA, or an M.Sc./M.Tech in Operations Research/Industrial Engineering/Supply Chain.
  • Hands-on experience delivering projects using statistical modelling.

Skills / Knowledge

  • Hands-on experience with statistical modelling software such as R/Python and SQL.
  • Experience with advanced analytics / statistical techniques (regression, decision trees, ensemble machine learning algorithms, etc.) will be considered an added advantage.
  • Highly proficient with the Excel, PowerPoint, and Word applications.
  • APICS-CSCP or PMP certification will be an added advantage.
  • Strong knowledge of supply chain management.
  • Working knowledge of linear/nonlinear optimization (a toy distribution-planning LP is sketched after this list).
  • Ability to structure problems through a data-driven decision-making process.
  • Excellent project management skills, including time and risk management and project structuring.
  • Ability to identify and draw on leading-edge analytical tools and techniques to develop creative approaches and new insights into business issues through data analysis.
  • Ability to liaise effectively with multiple stakeholders and functional disciplines.
  • Experience with optimization tools like CPLEX, ILOG, or GAMS will be an added advantage.
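
To give a flavor of the linear optimization involved in distribution planning, here is a toy transportation LP in SciPy; the plants, markets, costs, and capacities are invented numbers.

```python
import numpy as np
from scipy.optimize import linprog

# Ship from 2 plants to 3 markets at minimum cost.
# Decision variables x[plant, market], flattened row-major.
cost = np.array([4, 6, 9,    # plant 0 -> markets 0, 1, 2
                 5, 3, 7])   # plant 1 -> markets 0, 1, 2
A_ub = [[1, 1, 1, 0, 0, 0],  # plant 0 ships at most its capacity
        [0, 0, 0, 1, 1, 1]]  # plant 1 likewise
b_ub = [50, 40]
A_eq = [[1, 0, 0, 1, 0, 0],  # market 0 demand must be met exactly
        [0, 1, 0, 0, 1, 0],  # market 1
        [0, 0, 1, 0, 0, 1]]  # market 2
b_eq = [30, 25, 35]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x.reshape(2, 3))   # optimal shipment plan
print(res.fun)               # minimum total cost
```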
Job posted by
Venniza Glades

Data Migration Developer

at Qvantel Software Solutions Ltd

Founded 2000  •  Products & Services  •  100-1000 employees  •  Profitable
Data Migration
BSS
ETL
Hyderabad
3 - 7 yrs
₹6L - ₹20L / yr
We are now looking for passionate DATA MIGRATION DEVELOPERS to work at our Hyderabad site.

Role description: We are looking for data migration developers for our BSS delivery projects. Your main goal is to analyse migration data, create the migration solution, and execute the data migration. You will work as part of the migration team in cooperation with our migration architect and BSS delivery project manager. You have a solid background with telecom BSS and experience in data migrations. You will be expected to interpret data analysis produced by business analysts, raise issues or questions, and work directly with the client on-site to resolve them. You must therefore be capable of understanding the telecom business behind a technical solution.

Requirements:
  • Understanding of different data migration approaches and the ability to adapt requirements to migration tool development and utilization
  • Capability to analyse the shape and health of source data
  • Extraction of data from multiple legacy sources
  • Building transformation code to adhere to data mappings (a small sketch appears below)
  • Loading data into either new or existing target solutions

We appreciate:
  • Deep knowledge of ETL processes and/or other migration tools
  • Proven experience in high-volume data migrations and in business-critical systems in the telecom business
  • Experience in telecom business support systems
  • Ability to apply innovation and improvement to the data migration/support processes, and to manage multiple priorities effectively

We can offer you:
  • Interesting and challenging work in a fast-growing, customer-oriented company
  • An international and multicultural working environment with experienced and enthusiastic colleagues
  • Plenty of opportunities to learn, grow, and progress in your career

At Qvantel we have built a young, dynamic culture where people are motivated to learn and develop themselves, are used to working both independently and in teams, and have a systematic, hands-on working style and a can-do attitude. Our people are used to communicating across cultures and time zones. A sense of humor can also come in handy.

Don't hesitate to ask for more information from Srinivas Bollipally, our recruitment specialist, reachable at [email protected]
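
A tiny sketch of what "transformation code adhering to data mappings" can look like in practice; the field names and rules are hypothetical, not Qvantel's actual BSS mappings.

```python
# Declarative mapping: target_field -> (source_field, transform). Names invented.
MAPPING = {
    "msisdn":    ("PHONE_NO", lambda v: v.lstrip("+")),
    "plan_code": ("TARIFF",   str.upper),
    "balance":   ("BAL_AMT",  float),
}

def transform(legacy_row: dict) -> dict:
    # Apply each mapped transform to produce a row in the target schema.
    return {tgt: fn(legacy_row[src]) for tgt, (src, fn) in MAPPING.items()}

print(transform({"PHONE_NO": "+358401234567", "TARIFF": "gold", "BAL_AMT": "12.50"}))
# {'msisdn': '358401234567', 'plan_code': 'GOLD', 'balance': 12.5}
```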
Job posted by
Srinivas Bollipally