Role: ODI Developer
Location: Hyderabad (Initially remote)
Experience: 5-8 Years
Technovert is not your typical IT services firm. To our credit, we have two successful products generating $2M+ in licensing/SaaS revenues, which is rare in the industry.
We are obsessed with technology and the infinite possibilities it creates for making this world a better place. Our clients find us at our best when we are challenged with their toughest problems, and we love chasing them; it thrills us and motivates us to deliver more. Our global delivery model has earned us the trust and reputation of being a partner of choice.
We have a strong heritage built on great people who put customers first and deliver exceptional results with no surprises - every time. We partner with you to understand the interconnection of user experience, business goals, and information technology. It is the optimal fusing of these three drivers that delivers.
Must have:
- Experience with DWH implementations and with developing ETL processes - ETL control tables, error logging, auditing, data quality, etc. (a minimal control-table sketch follows this list).
- Responsible for creating ELT maps, migrating them into different environments, maintaining and monitoring the infrastructure, and working with DBAs, as well as creating new reports that help executive and managerial levels analyze business needs and target customers.
- Should be able to implement reusability, parameterization, workflow design, etc.
- Expertise in the Oracle ODI toolset and OAC; knowledge of the ODI master and work repositories, data modeling, and ETL design.
- Experience using ODI Topology Manager to create connections to various technologies such as Oracle, SQL Server, flat files, XML, etc.
- Experience with ODI mappings, error handling, automation using ODI, load plans, and migration of objects.
- Ability to design ETL unit test cases and debug ETL Mappings, expertise in developing Load Plans, Scheduling Jobs.
- Ability to integrate ODI with multiple sources/targets.
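For illustration only (not part of the role requirements), here is a minimal Python sketch of the ETL control-table and error-logging pattern referenced above. It uses SQLite and made-up table and column names; in ODI this bookkeeping would typically be generated through Knowledge Modules and load plans rather than hand-written code.

```python
# Minimal sketch of an ETL control/audit table (hypothetical schema, SQLite for brevity).
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("etl_demo.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS etl_control (
        run_id      INTEGER PRIMARY KEY AUTOINCREMENT,
        mapping     TEXT,
        started_at  TEXT,
        finished_at TEXT,
        status      TEXT,          -- RUNNING / SUCCESS / ERROR
        rows_loaded INTEGER,
        error_msg   TEXT
    )
""")

def run_mapping(name, load_fn):
    """Record start/end time, row count, and errors for one ETL mapping run."""
    cur = conn.execute(
        "INSERT INTO etl_control (mapping, started_at, status) VALUES (?, ?, 'RUNNING')",
        (name, datetime.now(timezone.utc).isoformat()),
    )
    run_id = cur.lastrowid
    try:
        rows = load_fn()  # the actual load step; returns a row count
        conn.execute(
            "UPDATE etl_control SET status='SUCCESS', rows_loaded=?, finished_at=? WHERE run_id=?",
            (rows, datetime.now(timezone.utc).isoformat(), run_id),
        )
    except Exception as exc:
        conn.execute(
            "UPDATE etl_control SET status='ERROR', error_msg=?, finished_at=? WHERE run_id=?",
            (str(exc), datetime.now(timezone.utc).isoformat(), run_id),
        )
    conn.commit()

run_mapping("stg_to_dwh_customers", lambda: 42)  # toy load step
```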
Nice to have:
- Exposure to Oracle Cloud Infrastructure (OCI) is preferable.
- Knowledge of Oracle Analytics Cloud to explore data through visualizations and to load and model data.
- Hands-on experience with ODI 12c would be an added advantage.
Qualification:
- Overall 3+ years of experience in Oracle Data Integrator (ODI) and Oracle Data Integrator Cloud Service (ODICS).
- Experience in designing and implementing the E-LT architecture required to build a data warehouse, including source-to-staging, staging-to-target, data transformations, and E-LT process flows.
- Must be well versed and hands-on in using and customizing Knowledge Modules (KMs), with experience in performance tuning of mappings.
- Must be a self-starter with strong attention to detail and accuracy, able to fill multiple roles within the Oracle environment.
- Should be strong in Oracle/SQL and have a good understanding of DDL deployments.
About Technovert
We are a team of problem solvers passionate about design and technology, delivering Digital transformation and increasing productivity.
Similar jobs
Data Warehousing Engineer - Big Data/ETL
at Marktine
Must Have Skills:
- Solid knowledge of DWH, ETL, and Big Data concepts
- Excellent SQL skills, including SQL analytic (window) functions (see the sketch after this list)
- Working experience with an ETL tool, e.g. SSIS or Informatica
- Working experience with Azure or AWS big data tools
- Experience implementing data jobs (batch / real-time streaming)
- Excellent written and verbal communication skills in English; self-motivated with a strong sense of ownership and ready to learn new tools and technologies
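Purely as an illustration of the kind of SQL analytic (window) function referenced above, here is a minimal sketch that picks the latest order per customer with ROW_NUMBER(). It uses SQLite (window functions need SQLite >= 3.25), and the table and column names are made up.

```python
# Window-function sketch: latest order per customer via ROW_NUMBER().
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('acme',   '2021-01-05', 120.0),
        ('acme',   '2021-02-10',  90.0),
        ('zenith', '2021-01-20', 300.0);
""")

query = """
SELECT customer, order_date, amount
FROM (
    SELECT customer, order_date, amount,
           ROW_NUMBER() OVER (PARTITION BY customer ORDER BY order_date DESC) AS rn
    FROM orders
)
WHERE rn = 1
"""
for row in conn.execute(query):
    print(row)   # one most-recent order per customer
```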
Preferred Skills:
- Experience with PySpark / Spark SQL
- AWS Data Tools (AWS Glue, AWS Athena)
- Azure Data Tools (Azure Databricks, Azure Data Factory)
Other Skills:
- Knowledge about Azure Blob, Azure File Storage, AWS S3, Elastic Search / Redis Search
- Domain/functional knowledge (across pricing, promotions, and assortment).
- Implementation experience with a schema and data validator framework (Python / Java / SQL).
- Knowledge of DQS and MDM.
Key Responsibilities:
- Independently work on ETL / DWH / Big data Projects
- Gather and process raw data at scale.
- Design and develop data applications using selected tools and frameworks as required and requested.
- Read, extract, transform, stage and load data to selected tools and frameworks as required and requested.
- Perform tasks such as writing scripts, web scraping, calling APIs, writing SQL queries, etc. (a small API-ingestion sketch appears after this list).
- Work closely with the engineering team to integrate your work into our production systems.
- Process unstructured data into a form suitable for analysis.
- Analyse processed data.
- Support business decisions with ad hoc analysis as needed.
- Monitor data performance and modify infrastructure as needed.
Responsibility: A smart resource with excellent communication skills.
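As a small illustration of the scripting/API-calling responsibilities listed above (not part of the job description itself), the sketch below pulls records from a hypothetical JSON API with the requests library and stages them as newline-delimited JSON for a downstream ETL or big data job.

```python
# Sketch: call an API and stage raw data (the endpoint URL is hypothetical).
import json
import requests

resp = requests.get("https://api.example.com/v1/products", timeout=30)
resp.raise_for_status()
records = resp.json()          # assume the API returns a JSON list of records

# Stage the raw payload as newline-delimited JSON for downstream processing.
with open("products_raw.jsonl", "w", encoding="utf-8") as fh:
    for rec in records:
        fh.write(json.dumps(rec) + "\n")

print(f"staged {len(records)} records")
```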
About Quidich
Quidich Innovation Labs pioneers products and customized technology solutions for the Sports Broadcast & Film industry. With a mission to bring machines and machine learning to sports, we use camera technology to develop services using remote controlled systems like drones and buggies that add value to any broadcast or production. Quidich provides services to some of the biggest sports & broadcast clients in India and across the globe. A few recent projects include Indian Premier League, ICC World Cup for Men and Women, Kaun Banega Crorepati, Bigg Boss, Gully Boy & Sanju.
What’s Unique About Quidich?
- Your work will be consumed by millions of people within months of your joining and will impact consumption patterns of how live sport is viewed across the globe.
- You work with passionate, talented, and diverse people who inspire and support you to achieve your goals.
- You work in a culture of trust, care, and compassion.
- You have the autonomy to shape your role, and drive your own learning and growth.
Opportunity
- You will be a part of world class sporting events
- Your contribution to the software will help shape the final output seen on television
- You will have an opportunity to work in live broadcast scenarios
- You will work in a close knit team that is driven by innovation
Role
We are looking for a tech enthusiast who can work with us to help further the development of our Augmented Reality product, Spatio, to keep us ahead of the technology curve. We are one of the few companies in the world currently offering this product for live broadcast. We have a tight product roadmap that needs enthusiastic people to solve problems in the realm of software development and computer vision systems. Qualified candidates will be driven self-starters, robust thinkers, strong collaborators, and adept at operating in a highly dynamic environment. We look for candidates that are passionate about the product and embody our values.
Responsibilities
- Working with the research team to develop, evaluate and optimize various state of the art algorithms.
- Deploying high performance, readable, and reliable code on edge devices or any other target environments.
- Continuously exploring new frameworks and identifying ways to incorporate those in the product.
- Collaborating with the core team to bring ideas to life and keep pace with the latest research in Computer Vision, Deep Learning etc.
Minimum Qualifications, Skills and Competencies
- B.E./B.Tech. or Master's in Computer Science or Mathematics, or relevant experience
- 3+ years of experience with computer vision algorithms such as SfM/SLAM, optical flow, and visual-inertial odometry (see the optical-flow sketch after this list)
- Experience in sensor fusion (camera, IMU, LiDAR) and probabilistic filters (EKF, UKF)
- Proficiency in programming (C++) and algorithms
- Strong mathematical understanding: linear algebra, 3D geometry, probability.
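For illustration only, here is a minimal Python/OpenCV sketch of dense optical flow (Farneback) over a video file. The file name is hypothetical, and the role itself emphasizes C++ and live camera streams; this only shows the basic idea of per-pixel motion estimation between consecutive frames.

```python
# Dense optical-flow sketch with OpenCV's Farneback method (illustrative only).
import cv2
import numpy as np

cap = cv2.VideoCapture("broadcast_clip.mp4")   # hypothetical input file
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # flow[y, x] = (dx, dy) motion vector for each pixel between consecutive frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, _angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    print("mean motion magnitude:", float(np.mean(magnitude)))
    prev_gray = gray

cap.release()
```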
Preferred Qualifications, Skills and Competencies
- Proven experience in optical flow, multi-camera geometry, 3D reconstruction
- Strong background in Machine Learning and Deep Learning frameworks.
Reporting To: Product Lead
Joining Date: Immediate (Mumbai)
1) 6-9 years of industry experience, with at least 4 years in an architect role, is required, along with at least 3-5 years of experience in designing and building analytics/data solutions in Azure.
2) Demonstrated in-depth skills with Azure Data Factory (ADF), Azure SQL Server, Azure Synapse, and ADLS, with the ability to configure and administer all aspects of Azure SQL Server.
3) Demonstrated experience delivering multiple data solutions as an architect.
4) Demonstrated experience with ETL development both on-premises and in the cloud using SSIS, Data Factory, and related Microsoft and other ETL technologies
5) DP-200 and DP-201 certifications preferred.
6) Good to have hands on experience in Power BI and Azure Databricks.
7) Should have good communication and presentation skills.
Data Engineer – SQL, RDBMS, pySpark/Scala, Python, Hive, Hadoop, Unix
Data engineering services required:
- Build data products and processes alongside the core engineering and technology team;
- Collaborate with senior data scientists to curate, wrangle, and prepare data for use in their advanced analytical models;
- Integrate data from a variety of sources, ensuring that they adhere to data quality and accessibility standards;
- Modify and improve data engineering processes to handle ever larger, more complex, and more types of data sources and pipelines;
- Use Hadoop architecture and HDFS commands to design and optimize data queries at scale (a minimal PySpark sketch follows this list);
- Evaluate and experiment with novel data engineering tools and advise information technology leads and partners about new capabilities to determine optimal solutions for particular technical problems or designated use cases.
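As an illustration of querying data at scale from HDFS, here is a minimal PySpark sketch. The HDFS paths and column names are hypothetical, and cluster configuration is omitted.

```python
# Minimal PySpark sketch: aggregate raw events from HDFS into a curated table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("hdfs_aggregation_sketch").getOrCreate()

events = spark.read.parquet("hdfs:///data/raw/events")   # hypothetical HDFS path

daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count(F.lit(1)).alias("events"),
         F.countDistinct("user_id").alias("users"))
)

daily_counts.write.mode("overwrite").parquet("hdfs:///data/curated/daily_event_counts")
```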
Big data engineering skills:
- Demonstrated ability to perform the engineering necessary to acquire, ingest, cleanse, integrate, and structure massive volumes of data from multiple sources and systems into enterprise analytics platforms;
- Proven ability to design and optimize queries to build scalable, modular, efficient data pipelines;
- Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets;
- Proven experience delivering production-ready data engineering solutions, including requirements definition, architecture selection, prototype development, debugging, unit-testing, deployment, support, and maintenance;
- Ability to operate with a variety of data engineering tools and technologies
Responsibilities
Understand business requirements and actively provide inputs from a data perspective.
Experience in SSIS development.
Experience in Migrating SSIS packages to Azure SSIS Integrated Runtime
Experience in Data Warehouse / Data mart development and migration
Good knowledge of and experience with Azure Data Factory
Expert-level knowledge of SQL DB & data warehousing
Should know at least one programming language (Python or PowerShell)
Should be able to analyse and understand complex data flows in SSIS.
Knowledge of Control-M
Knowledge of Azure data lake is required.
Excellent interpersonal/communication skills (both oral and written), with the ability to communicate at various levels with clarity and precision.
Build simple to complex pipelines & dataflows.
Work with other Azure stack modules like Azure Data Lakes, SQL DW, etc.
Requirements
Bachelor’s degree in Computer Science, Computer Engineering, or relevant field.
A minimum of 5 years’ experience in a similar role.
Strong knowledge of database structure systems and data mining.
Excellent organizational and analytical abilities.
Outstanding problem solver.
Good written and verbal communication skills.
Roles & Responsibilities
- Proven experience deploying and tuning open-source components into enterprise-ready production tooling
- Experience with datacentre (Metal as a Service – MAAS) and cloud deployment technologies (AWS or GCP Architect certificates required)
- Deep understanding of Linux from kernel mechanisms through user space management
- Experience with CI/CD (Continuous Integration and Deployment) system solutions (Jenkins).
- Use monitoring tools (local and on public cloud platforms) such as Nagios, Prometheus, Sensu, ELK, CloudWatch, Splunk, and New Relic to trigger instant alerts, reports, and dashboards.
- Work closely with the development and infrastructure teams to analyze and design solutions with four-nines (99.99%) uptime on globally distributed, clustered, production and non-production virtualized infrastructure.
- Wide understanding of IP networking as well as data centre infrastructure
Skills
- Expert with software development tools and source code management: understanding and managing issues and code changes and grouping them into deployment releases in a stable and measurable way to maximize production.
- Must be an expert at developing and using Ansible roles and configuring deployment templates with Jinja2.
- Solid understanding of data collection tools like Flume, Filebeat, Metricbeat, JMX Exporter agents.
- Extensive experience operating and tuning the Kafka streaming data platform, specifically as a message queue for big data processing
- Strong understanding of and hands-on experience with:
- the Apache Spark framework, specifically Spark Core and Spark Streaming (a minimal streaming sketch follows this list),
- orchestration platforms: Mesos and Kubernetes,
- data storage platforms: Elastic Stack, Carbon, ClickHouse, Cassandra, Ceph, HDFS,
- core presentation technologies: Kibana and Grafana.
- Excellent scripting and programming skills (Bash, Python, Java, Go, Rust). Must have previous experience with Rust in order to support and improve in-house developed products
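As an illustration of the Kafka plus Spark Streaming combination referenced above, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic and counts messages per minute. The broker address and topic name are placeholders, and the spark-sql-kafka connector must be available on the classpath.

```python
# Minimal Spark Structured Streaming sketch reading from Kafka (placeholder broker/topic).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_stream_sketch").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "metrics")
    .load()
)

# Kafka delivers key/value as binary; cast the value to string for downstream parsing.
messages = raw.selectExpr("CAST(value AS STRING) AS message", "timestamp")

counts = messages.groupBy(F.window("timestamp", "1 minute")).count()

query = (
    counts.writeStream.outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```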
Certification
Red Hat Certified Architect certificate or equivalent required. CCNA certificate required. 3-5 years of experience running open-source big data platforms.
Our client is an innovative Fintech company that is revolutionizing the business of short term finance. The company is an online lending startup that is driven by an app-enabled technology platform to solve the funding challenges of SMEs by offering quick-turnaround, paperless business loans without collateral. It counts over 2 million small businesses across 18 cities and towns as its customers.
- Performing extensive analysis using SQL, Google Analytics, and Excel from a product standpoint to provide quick recommendations to management
- Estimating impact and weighing in on feature prioritization, looking for insights and anomalies across the lending funnel
- Defining key metrics and monitoring them on a day-to-day basis (a small funnel-metric sketch follows this list)
- Helping the marketing, product, and UX teams define segments by conducting user interviews and providing data-backed insights
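Purely as an illustration of a day-to-day funnel metric (not part of the role description), the sketch below computes a daily application-to-approval rate with pandas from made-up event data.

```python
# Daily funnel metric sketch: application -> approval conversion rate (toy data).
import pandas as pd

events = pd.DataFrame({
    "date":  ["2021-03-01"] * 4 + ["2021-03-02"] * 3,
    "stage": ["applied", "applied", "applied", "approved",
              "applied", "applied", "approved"],
})

daily = events.groupby(["date", "stage"]).size().unstack(fill_value=0)
daily["approval_rate"] = daily["approved"] / daily["applied"]
print(daily)
```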
What you need to have:
- B.Tech./B.E. or any graduate degree
- Strong background in statistical concepts & calculations to perform analysis/ modeling
- Proficient in SQL
- Good knowledge of Google Analytics and any other web analytics platforms (preferred)
- Strong analytical and problem-solving skills to analyze large volumes of data
- Ability to work independently and bring innovative solutions to the team
- Experience of working with a start-up or a product organization (preferred)
Consulting Staff Engineer - Machine Learning
at Thinkdeeply
Job Description
Want to make every line of code count? Tired of being a small cog in a big machine? Like a fast-paced environment where stuff gets DONE? Wanna grow with a fast-growing company (both career and compensation)? Like to wear different hats? Join ThinkDeeply in our mission to create and apply Enterprise-Grade AI for all types of applications.
We are seeking an ML Engineer with a high aptitude for development, and will also consider coders with a high aptitude for ML. Years of experience are important, but we are also looking for interest and aptitude. As part of the early engineering team, you will have a chance to make a measurable impact on the future of Thinkdeeply, as well as take on a significant amount of responsibility.
Experience
10+ Years
Location
Bozeman/Hyderabad
Skills
Required Skills:
Bachelor's/Master's or PhD in Computer Science, or related industry experience
3+ years of industry experience with deep learning frameworks such as PyTorch or TensorFlow (a minimal PyTorch sketch follows this list)
7+ years of industry experience in scripting languages such as Python and R
7+ years in software development, including at least some research/POCs, prototyping, productizing, process improvement, and large-data processing / performance computing
Familiarity with non-neural-network methods such as Bayesian methods, SVMs, AdaBoost, random forests, etc.
Some experience in setting up large scale training data pipelines.
Some experience in using Cloud services such as AWS, GCP, Azure
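For illustration, here is a minimal PyTorch training-loop sketch with a toy model and random data; it is not a description of ThinkDeeply's actual stack or models.

```python
# Minimal PyTorch training loop (toy classifier on random data, illustrative only).
import torch
from torch import nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(256, 20)            # toy features
y = torch.randint(0, 2, (256,))     # toy binary labels

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```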
Desired Skills:
Experience in building deep learning models for Computer Vision and Natural Language Processing domains
Experience in productionizing/serving machine learning in industry setting
Understanding of the principles of developing cloud-native applications
Responsibilities
Collect, Organize and Process data pipelines for developing ML models
Research and develop novel prototypes for customers
Train, implement and evaluate shippable machine learning models
Deploy and iterate improvements of ML Models through feedback
Predictive Modelling And Optimization Consultant (SCM)
at BRIDGEi2i Analytics Solutions
The person holding this position is responsible for leading the solution development and implementing advanced analytical approaches across a variety of industries in the supply chain domain.
At this position you act as an interface between the delivery team and the supply chain team, effectively understanding the client business and supply chain.
Candidates will be expected to lead projects across several areas such as
- Demand forecasting
- Inventory management
- Simulation & Mathematical optimization models.
- Procurement analytics
- Distribution/Logistics planning
- Network planning and optimization
Qualification and Experience
- 4+ years of analytics experience in supply chain, preferably in hi-tech, consumer technology, CPG, automobile, retail, or e-commerce supply chains.
- Master's in Statistics/Economics, MBA, or M.Sc./M.Tech. in Operations Research/Industrial Engineering/Supply Chain
- Hands-on experience in delivery of projects using statistical modelling
Skills / Knowledge
- Hands-on experience with statistical modelling software such as R/Python and SQL.
- Experience in advanced analytics / statistical techniques (regression, decision trees, ensemble machine learning algorithms, etc.) will be considered an added advantage.
- Highly proficient with Excel, PowerPoint and Word applications.
- APICS-CSCP or PMP certification will be an added advantage
- Strong knowledge of supply chain management
- Working knowledge of linear/nonlinear optimization (a toy linear-programming sketch follows this list)
- Ability to structure problems through a data driven decision-making process.
- Excellent project management skills, including time and risk management and project structuring.
- Ability to identify and draw on leading-edge analytical tools and techniques to develop creative approaches and new insights to business issues through data analysis.
- Ability to liaise effectively with multiple stakeholders and functional disciplines.
- Experience with optimization tools like CPLEX, ILOG, or GAMS will be an added advantage.
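As an illustration of the linear optimization referenced above, here is a toy two-plant / two-market transportation problem solved with scipy.optimize.linprog; all costs, capacities, and demands are made up.

```python
# Toy transportation LP: minimize shipping cost subject to capacity and demand.
from scipy.optimize import linprog

# Decision variables: x = [plant1->mkt1, plant1->mkt2, plant2->mkt1, plant2->mkt2]
cost = [4, 6, 5, 3]                  # shipping cost per unit (made up)

A_ub = [
    [1, 1, 0, 0],    # plant 1 capacity
    [0, 0, 1, 1],    # plant 2 capacity
    [-1, 0, -1, 0],  # market 1 demand (>= 60, written as <= -60)
    [0, -1, 0, -1],  # market 2 demand (>= 50, written as <= -50)
]
b_ub = [80, 70, -60, -50]

result = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(result.x, result.fun)          # optimal shipment plan and total cost
```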
Data Migration Developer
at Qvantel Software Solutions Ltd