Teradata Jobs in Hyderabad

11+ Teradata Jobs in Hyderabad | Teradata Job openings in Hyderabad

Apply to 11+ Teradata Jobs in Hyderabad on CutShort.io. Explore the latest Teradata Job opportunities across top companies like Google, Amazon & Adobe.

Top IT MNC

Agency job
Chennai, Hyderabad, Bengaluru (Bangalore), Pune
3 - 8 yrs
₹4L - ₹13L / yr
Teradata
Greetings,

We are looking for a Teradata developer for one of our premium clients. Kindly contact us if interested.
Read more
Kanerika Software

1 recruiter
Posted by Suman Kumar
Hyderabad
5 - 10 yrs
₹5L - ₹20L / yr
Data Warehouse (DWH)
Informatica
ETL
IICS
IDMC
+1 more


Responsibilities include:

  • Design and execute a Data Quality Audit/Assessment
  • Design and execute the data quality mappings that will cleanse, de-duplicate, and otherwise prepare the project data.
  • Migrate on-premises IDQ to CDQ in data quality migration projects.
  • Implement data quality processes including transliteration, parsing, analysis, standardization, and enrichment at point of entry and batch modes; Deploy mappings that will run in a scheduled, batch, or real-time environment.
  • Document all mappings, applets, and rules in detail and hand over documentation to the customer.
  • Collaborate with various business and technical teams to gather requirements around data quality rules and propose the optimization of these rules if applicable, then design and develop these rules with IDQ.
  • As per business requirements, perform thorough data profiling with multiple usage patterns, root cause analysis and data cleansing and develop scorecards utilizing Informatica, Excel and other data quality tools.
  • Develop "matching" plans, help determine best matching algorithm, configure identity matching, and analyze duplicates.
  • Build complex profiles and scripts to execute and test mappings and workflows to implement data stewardship and exception processing.
  • Run data quality specific ETL jobs (address standardization and validation, email cleanups, name cleanup, parsing, etc.) utilizing IDQ and other ETL tools
  • Serve as the primary resource to team members and data stewards for training, problem resolution, data profiling etc.
  • Analyze and provide data metrics to management in order to help prioritize areas for data quality improvement.
  • Participate in improvement of master data management process and support transactional systems and processes.
  • Good knowledge of writing shell scripts to invoke mappings or workflows.
  • Good at writing SQL queries and verifying the results.
  • Should be familiar with migrating objects from IDQ to PowerCenter.
  • Should be familiar with deploying and validating objects to different environments.
  • Good at writing test scenarios, performing unit tests, and verifying the end results against business requirements.
  • Perform both record-level and large-scale manual additions, adjustments, and corrections to continuously ensure overall data quality and integrity.
  • Maintain excellent working relationships with internal parties (the team and the wider organization) and external parties (vendors, customers).
  • Assist with special projects or ad-hoc reviews as needed.
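As a rough sketch of the cleanse-and-deduplicate logic the responsibilities above describe (a hypothetical example in plain Python — real IDQ/CDQ rules are configured as Informatica mappings, not hand-coded):

```python
# Hypothetical point-of-entry cleanse step: standardize fields, then
# de-duplicate on a match key. Field names and rules are illustrative only.

def standardize(record):
    """Trim whitespace, lowercase the email, and title-case the name."""
    return {
        "name": record["name"].strip().title(),
        "email": record["email"].strip().lower(),
    }

def deduplicate(records):
    """Keep the first occurrence of each standardized email (the match key)."""
    seen, clean = set(), []
    for rec in map(standardize, records):
        if rec["email"] not in seen:
            seen.add(rec["email"])
            clean.append(rec)
    return clean

raw = [
    {"name": "  jane doe ", "email": "Jane@Example.com "},
    {"name": "JANE DOE",    "email": "jane@example.com"},
    {"name": "ravi kumar",  "email": "ravi@example.com"},
]
cleaned = deduplicate(raw)  # two records survive; the duplicate is dropped
```

In a production mapping, the match key would typically come from identity-matching plans rather than an exact string comparison.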


Read more
Luxury e-commerce platform (well-established organisation)

Agency job
via The Hub by Sridevi Viswanathan
Hyderabad
3 - 8 yrs
₹23L - ₹28L / yr
Machine Learning (ML)
Algorithms
Data mining
Pattern recognition
Digital Signal Processing
Algorithm Engineer
Experience 3 to 8 Years

Skill Set
  • Experience in algorithm development with a focus on signal processing, pattern recognition, machine learning, classification, data mining, and other areas of machine intelligence.
  • Ability to analyse data streams from multiple sensors and develop algorithms to extract accurate and meaningful sport metrics.
  • Should have a deep understanding of IMU sensors and biosensors such as HRM and ECG.
  • A good understanding of power and memory management on embedded platforms.
Responsibilities
  •  Expertise in the design of multitasking, event-driven, real-time firmware using C and understanding of RTOS concepts
  • Knowledge of machine learning and Python, with analytical and methodical approaches to data analysis and verification
  • Prior experience in fitness algorithm development using IMU sensors
  •  Interest in fitness activities and knowledge of human body anatomy
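To illustrate the kind of sensor-stream algorithm the role describes, here is a minimal step-counting sketch over synthetic accelerometer data (the threshold and data are hypothetical; a real product would tune these per device and placement):

```python
import math

def step_count(samples, threshold=11.0):
    """Count steps as upward crossings of an acceleration-magnitude threshold.
    `samples` are (ax, ay, az) tuples in m/s^2. The threshold is illustrative;
    production algorithms use filtering and adaptive peak detection."""
    steps, above = 0, False
    for ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold and not above:
            steps += 1          # rising edge = one step candidate
        above = mag > threshold
    return steps

# Synthetic walk: baseline gravity (~9.8 m/s^2) with three impact spikes.
walk = [(0, 0, 9.8), (0, 0, 12.5), (0, 0, 9.8),
        (0, 0, 13.0), (0, 0, 9.6), (0, 0, 12.2), (0, 0, 9.8)]
steps = step_count(walk)
```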
Read more
A leading US-based MNC

Agency job
via Zeal Consultants by Zeal Consultants
Bengaluru (Bangalore), Hyderabad, Delhi, Gurugram
5 - 10 yrs
₹14L - ₹15L / yr
Google Cloud Platform (GCP)
Spark
PySpark
Apache Spark
Data Streaming

Data Engineering: Senior Engineer / Manager


As a Senior Engineer/Manager in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and you will independently drive design discussions to ensure the overall health of the solution.


Must Have skills :


1. GCP


2. Spark Streaming: live data streaming experience is desired.


3. Any one coding language: Java/Python/Scala



Skills & Experience :


- Overall experience of minimum 5+ years, with at least 4 years of relevant experience in Big Data technologies


- Hands-on experience with the Hadoop stack (HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow) and the other components required to build end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.


- Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred


- Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.


- Well-versed in, with working knowledge of, data platform services on GCP


- Bachelor's degree and 6 to 12 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position


Your Impact :


- Data Ingestion, Integration and Transformation


- Data Storage and Computation Frameworks, Performance Optimizations


- Analytics & Visualizations


- Infrastructure & Cloud Computing


- Data Management Platforms


- Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time


- Build functionality for data analytics, search and aggregation
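The batch and real-time ingestion work above centres on windowed aggregation over event streams. As a framework-free conceptual sketch (Spark Structured Streaming would express this declaratively and incrementally; this toy in-memory version only illustrates the tumbling-window idea):

```python
from collections import defaultdict

def window_counts(events, window_sec=60):
    """Bucket (timestamp_sec, key) events into tumbling windows and count
    occurrences per key. A toy stand-in for a streaming groupBy-window:
    real engines do this incrementally over an unbounded stream with
    watermarks for late data."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - ts % window_sec   # align to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(5, "click"), (30, "view"), (65, "click"), (70, "click")]
agg = window_counts(events)
# Two windows: [0, 60) and [60, 120), counted per event key.
```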

Read more
Blend360

at Blend360

1 recruiter
Posted by VasimAkram Shaik
Hyderabad
5 - 13 yrs
Best in industry
Tableau
SQL
Business Intelligence (BI)
Spotfire
Qlikview
+3 more

Key Responsibilities:


• Design, develop, support, and maintain automated business intelligence products in Tableau.


• Rapidly design, develop, and implement reporting applications that embed KPI metrics and actionable insights into the operational, tactical, and strategic activities of key business functions.


• Communicate effectively with users and other technical teams, with a proven record of success.


•Identify business requirements, design processes that leverage/adapt the business logic and regularly communicate with business stakeholders to ensure delivery meets business needs.


• Design, code, and review business intelligence projects developed in tools such as Tableau and Power BI.


• Work as a team member and lead teams to implement BI solutions for our customers.


•Develop dashboards and data sources that meet and exceed customer requirements.


•Partner with business information architects to understand the business use cases that support and fulfill business and data strategy.


• Partner with Product Owners and cross-functional teams in a collaborative and agile environment.


•Provide best practices for data visualization and Tableau implementations.


•Work along with solution architect in RFI / RFP response solution design, customer presentations, demonstrations, POCs etc. for growth.



Desired Candidate Profile:


• 6-10 years of programming experience and demonstrated proficiency with Tableau; Tableau certifications are highly preferred.


•Ability to architect and scope complex projects.


•Strong understanding of SQL and basic understanding of programming languages; experience with SAQL, SOQL, Python, or R a plus.


•Applied experience in Agile development processes (SCRUM)


•Ability to independently learn new technologies.


•Ability to show initiative and work independently with minimal direction.


•Presentation skills – demonstrated ability to simplify complex situations and ideas and distill them into compelling and effective written and oral presentations.


•Learn quickly – ability to understand and rapidly comprehend new areas, functional and technical, and apply detailed and critical thinking to customer solutions.



Education:


• Bachelor's or Master's degree in Computer Science, Computer Engineering, or quantitative fields such as Statistics, Math, Operations Research, Economics, or Advanced Analytics

Read more
Hyderabad
3 - 4 yrs
₹10L - ₹15L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
TensorFlow
+5 more

At Livello we are building machine-learning-based demand forecasting tools as well as computer-vision-based multi-camera product recognition solutions that detect people and products, tracking inserted/removed items on shelves based on users' hand movements. We build models to determine real-time inventory levels and user behaviour, and to predict how much of each product needs to be reordered so that the right products are delivered to the right locations at the right time to fulfil customer demand.


Responsibilities

  • Lead the CV and DS Team
  • Work in the area of Computer Vision and Machine Learning, with focus on product (primarily food) and people recognition (position, movement, age, gender, DSGVO compliant).
  • Your work will include the formulation and development of machine learning models to solve the underlying problems.
  • You help build our smart supply chain system, keep up to date with the latest algorithmic improvements in forecasting and predictive areas, challenge the status quo
  • Statistical data modelling and machine learning research.
  • Conceptualize, implement and evaluate algorithmic solutions for supply forecasting, inventory optimization, predicting sales, and automating business processes
  • Conduct applied research to model complex dependencies, statistical inference and predictive modelling
  • Technological conception, design and implementation of new features
  • Quality assurance of the software through planning, creation and execution of tests
  • Work with a cross-functional team to define, build, test, and deploy applications


Requirements:

  • Master's/PhD in Mathematics, Statistics, Engineering, Econometrics, Computer Science, or any related field.
  • 3-4 years of experience with computer vision and data science.
  • Relevant Data Science experience, deep technical background in applied data science (machine learning algorithms, statistical analysis, predictive modelling, forecasting, Bayesian methods, optimization techniques).
  • Experience building production-quality and well-engineered Computer Vision and Data Science products.
  • Experience in image processing, algorithms and neural networks.
  • Knowledge of the tools, libraries, and cloud services for Data Science, ideally Google Cloud Platform.
  • Solid Python engineering skills and experience with Python, TensorFlow, and Docker.
  • Cooperative and independent working style, an analytical mindset, and willingness to take responsibility.
  • Fluency in English, both written and spoken.
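As a minimal sketch of the demand-forecasting side of the role (single exponential smoothing over a toy sales history — the smoothing factor and data are hypothetical; real reorder models would also account for lead time, seasonality, and safety stock):

```python
def ses_forecast(history, alpha=0.5):
    """Single exponential smoothing: produce the next-period demand forecast
    for one product. `alpha` weights recent demand against the running level;
    its value here is purely illustrative."""
    level = history[0]
    for demand in history[1:]:
        level = alpha * demand + (1 - alpha) * level
    return level

# Units sold over the last four periods for one hypothetical product.
forecast = ses_forecast([8, 10, 12, 10])
```

The forecast would then feed a reorder rule, e.g. order enough to cover forecast demand over the supplier's lead time.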
Read more
IT MNC
Agency job
via Apical Mind by Madhusudan Patade
Bengaluru (Bangalore), Hyderabad, Noida, Chennai, NCR (Delhi | Gurgaon | Noida)
3 - 12 yrs
₹15L - ₹40L / yr
Presto
Hadoop
SQL

Experience – 3 – 12 yrs

Budget - Open

Location - PAN India (Noida/Bengaluru/Hyderabad/Chennai)


Presto Developer (4)

 

Understanding of distributed SQL query engines running on Hadoop

Design and develop core components for Presto 

Contribute to the ongoing Presto development by implementing new features, bug fixes, and other improvements 

Develop new and extend existing Presto connectors to various data sources 

Lead complex and technically challenging projects from concept to completion 

Write tests and contribute to ongoing automation infrastructure development 

Run and analyze software performance metrics 

Collaborate with teams globally across multiple time zones and operate in an Agile development environment 

Hands-on experience with, and interest in, Hadoop
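Presto's job is to run ANSI-SQL analytics over data held in Hadoop and other sources. As a hedged illustration of the kind of aggregation query involved, here is a sketch using Python's built-in sqlite3 purely as a local stand-in for a Presto session (the table and data are hypothetical; the SQL shape is the point):

```python
import sqlite3

# sqlite3 stands in for a Presto coordinator here; in Presto the same SQL
# would be federated across connectors (Hive, Kafka, RDBMS, ...).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (region TEXT, amount REAL)")
con.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("south", 120.0), ("south", 80.0), ("north", 50.0)],
)
rows = con.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM orders GROUP BY region ORDER BY total DESC"
).fetchall()
```

A Presto connector's role is exactly to expose a source like `orders` as a table that such queries can scan.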

Read more
R&D Company
Bengaluru (Bangalore), Hyderabad
6 - 15 yrs
₹40L - ₹70L / yr
Deep Learning
Machine Learning (ML)
Compiler
LLVM
MLIR
+1 more

Job Title: Chief Engineer: Deep Learning Compiler Expert

 

You will collaborate with experts in machine learning, algorithms and software to lead our effort of deploying machine learning models onto Samsung Mobile AI platform.

In this position, you will contribute to, develop, and enhance our compiler infrastructure for high performance, using open-source technology such as MLIR, LLVM, TVM, and IREE.

 

Necessary Skills / Attributes:

  • 6 to 15 years of experience in the field of compiler design and graph mapping.
  • 2+ years hands-on experience with MLIR and/or LLVM.
  • Experience with multiple toolchains, compilers, and Instruction Set Architectures.
  • Strong knowledge of resource management, scheduling, code generation, and compute graph optimization.
  • Strong expertise in writing production-quality C++ to modern standards (C++17 or newer), following test-driven development principles.
  • Comfortable and experienced in software development life cycle - coding, debugging, optimization, testing, and continuous integration.
  • Familiarity with parallelization techniques for ML acceleration.
  • Experience working on and contributing to an active compiler toolchain codebase, such as LLVM, MLIR, or Glow.
  • Experience in deep learning algorithms and techniques, e.g., convolutional neural networks, recurrent networks, etc.
  • Experience of developing in a mainstream machine-learning framework, e.g. PyTorch, TensorFlow, or Caffe.
  • Experience operating in a fast-moving environment where the workloads evolve at a rapid pace.
  • Understanding of the interplay of hardware and software architectures on future algorithms, programming models and applications.
  • Experience developing innovative architectures to extend the state of the art in DL performance and efficiency.
  • Experience with Hardware and Software Co-design.

 

An M.S. or higher degree in CS/CE/EE or equivalent, with industry or open-source experience.

 

Work Profile:

  • Design, implement and test compiler features and capabilities related to infrastructure and compiler passes.
  • Ingest CNN graphs in PyTorch/TF/TFLite/ONNX format and map them to hardware implementations, model data-flows, create resource-utilization cost-benefit analyses, and estimate silicon performance.
  • Develop graph compiler optimizations (operator fusion, layout optimization, etc) that are customized to each of the different ML accelerators in the system.
  • Integrate open-source and vendor compiler technology into Samsung ML internal compiler infrastructure.
  • Collaborate with Samsung ML acceleration platform engineers to guide the direction of inferencing and provide requirements and feature requests for hardware vendors.
  • Closely follow industry and academic developments in the ML compiler domain and provide performance guidelines and standard methodologies for other ML engineers.
  • Create and optimize compiler backend to leverage the full hardware potential, efficiently optimizing them using novel approaches.
  • Evaluate code performance, debug, diagnose and drive resolution of compiler and cross-disciplinary system issues.
  • Contribute to the development of machine-learning libraries, intermediate representations, export formats and analysis tools.
  • Communicate and collaborate effectively with cross-functional hardware and software engineering teams.
  • Champion engineering and operational excellence, establishing metrics and processes for regular assessment and improvement.
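To make the operator-fusion item above concrete, here is a toy graph-compiler pass in Python (a hypothetical sketch: real compilers such as MLIR or Glow fuse ops on a DAG intermediate representation, not a flat list):

```python
def fuse_conv_relu(graph):
    """Toy fusion pass: merge each adjacent ('conv', 'relu') pair into one
    'conv_relu' node so an accelerator can run them as a single kernel,
    avoiding a round trip to memory for the intermediate tensor.
    `graph` is a linear list of op names for illustration only."""
    fused, i = [], 0
    while i < len(graph):
        if i + 1 < len(graph) and graph[i] == "conv" and graph[i + 1] == "relu":
            fused.append("conv_relu")
            i += 2                       # consume both fused ops
        else:
            fused.append(graph[i])
            i += 1
    return fused

optimized = fuse_conv_relu(["conv", "relu", "pool", "conv", "relu"])
```

Layout optimization, the other example named above, follows the same pattern: a pass that rewrites the graph while preserving its semantics.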

 

Keywords to source candidates

Senior Developer, Deep Learning, Prediction engine, Machine Learning, Compiler

Read more
MNC

Agency job
via Fragma Data Systems by Geeti Gaurav Mohanty
Bengaluru (Bangalore), Hyderabad
3 - 6 yrs
₹10L - ₹15L / yr
Big Data
Spark
ETL
Apache
Hadoop
+2 more
Desired Skills, Experience, Qualifications, and Certifications:
• 5+ years' experience developing and maintaining modern ingestion pipelines using technologies like Spark, Apache NiFi, etc.
• 2+ years' experience with healthcare payors (focusing on Membership, Enrollment, Eligibility, Claims, Clinical)
• Hands-on experience with AWS Cloud and its native components like S3, Athena, Redshift, and Jupyter Notebooks
• Strong in Spark Scala & Python pipelines (ETL & Streaming)
• Strong experience in metadata management tools like AWS Glue
• Strong experience in coding with languages like Java and Python
• Worked on designing ETL & streaming pipelines in Spark Scala / Python
• Good experience in requirements gathering, design & development
• Working with cross-functional teams to meet strategic goals
• Experience in high-volume data environments
• Critical thinking and excellent verbal and written communication skills
• Strong problem-solving and analytical abilities; should be able to work and deliver individually
• Good to have: AWS Developer certification, Scala coding experience, Postman API, and Apache Airflow or similar scheduler experience
• Nice to have: experience in healthcare messaging standards like HL7, CCDA, EDI, 834, 835, 837
• Good communication skills
Read more
Indium Software

at Indium Software

16 recruiters
Posted by Mohamed Aslam
Hyderabad
3 - 7 yrs
₹7L - ₹13L / yr
Python
Spark
SQL
PySpark
HiveQL
+2 more

Indium Software is a niche technology solutions company with deep expertise in Digital, QA, and Gaming. Indium helps customers in their Digital Transformation journey through a gamut of solutions that enhance business value.

With over 1,000 associates globally, Indium operates through offices in the US, UK, and India.

Visit www.indiumsoftware.com to know more.

Job Title: Analytics Data Engineer

What will you do:
The Data Engineer must be an expert in SQL development, supporting the Data and Analytics team in database design, data flow, and analysis activities. The Data Engineer also plays a key role in developing and deploying innovative big data platforms for advanced analytics and data processing, defining and building the data pipelines that enable faster, better, data-informed decision-making within the business.

We ask:

Extensive experience with SQL and a strong ability to process and analyse complex data.

The candidate should also have the ability to design, build, and maintain the business's ETL pipeline and data warehouse, and should demonstrate expertise in data modelling and query performance tuning on SQL Server.
Proficiency in analytics, especially funnel analysis, with hands-on experience of analytical tools like Mixpanel, Amplitude, ThoughtSpot, Google Analytics, and similar.

Should work on the tools and frameworks required for building efficient and scalable data pipelines.
Excellent at communicating and articulating ideas, with the ability to influence others and continuously drive towards a better solution.
Experience working in Python, Hive queries, Spark, PySpark, Spark SQL, and Presto.

  • Relate Metrics to product
  • Programmatic Thinking
  • Edge cases
  • Good Communication
  • Product functionality understanding
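Since funnel analysis is called out above, here is a minimal sketch of what tools like Mixpanel or Amplitude compute under the hood (hypothetical event data; a user counts at a step only if they completed every earlier step):

```python
def funnel_conversion(events, steps):
    """Per-user funnel counts. `events` maps user -> set of completed step
    names; returns how many users survive each successive step."""
    counts = []
    qualified = set(events)
    for step in steps:
        qualified = {u for u in qualified if step in events[u]}
        counts.append(len(qualified))
    return counts

events = {
    "u1": {"visit", "signup", "purchase"},
    "u2": {"visit", "signup"},
    "u3": {"visit"},
}
funnel = funnel_conversion(events, ["visit", "signup", "purchase"])
# Drop-off between adjacent counts is the conversion loss at that step.
```

Real funnel tools add ordering and time-window constraints (the signup must happen after the visit, within N days); this sketch ignores those for brevity.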

Perks & Benefits:
A dynamic, creative & intelligent team that will make you love being at work.
An autonomous, hands-on role where you can make an impact; you will be joining at an exciting time of growth!

Flexible work hours, an attractive pay package, and perks.
An inclusive work environment that lets you work in the way that works best for you!

Read more
I Base IT

1 recruiter
Posted by Sravanthi Alamuri
Hyderabad
9 - 13 yrs
₹10L - ₹23L / yr
Data Analytics
Data Warehouse (DWH)
Data Structures
Spark
Architecture
+4 more
Data Architect who will lead a team of 5 members. Required skills: Spark, Scala, Hadoop.
Read more