8+ Teradata Jobs in India
Responsibilities:
• Designing Hive/HCatalog data models, including table definitions, file formats, and compression techniques for structured and semi-structured data processing
• Implementing Spark-based ETL processing frameworks
• Implementing big data pipelines for data ingestion, storage, processing, and consumption
• Modifying Informatica/Teradata and Unix-based data pipelines
• Enhancing Talend-Hive/Spark and Unix-based data pipelines
• Developing and deploying Scala/Python-based Spark jobs for ETL processing (a minimal PySpark sketch follows this list)
• Applying strong SQL and data warehousing (DWH) concepts
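The bullets above describe a typical Spark ETL flow: ingest semi-structured data, transform it, and persist it in a compressed columnar format for consumption. As a minimal sketch of such a job in PySpark (the paths, columns, and table name below are invented for illustration):

    # PySpark ETL sketch: ingest semi-structured JSON, transform it, and
    # persist it as a Snappy-compressed Parquet table in the Hive metastore.
    # All paths, columns, and table names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = (SparkSession.builder
             .appName("etl-sketch")
             .enableHiveSupport()   # register output in the Hive metastore
             .getOrCreate())

    # Ingestion: read raw semi-structured data from a landing zone
    raw = spark.read.json("/data/landing/orders/")

    # Processing: basic cleansing plus a derived partition column
    orders = (raw
              .filter(F.col("order_id").isNotNull())
              .withColumn("order_date", F.to_date("order_ts")))

    # Storage & consumption: compressed, partitioned table for downstream queries
    (orders.write
           .mode("overwrite")
           .format("parquet")
           .option("compression", "snappy")
           .partitionBy("order_date")
           .saveAsTable("analytics.orders"))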
Preferred Background:
• Function as an integrator between business needs and technology, helping create solutions that meet clients’ business needs
• Lead project efforts in defining scope, planning, executing, and reporting to stakeholders on strategic initiatives
• Understanding of the business’s EDW and Big Data Lake systems, and creating high-level design and low-level implementation documents
• Designing big data pipelines for data ingestion, storage, processing, and consumption
At Vithamas Technologies Pvt Ltd
Required Skills:
• Minimum of 4-6 years of experience in data modeling (including conceptual, logical, and physical data models)
• 2-3 years of experience in Extraction, Transformation and Loading (ETL) work using data migration tools like Talend, Informatica, DataStage, etc.
• 4-6 years of experience as a database developer in Oracle, MS SQL, or another enterprise database, with a focus on building data integration processes
• Candidate should have exposure to a NoSQL technology, preferably MongoDB
• Experience in processing large data volumes, indicated by experience with Big Data platforms (Teradata, Netezza, Vertica, Cloudera, Hortonworks, SAP HANA, Cassandra, etc.)
• Understanding of data warehousing concepts and decision support systems
• Ability to deal with sensitive and confidential material and adhere to worldwide data security
• Experience writing documentation for design and feature requirements
• Experience developing data-intensive applications on cloud-based architectures and infrastructures such as AWS, Azure, etc.
• Excellent communication and collaboration skills
We are looking for a Teradata developer for one of our premium clients. Kindly contact me if interested.
- Must have 4 to 7 years of experience in ETL Design and Development using Informatica Components.
- Should have extensive knowledge of Unix shell scripting.
- Understanding of DW principles (fact and dimension tables, dimensional modelling, and data warehousing concepts).
- Research, develop, document, and modify ETL processes per data architecture and modeling requirements.
- Ensure appropriate documentation for all new development and modifications of the ETL processes and jobs.
- Should be good at writing complex SQL queries (an illustrative star-schema sketch follows this list).
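Since the bullets above lean on dimensional modelling and complex SQL, here is a small self-contained sketch of the fact/dimension pattern, using Python's built-in sqlite3 module as a stand-in for the warehouse; the schema and rows are invented for illustration:

    # Star-schema sketch: a fact table keyed to a dimension table, plus an
    # analytic query using a window function. All names and data are invented.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE dim_customer (
            customer_key  INTEGER PRIMARY KEY,
            customer_name TEXT,
            region        TEXT
        );
        CREATE TABLE fact_sales (
            sale_id      INTEGER PRIMARY KEY,
            customer_key INTEGER REFERENCES dim_customer(customer_key),
            sale_date    TEXT,
            amount       REAL
        );
        INSERT INTO dim_customer VALUES (1, 'Acme', 'South'), (2, 'Globex', 'North');
        INSERT INTO fact_sales VALUES
            (10, 1, '2024-01-05', 120.0),
            (11, 1, '2024-02-07', 80.0),
            (12, 2, '2024-01-20', 250.0);
    """)

    # Per-region totals, ranked with a window function over the aggregate
    for row in con.execute("""
        SELECT region, total_sales,
               RANK() OVER (ORDER BY total_sales DESC) AS sales_rank
        FROM (
            SELECT d.region AS region, SUM(f.amount) AS total_sales
            FROM fact_sales f
            JOIN dim_customer d USING (customer_key)
            GROUP BY d.region
        )
    """):
        print(row)   # e.g. ('North', 250.0, 1)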
- Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps tools, and Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume, and Kafka
- Would get the chance to be part of enterprise-grade implementations of Cloud and Big Data systems
- Will play an active role in setting up a modern data platform based on Cloud and Big Data
- Would be part of teams with rich experience in various aspects of distributed systems and computing.
• You will utilize your configuration management and software release experience, as well as change management concepts, to drive the success of the projects.
• You will partner with senior leaders to understand business needs and translate them into IT requirements, and consult with the customer’s business analysts on their data warehouse requirements.
• You will assist the technical team in identifying and resolving data quality issues (a small example of such checks appears after this list).
• You will manage small to medium-sized projects relating to the delivery of applications or application changes.
• You will use Managed Services or third-party resources to meet application support requirements.
• You will interface daily with multi-functional team members within the EDW team and across the enterprise to resolve issues.
• Recommend and advocate different approaches and designs for meeting the requirements
• Write technical design docs
• Execute data modelling
• Provide solution inputs for the presentation layer
• You will craft and generate summary, statistical, and presentation reports, as well as provide reporting and metrics for strategic initiatives.
• Perform miscellaneous job-related duties as assigned
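Where the responsibilities above mention data quality issues, the first-line checks in practice are often as simple as profiling for missing values and duplicate keys. A tiny illustrative sketch using pandas (the frame and columns are invented):

    # Data-quality sketch: completeness and uniqueness checks on a
    # hypothetical customer extract.
    import pandas as pd

    df = pd.DataFrame({
        "customer_id": [1, 2, 2, None],
        "email": ["a@x.com", None, "b@x.com", "c@x.com"],
    })

    # Completeness: missing values per column
    print(df.isna().sum())

    # Uniqueness: rows sharing a key that should be unique
    print(df[df.duplicated(subset=["customer_id"], keep=False)])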
Preferred Qualifications
• Strong interpersonal, teamwork, organizational and workload planning skills
• Strong analytical, evaluative, and problem-solving abilities as well as exceptional customer service orientation
• Ability to drive clarity of purpose and goals during release and planning activities
• Excellent organizational skills including ability to prioritize tasks efficiently with high level of attention to detail
• Excited by the opportunity to continually improve processes within a large company
• Healthcare or automobile industry background
• Familiarity with major big data solutions and products available in the market.
• Proven ability to drive continuous improvement
They provide both wholesale and retail funding.
- Key responsibility is to design, develop, and maintain efficient data models for the organization, ensuring optimal query performance for the consumption layer.
- Developing, deploying, and maintaining a repository of UDXs written in Java/Python (a neutral sketch follows this list).
- Develop optimal data model designs, analyze complex distributed data deployments, and make recommendations to optimize performance based on data consumption patterns, performance expectations, the queries executed against the tables/databases, etc.
- Perform periodic database health checks and maintenance
- Designing collections in a NoSQL database for efficient performance
- Document and maintain a data dictionary from various sources to enable data governance
- Coordinate with business teams, IT, and other stakeholders to provide best-in-class data pipeline solutions, exposing data via APIs and loading data into downstream systems, NoSQL databases, etc.
- Implement data governance processes and ensure data security
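UDX registration syntax varies by warehouse (Redshift, Snowflake, Vertica, and Teradata each have their own mechanism), so as a neutral, runnable illustration of the idea, here is a Python function registered as a SQL-callable UDF in PySpark; the function, table, and column names are invented:

    # UDF sketch: expose a Python function to SQL. PySpark is used here only
    # as a neutral stand-in for warehouse-specific UDX mechanisms.
    from pyspark.sql import SparkSession
    from pyspark.sql.types import StringType

    spark = SparkSession.builder.appName("udx-sketch").getOrCreate()

    def mask_id(value):
        """Mask all but the last four characters of an identifier."""
        if value is None:
            return None
        return "*" * max(len(value) - 4, 0) + value[-4:]

    spark.udf.register("mask_id", mask_id, StringType())

    # Hypothetical usage: mask a PAN column in SQL
    spark.createDataFrame([("ABCDE1234F",)], ["pan"]).createOrReplaceTempView("loans")
    spark.sql("SELECT mask_id(pan) AS masked_pan FROM loans").show()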
Requirements
- Extensive working experience in Designing & Implementing Data models in OLAP Data Warehousing solutions (Redshift, Synapse, Snowflake, Teradata, Vertica, etc).
- Programming experience using Python / Java.
- Working knowledge in developing & deploying User-defined Functions (UDXs) using Java / Python.
- Strong understanding & extensive working experience in OLAP Data Warehousing (Redshift, Synapse, Snowflake, Teradata, Vertica, etc) architecture and cloud-native Data Lake (S3, ADLS, BigQuery, etc) Architecture.
- Strong knowledge in Design, Development & Performance tuning of 3NF/Flat/Hybrid Data Model.
- Extensive technical experience in SQL including code optimization techniques.
- Strong knowledge of database performance tuning and troubleshooting.
- Knowledge of collection design in any NoSQL DB (DynamoDB, MongoDB, CosmosDB, etc.), along with implementation of best practices (a brief sketch follows this list).
- Ability to understand business functionality, processes, and flows.
- Good combination of technical and interpersonal skills with strong written and verbal communication; detail-oriented with the ability to work independently.
- Any OLAP DWH DBA experience and user management exposure will be an added advantage.
- Knowledge of financial industry-specific data models such as FSLDM, the IBM Financial Data Model, etc. will be an added advantage.
- Experience in Snowflake will be an added advantage.
- Working experience in BFSI/NBFC and an understanding of loan/mortgage data will be an added advantage.
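On the collection-design point above, a common best practice is to embed data that is read together in one document and to index the dominant query pattern. A brief pymongo sketch (the connection string, database, and fields are hypothetical, loosely themed on the loan data mentioned above):

    # MongoDB collection-design sketch: embed the repayment schedule so one
    # read serves the common view, and index the usual lookup fields.
    from pymongo import MongoClient, ASCENDING

    client = MongoClient("mongodb://localhost:27017")
    loans = client["lending"]["loans"]

    loans.insert_one({
        "loan_id": "LN-1001",
        "customer_id": "C-42",
        "status": "active",
        "emis": [
            {"due_date": "2024-02-01", "amount": 5000, "paid": True},
            {"due_date": "2024-03-01", "amount": 5000, "paid": False},
        ],
    })

    # Compound index matching the query "active loans for a customer"
    loans.create_index([("customer_id", ASCENDING), ("status", ASCENDING)])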
Functional knowledge
- Data Governance & Quality Assurance
- Modern OLAP Database Architecture & Design
- Linux
- Data structures, algorithms, and data modeling techniques
- No-SQL database architecture
- Data Security
- Experience: 6-10 years
- Key skills for Software Developer (C++, Linux with SQL):
  - Looking only for candidates who can join immediately or within a maximum of 10 days
  - Should be able to work independently with no handholding
  - Ability to work with C/C++ code on Windows/Linux platforms
  - Database knowledge of MS SQL, Oracle, and MySQL/MariaDB; ideally also SAP HANA, Teradata, and Postgres
  - Ability to work on their own fixing defects
  - Should understand secure coding practices
  - Should be able to work independently and with the team across different time zones
- Experience: 4-8 years
- Key skills for Software Developer (C++, Linux):
  - Looking only for candidates who can join immediately or within a maximum of 10 days
  - Should be able to work independently with no handholding
  - Ability to work with C/C++ code on Windows/Linux platforms
  - Database knowledge of MS SQL, Oracle, and MySQL/MariaDB; ideally also SAP HANA, Teradata, and Postgres
  - Ability to work on their own fixing defects
  - Should understand secure coding practices
  - Should be able to work independently and with the team across different time zones