4+ Teradata Jobs in Pune

Required Technical Skill Set: Teradata with marketing campaign knowledge, and SAS
Desired Competencies (Technical/Behavioral)
Must-Have
1. Advanced coding skills in Teradata SQL and SAS are required (see the sketch after this list)
2. Experience with customer segmentation, marketing optimization, and marketing automation. Thorough understanding of customer contact management principles
3. Design and execution of campaigns on consumer and business products using Teradata Communication Manager and in-house tools
4. Analyzing the effectiveness of campaigns to generate insights and improve future campaigns
5. Timely resolution of Marketing team queries and other ad-hoc requests
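To ground the first requirement, here is a minimal sketch of a customer-segmentation pull written in Teradata SQL and driven from Python via Teradata's teradatasql driver. The connection details and the cust_txn table and its columns are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch: tier customers by 90-day spend in Teradata SQL.
# Connection details and the cust_txn table/columns are hypothetical.
import teradatasql

SEGMENT_QUERY = """
SELECT cust_id,
       SUM(spend_amt) AS total_spend,
       COUNT(*)       AS txn_count,
       CASE
           WHEN SUM(spend_amt) >= 100000 THEN 'HIGH_VALUE'
           WHEN SUM(spend_amt) >= 20000  THEN 'MID_VALUE'
           ELSE 'LOW_VALUE'
       END AS segment
FROM cust_txn
WHERE txn_date >= CURRENT_DATE - INTERVAL '90' DAY
GROUP BY cust_id
"""

with teradatasql.connect(host="tdhost", user="user", password="secret") as con:
    with con.cursor() as cur:
        cur.execute(SEGMENT_QUERY)
        for cust_id, total_spend, txn_count, segment in cur.fetchall():
            print(cust_id, total_spend, txn_count, segment)
```

In a campaign context, a segment column like this would typically feed contact lists in Teradata Communication Manager or an equivalent in-house tool.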
Good-to-Have
1. Awareness of CRM tools, processes, and automation
2. Knowledge of commercial databases is preferable
3. People & team management skills
Responsibilities:
• Designing Hive/HCatalog data models, including table definitions, file formats, and compression techniques for structured and semi-structured data processing (see the sketch after this list)
• Implementing Spark-based ETL frameworks
• Implementing big data pipelines for data ingestion, storage, processing, and consumption
• Modifying the Informatica-Teradata and Unix-based data pipelines
• Enhancing the Talend-Hive/Spark and Unix-based data pipelines
• Developing and deploying Scala/Python-based Spark jobs for ETL processing
• Applying strong SQL and data warehousing (DWH) concepts
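As a concrete illustration of the Hive data-model and Spark ETL responsibilities above, the following PySpark sketch creates an ORC-backed, Snappy-compressed, date-partitioned Hive table and loads it from semi-structured JSON. The dwh.orders table, the landing path, and the column names are assumptions for the example, not details from this posting.

```python
# PySpark sketch: Hive/HCatalog table definition (ORC + Snappy,
# partitioned by load date) plus a small ETL load from raw JSON.
# Database, table, path, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("orders-etl")
         .enableHiveSupport()
         .getOrCreate())

# Allow dynamic partition inserts into the partitioned table.
spark.conf.set("hive.exec.dynamic.partition", "true")
spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")

# Hive data model: columnar ORC storage with Snappy compression,
# partitioned by load date so queries can prune partitions.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dwh.orders (
        order_id  BIGINT,
        cust_id   BIGINT,
        order_amt DECIMAL(12,2)
    )
    PARTITIONED BY (load_dt DATE)
    STORED AS ORC
    TBLPROPERTIES ('orc.compress' = 'SNAPPY')
""")

# ETL: ingest semi-structured JSON, filter bad rows, stamp the load
# date, and append to the table. insertInto matches by position, so
# select the columns in table order with the partition column last.
raw = spark.read.json("/data/landing/orders/")
clean = (raw.where(F.col("order_amt") > 0)
            .select("order_id", "cust_id", "order_amt")
            .withColumn("load_dt", F.current_date()))
clean.write.mode("append").insertInto("dwh.orders")
```

The ingestion stage is where an existing Talend or Informatica feed would typically plug in; the table definition and load pattern stay the same.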
Preferred Background:
• Function as an integrator between business needs and technology, helping to create solutions that meet clients’ business needs
• Lead project efforts in defining scope, planning, executing, and reporting to stakeholders on strategic initiatives
• Understanding the business’s EDW system and creating high-level design and low-level implementation documents for it
• Understanding the business’s Big Data Lake system and creating high-level design and low-level implementation documents for it
• Designing big data pipelines for data ingestion, storage, processing, and consumption
We are looking for a Teradata developer for one of our premium clients. Kindly contact me if interested.
- Must have 4 to 7 years of experience in ETL Design and Development using Informatica Components.
- Should have extensive knowledge of Unix shell scripting.
- Understanding of DW principles (fact and dimension tables, dimensional modelling, and data warehousing concepts).
- Research, develop, document, and modify ETL processes as per data architecture and modelling requirements.
- Ensure appropriate documentation for all new development and modifications of the ETL processes and jobs.
- Should be good at writing complex SQL queries (see the sample query after this list).
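As a sketch of the dimensional-modelling and complex-SQL expectations in this list, the query below aggregates a hypothetical star schema (a fact_sales table joined to dim_date and dim_region) and runs through the same teradatasql harness shown earlier. Every table, column, and connection detail here is illustrative, not from the posting.

```python
# Star-schema sketch: a fact table joined to two dimensions and
# aggregated by month and region. All table and column names are
# hypothetical, as are the connection details.
import teradatasql

MONTHLY_SALES_BY_REGION = """
SELECT d.year_month,
       r.region_name,
       SUM(f.sale_amt)            AS total_sales,
       COUNT(DISTINCT f.cust_key) AS active_customers
FROM fact_sales f                  -- grain: one row per sale
JOIN dim_date   d ON f.date_key   = d.date_key
JOIN dim_region r ON f.region_key = r.region_key
WHERE d.year_month BETWEEN '2024-01' AND '2024-12'
GROUP BY d.year_month, r.region_name
ORDER BY d.year_month, total_sales DESC
"""

with teradatasql.connect(host="tdhost", user="user", password="secret") as con:
    with con.cursor() as cur:
        cur.execute(MONTHLY_SALES_BY_REGION)
        for row in cur.fetchall():
            print(row)
```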
- Selected candidates will be provided training opportunities in one or more of the following: Google Cloud, AWS, DevOps tools, and Big Data technologies such as Hadoop, Pig, Hive, Spark, Sqoop, Flume, and Kafka
- Would get a chance to be part of enterprise-grade implementations of Cloud and Big Data systems
- Will play an active role in setting up a modern data platform based on Cloud and Big Data
- Would be part of teams with rich experience in various aspects of distributed systems and computing.