4+ Teradata Jobs in Pune

Job Title : Ab Initio Developer
Location : Pune
Experience : 5+ Years
Notice Period : Immediate Joiners Only
Job Summary :
We are looking for an experienced Ab Initio Developer to join our team in Pune.
The ideal candidate should have strong hands-on experience in Ab Initio development, data integration, and Unix scripting, with a solid understanding of SDLC and data warehousing concepts.
Mandatory Skills :
Ab Initio (GDE, EME, graphs, parameters), SQL/Teradata, Data Warehousing, Unix Shell Scripting, Data Integration, DB Load/Unload Utilities.
Key Responsibilities :
- Design and develop Ab Initio graphs/plans/sandboxes/projects using GDE and EME.
- Manage and configure standard environment parameters and multifile systems.
- Perform complex data integration across multiple source and target systems, applying business-rule transformations.
- Utilize DB Load/Unload Utilities effectively for optimized performance.
- Implement generic graphs, ensure proper use of parallelism, and maintain project parameters.
- Work in a data warehouse environment involving SDLC, ETL processes, and data analysis.
- Write and maintain Unix Shell Scripts and use utilities such as sed and awk (a minimal Python stand-in for this kind of step is sketched after this list).
- Optimize and troubleshoot performance issues in Ab Initio jobs.
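A minimal Python sketch of the file-level cleanup the Unix scripting bullet above refers to. The pipe-delimited layout, the file paths, and the assumption that the third field is a numeric amount are all hypothetical, not taken from the posting:

```python
import csv

def clean_feed(in_path: str, out_path: str) -> int:
    """Trim whitespace, drop blank records, and skip rows whose amount
    field is not numeric, so the file is safe to hand to a DB load utility.
    Returns the number of rows written."""
    written = 0
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.reader(src, delimiter="|")   # pipe-delimited feed (assumed layout)
        writer = csv.writer(dst, delimiter="|")
        for row in reader:
            if not row or all(not field.strip() for field in row):
                continue                           # skip blank or empty records
            row = [field.strip() for field in row]
            try:
                float(row[2])                      # assumed: third field is an amount
            except (IndexError, ValueError):
                continue                           # reject malformed rows
            writer.writerow(row)
            written += 1
    return written
```

In an actual pipeline this step would typically be a shell script using sed/awk as the posting describes; the Python version is shown only to make the transformation explicit.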
Mandatory Skills :
- Strong expertise in Ab Initio (GDE, EME, graphs, parallelism, DB utilities, multifile systems).
- Experience with SQL and databases like SQL Server or Teradata.
- Proficiency in Unix Shell Scripting and Unix utilities.
- Data integration and ETL from varied source/target systems.
Good to Have :
- Experience integrating Ab Initio with AWS.
- Knowledge of Message Queues and Continuous Graphs.
- Exposure to Metadata Hub.
- Familiarity with Big Data tools such as Hive, Impala.
- Understanding of job scheduling tools.
Responsibilities:
• Designing Hive/HCatalog data models, including table definitions, file formats, and compression techniques for structured and semi-structured data processing
• Implementing Spark-based ETL processing frameworks
• Implementing Big Data pipelines for Data Ingestion, Storage, Processing & Consumption
• Modifying the Informatica-Teradata and Unix-based data pipelines
• Enhancing the Talend-Hive/Spark and Unix-based data pipelines
• Developing and deploying Scala/Python-based Spark jobs for ETL processing (see the PySpark sketch after this list)
• Applying strong SQL and data warehousing (DWH) concepts
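As referenced above, a minimal PySpark sketch of the ingest-transform-store pattern these responsibilities describe. The landing path, column names, and target table (analytics.orders) are hypothetical, and a SparkSession with Hive support is assumed:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders_ingest")                 # hypothetical job name
    .enableHiveSupport()                      # needed to write Hive/HCatalog tables
    .getOrCreate()
)

# Ingestion: semi-structured JSON dropped in a landing zone (path is hypothetical).
raw = spark.read.json("/data/landing/orders/")

# Processing: basic cleansing and a derived partition column.
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("load_date", F.to_date("order_ts"))
)

# Storage/consumption: partitioned Parquet with Snappy compression,
# registered in the Hive metastore for downstream consumers.
(
    orders.write
          .mode("overwrite")
          .format("parquet")
          .option("compression", "snappy")
          .partitionBy("load_date")
          .saveAsTable("analytics.orders")    # hypothetical database.table
)
```

Writing partitioned, Snappy-compressed Parquet and registering the table in the metastore illustrates the table-definition, file-format, and compression choices mentioned in the Hive/HCatalog bullet.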
Preferred Background:
• Function as an integrator between business needs and technology, helping to create solutions that meet clients' business requirements
• Lead project efforts in defining scope, planning, executing, and reporting to stakeholders on strategic initiatives
• Understanding of the business's EDW systems and creating high-level design and low-level implementation documents
• Understanding of the business's Big Data Lake systems and creating high-level design and low-level implementation documents
• Designing Big data pipeline for Data Ingestion, Storage, Processing & Consumption
We are looking for a Teradata developer for one of our premium clients. Kindly contact me if interested.
- Must have 4 to 7 years of experience in ETL Design and Development using Informatica Components.
- Should have extensive knowledge in Unix shell scripting.
- Understanding of DW principles (Fact, Dimension tables, Dimensional Modelling and Data warehousing concepts).
- Research, develop, document, and modify ETL processes as per data architecture and modeling requirements.
- Ensure appropriate documentation for all new development and modifications of the ETL processes and jobs.
- Should be proficient in writing complex SQL queries (a sketch of a typical star-schema query follows this list).
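A sketch of the kind of star-schema query the SQL and dimensional-modelling points above refer to, wrapped so it can run over any DB-API 2.0 connection (for Teradata, the teradatasql driver is one option). All table and column names are hypothetical:

```python
# Fact table joined to two dimensions; names are illustrative only.
DAILY_SALES_BY_REGION = """
SELECT d.calendar_date,
       r.region_name,
       SUM(f.sale_amount) AS total_sales
FROM   sales_fact f
JOIN   date_dim   d ON f.date_key   = d.date_key
JOIN   region_dim r ON f.region_key = r.region_key
WHERE  d.calendar_date BETWEEN DATE '2024-01-01' AND DATE '2024-01-31'
GROUP  BY d.calendar_date, r.region_name
ORDER  BY d.calendar_date, total_sales DESC
"""

def run_report(connection):
    """Execute the report on a DB-API 2.0 connection supplied by the caller."""
    cur = connection.cursor()
    try:
        cur.execute(DAILY_SALES_BY_REGION)
        return cur.fetchall()
    finally:
        cur.close()
```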
- Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps tools, and Big Data technologies such as Hadoop, Pig, Hive, Spark, Sqoop, Flume, and Kafka
- Selected candidates will get the chance to be part of enterprise-grade implementations of Cloud and Big Data systems
- Will play an active role in setting up a modern data platform based on Cloud and Big Data
- Will be part of teams with rich experience in various aspects of distributed systems and computing.