ETL Developer

Agency job
4 - 15 yrs
₹5L - ₹25L / yr (ESOP available)
Coimbatore
Skills
ETL
Big Data
Hi Professionals,
We are looking for an ETL Developer for a reputed client in Coimbatore (permanent role).
Work Location: Coimbatore
Experience: 4+ years
Skills:
  • Strong experience in Talend or any other ETL tool (Informatica / DataStage / Talend)
  • Database preference: Teradata / Oracle / SQL Server
  • Supporting tools: JIRA / SVN
Notice Period: Immediate to 30 days


Similar jobs

globe teleservices
Posted by Deepshikha Thapar
Bengaluru (Bangalore)
5 - 10 yrs
₹20L - ₹25L / yr
ETL
Python
Informatica
Talend



  • Good experience in the Extraction, Transformation, and Loading (ETL) of data from various sources into Data Warehouses and Data Marts using Informatica PowerCenter (Repository Manager, Designer, Workflow Manager, Workflow Monitor, Metadata Manager) and PowerConnect as ETL tools on Oracle and SQL Server databases.
  • Knowledge of Data Warehouse/Data Mart, ODS, OLTP, and OLAP implementations, together with project scoping, analysis, requirements gathering, data modeling, ETL design, development, system testing, implementation, and production support.
  • Strong experience in dimensional modeling using Star and Snowflake schemas, and in identifying facts and dimensions.
  • Used various transformations such as Filter, Expression, Sequence Generator, Update Strategy, Joiner, Stored Procedure, and Union to develop robust mappings in the Informatica Designer.
  • Developed mapping parameters and variables to support SQL overrides.
  • Created mapplets for reuse across different mappings.
  • Created sessions and configured workflows to extract data from various sources, transform it, and load it into the data warehouse.
  • Used Type 1 and Type 2 SCD mappings to update Slowly Changing Dimension tables.
  • Modified existing mappings to accommodate new business requirements.
  • Involved in performance tuning at the source, target, mapping, session, and system levels.
  • Prepared migration documents to move mappings from development to testing and then to production repositories.
  • Extensive experience in developing stored procedures, functions, views, and triggers, and complex SQL queries using PL/SQL.
  • Experience in resolving ongoing maintenance issues and bug fixes; monitoring Informatica/Talend sessions as well as performance tuning of mappings and sessions.
  • Experience in all phases of data warehouse development, from requirements gathering through code development, unit testing, and documentation.
  • Extensive experience in writing UNIX shell scripts and automating ETL processes using UNIX shell scripting.
  • Experience in using automation scheduling tools like Control-M.
  • Hands-on experience across all stages of the Software Development Life Cycle (SDLC), including business requirement analysis, data mapping, build, unit testing, systems integration, and user acceptance testing.
  • Build, operate, monitor, and troubleshoot Hadoop infrastructure.
  • Develop tools and libraries, and maintain processes for other engineers to access data and write MapReduce programs.
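The Type 1 / Type 2 SCD bullet above is the core of dimension maintenance. As an illustration only (not the Informatica implementation, which expresses this with Update Strategy transformations inside a mapping), a Type 2 change can be sketched in plain Python, using a hypothetical customer dimension keyed on `key` and tracking a `city` attribute:

```python
from datetime import date

def apply_scd2(dimension, incoming, today=None):
    """Apply Type 2 SCD logic: expire the current row and insert a new
    version whenever a tracked attribute changes. Mutates expired rows
    in place, as a warehouse UPDATE would."""
    today = today or date.today().isoformat()
    by_key = {row["key"]: row for row in dimension if row["current"]}
    out = list(dimension)
    for rec in incoming:
        cur = by_key.get(rec["key"])
        if cur is None:
            # New key: insert the first version.
            out.append({**rec, "valid_from": today, "valid_to": None, "current": True})
        elif cur["city"] != rec["city"]:
            # Changed attribute: close the old version, open a new one.
            cur["valid_to"] = today
            cur["current"] = False
            out.append({**rec, "valid_from": today, "valid_to": None, "current": True})
    return out
```

A Type 1 mapping would instead overwrite `city` in place, losing history.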

Bengaluru (Bangalore)
1 - 6 yrs
₹2L - ₹8L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+9 more

ROLE AND RESPONSIBILITIES

Should be able to work as an individual contributor and maintain good relationships with stakeholders. Should be proactive in learning new skills as business requirements evolve. Familiar with extracting relevant data and cleansing and transforming it into insights that drive business value, through the use of data analytics, data visualization, and data modeling techniques.

QUALIFICATIONS AND EDUCATION REQUIREMENTS

Technical Bachelor's degree.

Non-technical degree holders should have 1+ years of relevant experience.

Shiprocket
Posted by Kailuni Lanah
Gurugram
4 - 10 yrs
₹25L - ₹35L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more

We are seeking an experienced Senior Data Platform Engineer to join our team. The ideal candidate should have extensive experience with PySpark, Airflow, Presto, Hive, Kafka, and Debezium, and should be passionate about developing scalable and reliable data platforms.

Responsibilities:

  • Design, develop, and maintain our data platform architecture using Pyspark, Airflow, Presto, Hive, Kafka, and Debezium.
  • Develop and maintain ETL processes to ingest, transform, and load data from various sources into our data platform.
  • Work closely with data analysts, data scientists, and other stakeholders to understand their requirements and design solutions that meet their needs.
  • Implement and maintain data governance policies and procedures to ensure data quality, privacy, and security.
  • Continuously monitor and optimize the performance of our data platform to ensure scalability, reliability, and cost-effectiveness.
  • Keep up-to-date with the latest trends and technologies in the field of data engineering and share knowledge and best practices with the team.
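The design-and-maintain responsibility above centres on expressing pipelines as DAGs of dependent tasks. As a rough sketch of that idea in plain Python (task names are hypothetical; a real platform would use Airflow DAGs, operators, and a scheduler rather than this toy runner):

```python
def run_dag(tasks, deps):
    """Run tasks in dependency order (Kahn-style topological execution).

    tasks: {name: callable}; deps: {name: [upstream task names]}.
    """
    done, order = set(), []
    pending = dict(deps)
    while pending:
        # A task is ready once every upstream dependency has completed.
        ready = [t for t, ups in pending.items() if all(u in done for u in ups)]
        if not ready:
            raise ValueError("cycle detected in DAG")
        for t in sorted(ready):  # sorted for deterministic ordering
            tasks[t]()
            done.add(t)
            order.append(t)
            del pending[t]
    return order

results = []
tasks = {
    "extract": lambda: results.append("extract"),
    "transform": lambda: results.append("transform"),
    "load": lambda: results.append("load"),
}
deps = {"extract": [], "transform": ["extract"], "load": ["transform"]}
run_dag(tasks, deps)
```

The same extract → transform → load ordering is what an Airflow scheduler enforces, with retries, backfills, and monitoring added on top.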

Requirements:

  • Bachelor's degree in Computer Science, Information Technology, or related field.
  • 5+ years of experience in data engineering or related fields.
  • Strong proficiency in PySpark, Airflow, Presto, Hive, data lakes, and Debezium.
  • Experience with data warehousing, data modeling, and data governance.
  • Experience working with large-scale distributed systems and cloud platforms (e.g., AWS, GCP, Azure).
  • Strong problem-solving skills and ability to work independently and collaboratively.
  • Excellent communication and interpersonal skills.

If you are a self-motivated and driven individual with a passion for data engineering and a strong background in PySpark, Airflow, Presto, Hive, data lakes, and Debezium, we encourage you to apply for this exciting opportunity. We offer competitive compensation, comprehensive benefits, and a collaborative work environment that fosters innovation and growth.

Amagi Media Labs
Posted by Rajesh C
Bengaluru (Bangalore), Chennai
12 - 15 yrs
₹50L - ₹60L / yr
Data Science
Machine Learning (ML)
ETL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+5 more
Job Title: Data Architect
Job Location: Chennai

Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a
Data Architecture strategy across various Data Lake platforms. You will help develop
reference architecture and roadmaps to build highly available, scalable and distributed
data platforms using cloud based solutions to process high volume, high velocity and
wide variety of structured and unstructured data. This role is also responsible for driving
innovation, prototyping, and recommending solutions. Above all, you will influence how
users interact with Condé Nast's industry-leading journalism.
Primary Responsibilities
Data Architect is responsible for
• Demonstrated technology and personal leadership experience in architecting,
designing, and building highly scalable solutions and products.
• Enterprise scale expertise in data management best practices such as data integration,
data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration
frameworks, highly scalable distributed systems using open source and emerging data
architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is
highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor
technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory
databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies
is desirable.
• This role requires 15+ years of data solution architecture, design and development
delivery experience.
• Solid experience in Agile methodologies (Kanban and SCRUM)
Required Skills
• Very Strong Experience in building Large Scale High Performance Data Platforms.
• Passionate about technology and delivering solutions for difficult and intricate
problems. Current on relational and NoSQL databases in the cloud.
• Proven leadership skills, demonstrated ability to mentor, influence, and partner with
cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, Datastage etc) /ELT
and data integration technologies.
• Experience in any one of Object Oriented Programming (Java, Scala, Python) and
Spark.
• Creative view of markets and technologies combined with a passion to create the
future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and data lakes
is mandatory.
• Good understanding of emerging technologies and its applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS etc.
• Understanding of Hadoop Architecture and Hive SQL
• Knowledge of at least one workflow orchestration tool
• Understanding of Agile framework and delivery

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure in Workflow Orchestration like Airflow is a plus
● Exposure in any one of the NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a
plus
● Understanding of Digital web events, ad streams, context models

About Condé Nast

CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms - in other words, a staggering amount of user data. Condé Nast made the right
move to invest heavily in understanding this data and formed a whole new Data team
entirely dedicated to data processing, engineering, analytics, and visualization. This team
helps drive engagement, fuel process innovation, further content enrichment, and increase
market revenue. The Data team aimed to create a company culture where data was the
common language and facilitate an environment where insights shared in real-time could
improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The
team at Condé Nast Chennai works extensively with data to amplify its brands' digital
capabilities and boost online revenue. We are broadly divided into four groups, Data
Intelligence, Data Engineering, Data Science, and Operations (including Product and
Marketing Ops, Client Services) along with Data Strategy and monetization. The teams built
capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are
Condé Nast, and It Starts Here.
GradMener Technology Pvt. Ltd.
Pune
2 - 5 yrs
₹3L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
SQL
Oracle
Job Description:

Roles and Responsibilities:

  • Designing and coding the data warehousing system to desired company specifications 
  • Conducting preliminary testing of the warehousing environment before data is extracted
  • Extracting company data and transferring it into the new warehousing environment
  • Testing the new storage system once all the data has been transferred
  • Troubleshooting any issues that may arise
  • Providing maintenance support
  • Consulting with data management teams to get a big-picture idea of the company’s data storage needs
  • Presenting the company with warehousing options based on their storage needs
Requirements:
  • Experience of 1-3 years in Informatica PowerCenter
  • Excellent knowledge of Oracle database and PL/SQL, including stored procedures, functions, user-defined functions, table partitioning, indexes, views, etc.
  • Knowledge of SQL Server database
  • Hands-on experience in Informatica PowerCenter and database performance tuning and optimization, including complex query optimization techniques; understanding of ETL control frameworks
  • Experience in UNIX shell / Perl scripting
  • Good communication skills, including the ability to write clearly
  • Able to function effectively as a member of a team
  • Proactive with respect to personal and technical development
GitHub
Posted by Nataliia Mediana
Remote only
3 - 8 yrs
$24K - $60K / yr
ETL
PySpark
Data engineering
Data engineer
athena
+9 more
We are a nascent quant hedge fund; we need to stage financial data and make it easy to run and re-run various preprocessing and ML jobs on the data.
- We are looking for an experienced data engineer to join our team.
- The preprocessing involves ETL tasks using PySpark and AWS Glue, staging data in Parquet format on S3, and querying it with Athena.

To succeed in this data engineering position, you should care about well-documented, testable code and data integrity. We have devops who can help with AWS permissions.
We would like to build up a consistent data lake with staged, ready-to-use data, and to build up various scripts that will serve as blueprints for various additional data ingestion and transforms.

If you enjoy setting up something which many others will rely on, and have the relevant ETL expertise, we’d like to work with you.
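Staging Parquet data on S3 so Athena can query it typically relies on Hive-style `key=value` partition directories. A small sketch of that path convention (the bucket, table, and partition columns here are hypothetical, not taken from the posting):

```python
from datetime import date

def partition_path(bucket, table, dt, symbol):
    """Build a Hive-style partition path such as
    s3://bucket/table/dt=2024-01-02/symbol=AAPL/ -- Athena and Glue map
    the key=value directory segments to partition columns, so queries
    filtered on dt or symbol only scan the matching prefixes."""
    return f"s3://{bucket}/{table}/dt={dt.isoformat()}/symbol={symbol}/"

path = partition_path("quant-data-lake", "trades", date(2024, 1, 2), "AAPL")
```

In a Glue/PySpark job the same layout falls out of `partitionBy("dt", "symbol")` on the Parquet writer.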

Responsibilities
- Analyze and organize raw data
- Build data pipelines
- Prepare data for predictive modeling
- Explore ways to enhance data quality and reliability
- Potentially, collaborate with data scientists to support various experiments

Requirements
- Previous experience as a data engineer with the above technologies
Bengaluru (Bangalore)
5 - 6 yrs
₹8L - ₹10L / yr
Data migration
Data Warehouse (DWH)
ETL
SQL
PostgreSQL
+4 more
  • Excellent working knowledge on Data Warehousing /Data Migration activity using an ETL tool.
  • Strong Data Integration, PostgreSQL/Oracle Database skills, Shell Scripting, Python programming, and development know-how.
  • Hands-on experience in working with and generating XML documents.
  • Good analytical and business process understanding capability.
  • Familiarized with Data Models, Source-Target Data Mapping, Transactional, and Master Data concepts.
  • Well-experienced in High level/Detailed design, Performance tuning of ETL jobs.
  • Very good communication skills, interpersonal skills, stakeholder management skills, self-motivated, quick learner, team player.
  • Exposure to After Sales Business Domain is highly preferred.
  • Experience using HP ALM, Jira for ticketing.
  • Experience in release management

 

Hyderabad
5 - 9 yrs
₹12L - ₹14L / yr
ETL
Snowflake
Data Warehouse (DWH)
Datawarehousing
Apache Spark
+4 more
Overall experience of 4 - 8 years in DW / BI technologies.
Minimum 2 years of work experience on Snowflake and Azure storage.
Minimum 3 years of development experience with an ETL tool.
Strong SQL skills in databases such as Oracle, SQL Server, DB2, and Teradata.
Good to have Hadoop and Spark experience.
Good conceptual knowledge of data warehousing and its various methodologies.
Working knowledge of scripting, such as UNIX shell.
Good presentation and communication skills.
Should be flexible with overlapping working hours.
Should be able to work independently and be proactive.
Good understanding of the Agile development cycle.
Data ToBiz
Posted by PS Dhillon
Chandigarh, NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹7L - ₹15L / yr
ETL
Amazon Web Services (AWS)
Amazon Redshift
Python
Job Responsibilities:
- Develop new data pipelines and ETL jobs for processing millions of records that scale with growth.
- Optimise pipelines to handle real-time data, batch updates, and historical data.
- Establish scalable, efficient, automated processes for complex, large-scale data analysis.
- Write high-quality code to gather and manage large data sets (both real-time and batch) from multiple sources, perform ETL, and store the data in a data warehouse.
- Manipulate and analyse complex, high-volume, high-dimensional data from varying sources using a variety of tools and data analysis techniques.
- Participate in monitoring pipeline health and optimising performance, as well as quality documentation.
- Interact with end users/clients and translate business language into technical requirements.
- Act independently to expose and resolve problems.
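The pipeline responsibilities above amount to streaming records through extract/transform/load stages without holding millions of rows in memory at once. A minimal generator-based sketch (the record shape, validation rule, and batch size are hypothetical):

```python
def extract(rows):
    """Yield raw records one at a time (stands in for a DB cursor or file reader)."""
    yield from rows

def transform(records):
    """Normalise each record; drop rows that fail validation."""
    for rec in records:
        if rec.get("amount") is None:
            continue  # skip invalid rows instead of failing the whole batch
        yield {"id": rec["id"], "amount": round(float(rec["amount"]), 2)}

def load(records, sink, batch_size=2):
    """Write records to the sink in small batches, keeping memory bounded."""
    batch = []
    for rec in records:
        batch.append(rec)
        if len(batch) >= batch_size:
            sink.append(list(batch))
            batch.clear()
    if batch:
        sink.append(batch)

warehouse = []
raw = [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": None}, {"id": 3, "amount": "7"}]
load(transform(extract(raw)), warehouse)
```

Because each stage is a generator, the same shape serves batch and (with a streaming source) near-real-time loads; tools like AWS Glue or Informatica provide hardened versions of exactly this pattern.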

Job Requirements:
- 2+ years of experience in software development and data pipeline development for enterprise analytics.
- 2+ years of working with Python, with exposure to various warehousing tools.
- In-depth experience with commercial tools such as AWS Glue, Talend, Informatica, DataStage, etc.
- Experience with relational databases such as MySQL, MS SQL Server, Oracle, etc. is a must.
- Experience with analytics and reporting tools (Tableau, Power BI, SSRS, SSAS).
- Experience with various DevOps practices, helping the client deploy and scale systems as per requirements.
- Strong verbal and written communication skills with other developers and business clients.
- Knowledge of the Logistics and/or Transportation domain is a plus.
- Hands-on experience with traditional databases and ERP systems such as Sybase and PeopleSoft.
Helical IT Solutions
Posted by Niyotee Gupta
Hyderabad
1 - 5 yrs
₹3L - ₹8L / yr
ETL
Big Data
TAC
PL/SQL
Relational Database (RDBMS)
+1 more

ETL Developer – Talend

Job Duties:

  • ETL Developer is responsible for the design and development of ETL jobs that follow standards and best practices and are maintainable, modular, and reusable.
  • Proficiency with Talend or Pentaho Data Integration / Kettle.
  • ETL Developer will analyze and review complex object and data models and the metadata repository in order to structure the processes and data for better management and more efficient access.
  • Working on multiple projects, and delegating work to junior analysts to deliver projects on time.
  • Training and mentoring junior analysts and building their proficiency in the ETL process.
  • Preparing mapping documents to extract, transform, and load data, ensuring compatibility with all tables and requirement specifications.
  • Experience in ETL system design and development with Talend / Pentaho PDI is essential.
  • Create quality rules in Talend.
  • Tune Talend / Pentaho jobs for performance optimization.
  • Write relational (SQL) and multidimensional (MDX) database queries.
  • Functional knowledge of Talend Administration Center / Pentaho Data Integrator, job servers and load-balancing setup, and all their administrative functions.
  • Develop, maintain, and enhance unit test suites to verify the accuracy of ETL processes, dimensional data, OLAP cubes, and various forms of BI content including reports, dashboards, and analytical models.
  • Exposure to the MapReduce components of Talend / Pentaho PDI.
  • Comprehensive understanding and working knowledge of data warehouse loading, tuning, and maintenance.
  • Working knowledge of relational database theory and dimensional database models.
  • Creating and deploying Talend / Pentaho custom components is an added advantage.
  • Java knowledge is nice to have.
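A data-quality rule of the kind mentioned above (null checks, key uniqueness) ultimately reduces to a SQL assertion, whichever tool runs it. A sketch using Python's built-in sqlite3, with a hypothetical `orders` table; in Talend the equivalent checks would be configured in the job rather than written by hand:

```python
import sqlite3

# Seed an in-memory table with two deliberate defects:
# a NULL amount and a duplicated order_id.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 10, 99.5), (2, 11, None), (2, 12, 40.0)],
)

def run_quality_rules(conn):
    """Return a dict of rule name -> number of violating rows (0 = rule passes)."""
    rules = {
        "amount_not_null": "SELECT COUNT(*) FROM orders WHERE amount IS NULL",
        "order_id_unique": (
            "SELECT COUNT(*) FROM (SELECT order_id FROM orders "
            "GROUP BY order_id HAVING COUNT(*) > 1)"
        ),
    }
    return {name: conn.execute(sql).fetchone()[0] for name, sql in rules.items()}

violations = run_quality_rules(conn)
```

A non-zero count would typically fail the job or route the offending rows to a reject table.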

Skills and Qualifications:

  • BE, B.Tech, or MS degree in Computer Science, Engineering, or a related subject.
  • 3+ years of experience.
  • Proficiency with Talend or Pentaho Data Integration / Kettle.
  • Ability to work independently.
  • Ability to handle a team.
  • Good written and oral communication skills.