ETL Developer

at Product-based Company

Agency job
Coimbatore
4 - 15 yrs
₹5L - ₹25L / yr (ESOP available)
Full time
Skills
ETL
Big Data
Hi Professionals,
We are looking for an ETL Developer for a reputed client in Coimbatore (permanent role).
Work Location: Coimbatore
Experience: 4+ years
Skills:
  • Strong experience in Talend or any other ETL tool (Informatica, DataStage)
  • Database preference: Teradata, Oracle, or SQL Server
  • Supporting tools: JIRA, SVN
Notice Period: Immediate to 30 days

Similar jobs

Data Analyst

at Factory Edtech

Agency job
via Qrata
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
Data Analytics
SQL
Javascript
XML
ETL
Delhi
2 - 5 yrs
₹10L - ₹12L / yr
Job Title:
Data Analyst

Job Brief:
The successful candidate will turn data into information, information into insight and insight into business decisions.

Data Analyst Job Duties
Data analyst responsibilities include conducting full-lifecycle analysis covering requirements, activities, and design. Data analysts will develop analysis and reporting capabilities. They will also monitor performance and quality-control plans to identify improvements.

About Us
We began in 2015 with an entrepreneurial vision to bring digital change to the manufacturing landscape of India. With a team of 300+, we are working towards the digital transformation of businesses in the manufacturing industry across domains like Footwear, Apparel, Textile, Accessories etc. We are backed by investors such as
Info Edge (Naukri.com), Matrix Partners, Sequoia, Water Bridge Ventures and select Industry leaders.
Today, we have enabled 2000+ Manufacturers to digitize their distribution channel.

Responsibilities
● Interpret data, analyze results using statistical techniques, and provide ongoing reports.
● Develop and implement databases, data collection systems, data analytics, and other strategies that optimize statistical efficiency and quality.
● Acquire data from primary or secondary data sources and maintain databases/data systems.
● Identify, analyze, and interpret trends or patterns in complex data sets.
● Filter and “clean” data by reviewing computer reports, printouts, and performance indicators to locate and correct code problems.
● Work with management to prioritize business and information needs.
● Locate and define new process-improvement opportunities.
Requirements
● Proven working experience as a Data Analyst or Business Data Analyst.
● Technical expertise in data models, database design and development, data mining, and segmentation techniques.
● Strong knowledge of and experience with reporting packages (Business Objects etc.), databases (SQL etc.), and programming (XML, JavaScript, or ETL frameworks).
● Knowledge of statistics and experience using statistical packages for analyzing datasets (Excel, SPSS, SAS etc.).
● Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.
● Adept at queries, report writing, and presenting findings.
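The data-cleaning and statistics duties listed above can be illustrated with a small, stdlib-only Python sketch; the field names and cleaning rules are hypothetical, not taken from this posting:

```python
import statistics

# Hypothetical raw records, as a reporting tool might export them.
raw = [
    {"region": "North", "revenue": "1200"},
    {"region": "north ", "revenue": "950"},
    {"region": "South", "revenue": ""},       # missing value -> dropped
    {"region": "South", "revenue": "1100"},
]

def clean(rows):
    """Normalize region labels and drop rows whose revenue won't parse."""
    out = []
    for row in rows:
        try:
            revenue = float(row["revenue"])
        except ValueError:
            continue  # "filter and clean": discard dirty rows
        out.append({"region": row["region"].strip().title(), "revenue": revenue})
    return out

cleaned = clean(raw)
mean_revenue = statistics.mean(r["revenue"] for r in cleaned)
```

The same normalize-validate-aggregate shape is what reporting packages and SQL pipelines implement at scale.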


Job Location
South Delhi, New Delhi
Job posted by
Blessy Fernandes

Data Engineer

at Climate Connect Digital

Founded 2010  •  Products & Services  •  20-100 employees  •  Profitable
Data Warehouse (DWH)
Informatica
ETL
Big Data
PySpark
Apache Hadoop
Apache Hive
Remote only
1 - 4 yrs
₹8L - ₹15L / yr

About Climate Connect Digital


Our team is inspired to change the world by making energy greener and more affordable. Established in 2011 in London, UK, we are now headquartered in Gurgaon, India. From unassuming beginnings, we have become a leading energy-AI software player at the vanguard of accelerating the global energy transition.


Today we are a remote-first organization, building digital tools for modern enterprises to reduce their carbon footprint and help the industry get to carbon zero.



About the Role - Data Engineer


As we start into our first strong growth phase, we are looking for a Data Engineer to build the data infrastructure to support business and product growth.

You are someone who can see projects through from beginning to end, coach others, and self-manage. We’re looking for an eager individual who can guide our data stack using AWS services with technical knowledge, communication skills, and real-world experience.


The data flowing through our platform directly contributes to decision-making by algorithms & all levels of leadership alike. If you’re passionate about building tools that enhance productivity, improve green energy, reduce waste, and improve work-life harmony for a large and rapidly growing finance user base, come join us!


Job Responsibilities

  • Iterate, build, and implement our data model, data warehousing, and data integration architecture using AWS & GCP services
  • Build solutions that ingest data from source and partner systems into our data infrastructure, where the data is transformed, intelligently curated and made available for consumption by downstream operational and analytical processes
  • Integrate data from source systems using common ETL tools or programming languages (e.g. Ruby, Python, Scala, AWS Data Pipeline, etc.)
  • Develop tailor-made strategies, concepts and solutions for the efficient handling of our growing amounts of data
  • Work iteratively with our data scientist to build up fact tables (e.g. container ship movements), dimension tables (e.g. weather data), ETL processes, and build the data catalog
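The fact/dimension work in the last bullet can be sketched in plain Python; the table contents and keys below are invented for illustration (the role's real pipelines would use AWS/GCP services):

```python
# A star-schema join in miniature: enrich fact rows (ship movements) with
# attributes from a dimension table (weather), keyed by (port, date).
weather_dim = {
    ("rotterdam", "2024-01-01"): {"wind_kts": 18},
    ("singapore", "2024-01-01"): {"wind_kts": 7},
}

fact_movements = [
    {"ship_id": "A1", "port": "rotterdam", "date": "2024-01-01"},
    {"ship_id": "B2", "port": "singapore", "date": "2024-01-01"},
]

def enrich(facts, dim):
    """Left-join facts against the dimension; unmatched keys yield None."""
    for fact in facts:
        attrs = dim.get((fact["port"], fact["date"]), {})
        yield {**fact, "wind_kts": attrs.get("wind_kts")}

rows = list(enrich(fact_movements, weather_dim))
```

In a warehouse this join runs in SQL or an ETL tool, but the shape — facts keyed into dimensions — is the same.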

Job Requirements


  • Experience designing, building, and maintaining data architecture and warehousing using AWS services
  • Authoritative in ETL optimization; designing, coding, and tuning big data processes using Apache Spark, R, Python, C#, and/or similar technologies
  • Experience managing AWS resources using Terraform
  • Experience in data engineering and infrastructure work for analytical and machine learning processes
  • Experience with ETL tooling; experience migrating ETL code from one technology to another is a benefit
  • Experience using data visualisation/dashboarding tools to QA/QC data processes
  • Independent self-starter who thrives in a fast-paced environment

What’s in it for you


We offer competitive salaries based on prevailing market rates. In addition to your introductory package, you can expect to receive the following benefits:


  • Flexible working hours and leave policy
  • Learning and development opportunities
  • Medical insurance/Term insurance, Gratuity benefits over and above the salaries
  • Access to industry and domain thought leaders.

At Climate Connect, you get a rare opportunity to join an established company at the early stages of a significant and well-backed global growth push.


We are building a remote-first organisation ingrained in the team ethos. We understand its importance for the success of any next-generation technology company. The team includes passionate and self-driven people with unconventional backgrounds, and we’re seeking a similar spirit with the right potential.

 

What it’s ​like to work with us

 

When you join us, you become part of a strong network and an accomplished legacy from leading technology and business schools worldwide, such as the Indian Institute of Technology, Oxford University, University of Cambridge, University College London, and many more.

 

We don’t believe in constrained traditional hierarchies and instead work in flexible teams with the freedom to achieve successful business outcomes. We want more people who can thrive in a fast-paced, collaborative environment. Our comprehensive support system comprises a global network of advisors and experts, providing unparalleled opportunities for learning and growth.

Job posted by
Hrushikesh Mande

SQL Engineers

at Datametica Solutions Private Limited

Founded 2013  •  Products & Services  •  100-1000 employees  •  Profitable
ETL
SQL
Data engineering
Analytics
PL/SQL
Shell Scripting
Linux/Unix
Datawarehousing
Pune, Hyderabad
4 - 10 yrs
₹5L - ₹20L / yr

We at Datametica Solutions Private Limited are looking for SQL Engineers who have a passion for the cloud and knowledge of different on-premise and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks, and the like.

Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.

Job Description

Experience : 4-10 years

Location : Pune

 


Mandatory Skills - 

  • Strong in ETL/SQL development
  • Strong Data Warehousing skills
  • Hands-on experience working with Unix/Linux
  • Development experience in Enterprise Data warehouse projects
  • Good to have: experience working with Python and shell scripting

Opportunities -

  • Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps Tools, Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume and Kafka
  • Would get chance to be part of the enterprise-grade implementation of Cloud and Big Data systems
  • Will play an active role in setting up the Modern data platform based on Cloud and Big Data
  • Would be part of teams with rich experience in various aspects of distributed systems and computing


 

About Us!

A global leader in Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud by leveraging Automation.

 

We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, Greenplum along with ETLs like Informatica, Datastage, AbInitio & others, to cloud-based data warehousing with other capabilities in data engineering, advanced analytics solutions, data management, data lake and cloud optimization.

 

Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.

 

We have our own products!

Eagle – Data warehouse Assessment & Migration Planning Product

Raven – Automated Workload Conversion Product

Pelican - Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.

 

Why join us!

Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over the years are the key factors in achieving our success.

 

 

Benefits we Provide!

Working with Highly Technical and Passionate, mission-driven people

Subsidized Meals & Snacks

Flexible Schedule

Approachable leadership

Access to various learning tools and programs

Pet Friendly

Certification Reimbursement Policy

 

Check out more about us on our website below!

www.datametica.com

Job posted by
Sayali Kachi

SQL Lead

at Datametica Solutions Private Limited

Founded 2013  •  Products & Services  •  100-1000 employees  •  Profitable
PL/SQL
MySQL
SQL server
SQL
Linux/Unix
Shell Scripting
Data modeling
Data Warehouse (DWH)
ETL
Pune, Hyderabad
6 - 12 yrs
₹11L - ₹25L / yr

We at Datametica Solutions Private Limited are looking for an SQL Lead / Architect who has a passion for the cloud and knowledge of different on-premises and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks, and the like.

Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.



Job Description :

Experience: 6+ Years

Work Location: Pune / Hyderabad



Technical Skills :

  • Good programming experience as an Oracle PL/SQL, MySQL, and SQL Server developer
  • Knowledge of database performance tuning techniques
  • Rich experience in database development
  • Experience designing and implementing business applications using the Oracle relational database management system
  • Experience developing complex database objects such as stored procedures, functions, packages, and triggers using SQL and PL/SQL
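Database objects like the triggers mentioned above can be sketched portably with Python's stdlib sqlite3; the syntax is SQLite's, not Oracle PL/SQL, and the table and trigger names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT);
CREATE TABLE audit_log (order_id INTEGER, old_amount REAL, new_amount REAL);

-- Trigger: record every amount change, as a PL/SQL audit trigger would.
CREATE TRIGGER orders_audit AFTER UPDATE OF amount ON orders
BEGIN
    INSERT INTO audit_log VALUES (OLD.id, OLD.amount, NEW.amount);
END;
""")
conn.execute("INSERT INTO orders VALUES (1, 100.0, '2024-01-01')")
conn.execute("UPDATE orders SET amount = 120.0 WHERE id = 1")
log = conn.execute("SELECT * FROM audit_log").fetchall()
```

In Oracle the same idea is written as a PL/SQL trigger with `:OLD`/`:NEW` bind rows; the firing semantics are analogous.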

Required Candidate Profile :

  • Excellent communication, interpersonal, and analytical skills, with a strong ability to drive teams
  • Analyzes data requirements and data dictionaries for moderate to complex projects
  • Leads data-model analysis discussions while collaborating with application development teams, business analysts, and data analysts during joint requirements-analysis sessions
  • Translates business requirements into technical specifications, with an emphasis on highly available and scalable global solutions
  • Stakeholder management and client engagement skills
  • Strong communication skills (written and verbal)

About Us!

A global leader in the Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud by leveraging Automation.

We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, Greenplum along with ETLs like Informatica, Datastage, AbInitio & others, to cloud-based data warehousing with other capabilities in data engineering, advanced analytics solutions, data management, data lake and cloud optimization.

Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.

We have our own products!

Eagle Data warehouse Assessment & Migration Planning Product

Raven Automated Workload Conversion Product

Pelican Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.



Why join us!

Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over the years are the key factors in achieving our success.



Benefits we Provide!

Working with Highly Technical and Passionate, mission-driven people

Subsidized Meals & Snacks

Flexible Schedule

Approachable leadership

Access to various learning tools and programs

Pet Friendly

Certification Reimbursement Policy



Check out more about us on our website below!

www.datametica.com

Job posted by
Sayali Kachi

BI Lead

at Rishabh Software

Founded 2001  •  Products & Services  •  100-1000 employees  •  Profitable
Datawarehousing
Microsoft Windows Azure
ETL
Relational Database (RDBMS)
SQL Server Integration Services (SSIS)
PowerBI
OLAP
Informatica
SQL Azure
Monitoring
Azure synapse
Azure DevOps
Vadodara, Bengaluru (Bangalore), Ahmedabad, Pune, Kolkata, Hyderabad
6 - 8 yrs
Best in industry
Technical Skills
Mandatory (minimum 4 years of working experience)
 3+ years of experience leading data warehouse implementations with technical architectures, ETL/ELT, reporting/analytic tools, and scripting (end-to-end implementation)
 Experienced in Microsoft Azure (Azure SQL Managed Instance, Data Factory, Azure Synapse, Azure Monitoring, Azure DevOps, Event Hubs, Azure AD security)
 Deep experience using BI tools such as Power BI/Tableau, QlikView/SAP BO, etc.
 Experienced in ETL tools such as SSIS and Talend/Informatica/Pentaho
 Expertise in using RDBMSs such as Oracle and SQL Server as source or target, and in online analytical processing (OLAP)
 Experienced in SQL/T-SQL: DML/DDL statements, stored procedures, functions, triggers, indexes, cursors
 Expertise in building and organizing advanced DAX calculations and SSAS cubes
 Experience in data/dimensional modelling, analysis, design, testing, development, and implementation
 Experienced in advanced data warehouse concepts using structured, semi-structured, and unstructured data
 Experienced with real-time ingestion, change data capture, and real-time & batch processing
 Good knowledge of metadata management and data governance
 Great problem-solving skills, with a strong bias for quality and design excellence
 Experienced in developing dashboards with a focus on usability, performance, flexibility, testability, and standardization
 Familiarity with development in cloud environments like AWS/Azure/Google

Good To Have (1+ years of working experience)
 Experience working with Snowflake, Amazon RedShift
Soft Skills
 Good verbal and written communication skills
 Ability to collaborate and work effectively in a team.
 Excellent analytical and logical skills
Job posted by
Baiju Sukumaran

Data Engineer

at Skill-lync

Founded 2018  •  Product  •  1000-5000 employees  •  Raised funding
ETL
Spark
Amazon Web Services (AWS)
Docker
Bengaluru (Bangalore)
3 - 8 yrs
₹10L - ₹20L / yr
Roles & Responsibilities:
• The responsibilities range from being at the vanguard of solving technical problems to venturing into uncharted areas of technology to solve complex problems.
• Implement or operate comprehensive data platform components to balance optimization of data access with batch loading and resource utilization factors, per customer requirements.
• Develop robust data platform components for sourcing, loading, transforming, and extracting data from various sources.
• Build metadata processes and frameworks.
• Create supporting documentation, such as metadata and diagrams of entity relationships, business processes, and process flows.
• Maintain standards, such as organization, structure, or nomenclature, for data platform elements such as data architectures, pipelines, frameworks, models, tools, and databases.
• Implement business rules via scripts, middleware, or other technologies.
• Map data between source systems and the data lake.
• Work independently and produce high-quality code on components related to the data platform, with creativity, responsibility, and autonomy.
• Participate in the planning, design, and implementation of features, working with small teams of engineers, product managers, and marketers.
• Demonstrate strong technical talent throughout the organization and engineer products that meet future scalability, performance, security, and quality goals while maintaining a cohesive user experience across different components and products.
• Adopt and share best practices of the software development methodology and frameworks used in the data platform.
• Show passion for continuous learning, experimenting with and applying cutting-edge technology and software paradigms, and foster this culture across the team.

Qualifications
• 2+ years of production software experience.
• Experience with cloud platforms, preferably AWS.
• Strong knowledge of popular database and data warehouse technologies and concepts from Google and Amazon, such as BigQuery, Redshift, Snowflake, etc.
• Experience in data modeling, data design, and persistence on large, complex datasets.
• Experience with object-oriented design and development (preferably Python/Java).
• Background in Spark or other Big Data technologies and non-relational databases is a plus.
• Experience with software development best practices, including unit testing and continuous delivery.
• Desire to apply agile development principles in a fast-paced startup environment.
• Strong teamwork and communication skills.
Job posted by
Vijayalaxmi Umachagi

Data Engineer

at AI enabled SAAS organisation

Data engineering
Data Engineer
AWS Lambda
Microservices
ETL
Python
MongoDB
MySQL
Cassandra
Docker
GitHub
Tableau
airflow
Bengaluru (Bangalore)
1 - 8 yrs
₹5L - ₹40L / yr
Required Skills & Experience:
• 2+ years of experience in data engineering and a strong understanding of data engineering principles using big data technologies
• Excellent programming skills in Python (mandatory)
• Expertise in relational databases (MSSQL/MySQL/Postgres) and in SQL; exposure to NoSQL databases such as Cassandra or MongoDB is a plus
• Exposure to deploying ETL pipelines with tools such as Airflow, Docker containers, and Lambda functions
• Experience with AWS cloud services such as the AWS CLI, Glue, Kinesis, etc.
• Experience using Tableau for data visualization is a plus
• Ability to demonstrate a portfolio of projects (GitHub, papers, etc.) is a plus
• Motivated, can-do attitude and a desire to make a change
• Excellent communication skills
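An ETL step packaged as a Lambda-style handler, as the skills above suggest, can be sketched in plain Python; the event shape and field names here are hypothetical, and wiring it into AWS Lambda, Glue, or Airflow is not shown:

```python
import json

def handler(event, context=None):
    """Validate incoming JSON records and report what was ingested/dropped."""
    records = json.loads(event["body"])
    valid = [r for r in records if r.get("user_id") is not None]
    return {
        "statusCode": 200,
        "body": json.dumps({"ingested": len(valid),
                            "dropped": len(records) - len(valid)}),
    }

# Simulated invocation with a hypothetical payload.
event = {"body": json.dumps([{"user_id": 1}, {"user_id": None}, {"user_id": 2}])}
result = handler(event)
```

The same validate-transform-report pattern scales from a single function to a scheduled DAG task.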
Job posted by
Kalindi Maheshwari

Big Data Architect

at Agilisium

Agency job
via Recruiting India
Big Data
Apache Spark
Spark
PySpark
ETL
Data engineering
Chennai
10 - 19 yrs
₹12L - ₹40L / yr

Job Sector: IT, Software

Job Type: Permanent

Location: Chennai

Experience: 10 - 20 Years

Salary: 12 – 40 LPA

Education: Any Graduate

Notice Period: Immediate

Key Skills: Python, Spark, AWS, SQL, PySpark

Contact at triple eight two zero nine four two double seven

 

Job Description:

Requirements

  • Minimum 12 years of experience
  • In-depth understanding of and knowledge about distributed computing with Spark
  • Deep understanding of Spark architecture and internals
  • Proven experience in data ingestion, data integration, and data analytics with Spark, preferably PySpark
  • Expertise in ETL processes, data warehousing, and data lakes
  • Hands-on Python for big data and analytics
  • Hands-on experience with the agile Scrum model is an added advantage
  • Knowledge of CI/CD and orchestration tools is desirable
  • Knowledge of AWS S3, Redshift, and Lambda is preferred
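The Spark-style chained transformations referenced above can be mimicked locally with plain Python built-ins (no Spark installation; in real PySpark these would be RDD or DataFrame operations):

```python
from collections import Counter
from functools import reduce

# Toy "log lines" standing in for one partition of a distributed dataset.
lines = [
    "INFO job started",
    "ERROR disk full",
    "INFO job finished",
    "ERROR disk full",
]

# map -> filter -> reduce, the same shape as rdd.map().filter().count()
levels = map(lambda line: line.split()[0], lines)
errors = filter(lambda level: level == "ERROR", levels)
error_count = reduce(lambda acc, _: acc + 1, errors, 0)

# Counting by key, the word-count / reduceByKey analogue.
counts = Counter(line.split()[0] for line in lines)
```

Spark distributes exactly this lazy map/filter/reduce shape across partitions, which is why fluency with it transfers directly to PySpark.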
Thanks
Job posted by
Moumita Santra

ETL Developer

at TechChefs Software

Founded 2015  •  Services  •  100-1000 employees  •  Bootstrapped
ETL
Informatica
Python
SQL
Remote, Anywhere from India
5 - 10 yrs
₹1L - ₹15L / yr

Responsibilities

  • Installing and configuring Informatica components, including high availability; managing server activations and de-activations for all environments; ensuring that all systems and procedures adhere to organizational best practices
  • Day to day administration of the Informatica Suite of services (PowerCenter, IDS, Metadata, Glossary and Analyst).
  • Informatica capacity planning and on-going monitoring (e.g. CPU, Memory, etc.) to proactively increase capacity as needed.
  • Manage backup and security of Data Integration Infrastructure.
  • Design, develop, and maintain all data warehouse, data marts, and ETL functions for the organization as a part of an infrastructure team.
  • Consult with users, management, vendors, and technicians to assess computing needs and system requirements.
  • Develop and interpret organizational goals, policies, and procedures.
  • Evaluate the organization's technology use and needs and recommend improvements, such as software upgrades.
  • Prepare and review operational reports or project progress reports.
  • Assist in the daily operations of the Architecture Team: analyzing workflow, establishing priorities, developing standards, and setting deadlines.
  • Work with vendors to manage support SLAs and influence vendor product roadmaps.
  • Provide leadership and guidance in technical meetings, define standards, and assist with/provide status updates.
  • Work with cross-functional operations teams such as systems, storage, and network to design technology stacks.

 

Preferred Qualifications

  • Minimum of 6 years' experience in an Informatica engineer and developer role
  • Minimum of 5 years' experience in an ETL environment as a developer
  • Minimum of 5 years of experience in SQL coding and understanding of databases
  • Proficiency in Python
  • Proficiency in command line troubleshooting
  • Proficiency in writing code in Perl/Shell scripting languages
  • Understanding of Java and concepts of Object-oriented programming
  • Good understanding of systems, networking, and storage
  • Strong knowledge of scalability and high availability
Job posted by
Shilpa Yadav

ETL Talend Developer

at Rivet Systems Pvt Ltd.

Founded 2011  •  Products & Services  •  20-100 employees  •  Profitable
ETL
Hadoop
Big Data
Pig
Spark
Apache Hive
Talend
Bengaluru (Bangalore)
5 - 19 yrs
₹10L - ₹30L / yr
Strong exposure in ETL / Big Data / Talend / Hadoop / Spark / Hive / Pig

To be considered for a Senior Data Engineer position, a candidate must have a proven track record of architecting data solutions on current and advanced technical platforms, and the leadership ability to guide a team delivering data-centric solutions with best practices and modern technologies in mind.

They build collaborative relationships across all levels of the business and the IT organization. They possess analytical and problem-solving skills, and can research and provide appropriate guidance for synthesizing complex information and extracting business value. They have the intellectual curiosity and ability to deliver solutions with creativity and quality, and work effectively with the business and customers to obtain business value for the requested work.

They can communicate technical results to both technical and non-technical users using effective storytelling techniques and visualizations, and have a demonstrated ability to perform high-quality, innovative work both independently and collaboratively.

Job posted by
Shobha B K