
ETL Jobs

Explore top ETL job opportunities at leading companies and startups. All jobs are posted by verified employees, who can be contacted directly below.

Senior Data Engineer

via upGrad
Founded 2015
Location: Mumbai
Experience: 3 - 7 years
Salary: 12 - 20 lacs/annum

About UpGrad: UpGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience that delivers tangible career impact to individuals at scale. UpGrad currently offers programs in Data Science, Big Data, Product Management, Digital Marketing, Entrepreneurship and Management. UpGrad was rated one of the top 10 most innovative companies in India for 2017 (https://www.fastcompany.com/most-innovative-companies/2017/sectors/india). UpGrad was co-founded by three IIT-Delhi and Parthenon alumni; the fourth co-founder is serial entrepreneur Ronnie Screwvala. UpGrad has a committed capital of 100Cr and, in its first year of operations, built the largest revenue-generating online program in India (PG Diploma in Data Science) and the online program with the largest enrolment in India (Start-up India learning program).

Position: Senior Data Engineer
Position Type: Full Time
Location: Mumbai

We are looking for an experienced Data Engineer for product and business analytics who will design and build mission-critical data pipelines in a SQL environment (a brief sketch of one such batch step follows below). As a Senior Data Engineer, you will:
- Engineer data pipelines (batch and real-time) that aid in the creation of data-driven products for our platform
- Design, develop and maintain a robust and scalable data warehouse
- Work closely alongside product managers and data scientists to bring the various datasets together and cater to our business intelligence and analytics use cases
- Design and develop solutions using data science techniques ranging from statistics and algorithms to machine learning
- Perform hands-on DevOps work to keep the data platform secure and reliable

Basic Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related engineering discipline
- 4+ years of experience with ETL, data mining, data modeling, and working with large-scale datasets
- 1+ years of experience with an object-oriented programming language such as Python, C++ or Java
- Extremely proficient at writing performant SQL over large data volumes
- Experience with map-reduce concepts
- Experience building automated analytical systems utilizing large data sets
- Familiarity with AWS technologies preferred
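For illustration only: a minimal sketch of the kind of idempotent batch step such a SQL pipeline might contain, using the standard-library sqlite3 module as a stand-in for the production warehouse. The table and column names (raw_enrolments, dw_daily_enrolments) are invented, not upGrad's.

```python
import sqlite3

def run_daily_load(conn: sqlite3.Connection, run_date: str) -> None:
    """Aggregate one day of raw events into a warehouse table (hypothetical schema)."""
    conn.execute("""
        CREATE TABLE IF NOT EXISTS dw_daily_enrolments (
            course_id TEXT, day TEXT, enrolments INTEGER,
            PRIMARY KEY (course_id, day)
        )
    """)
    # Idempotent load: clear the run date's partition before re-aggregating,
    # so reruns after a failure do not double-count.
    conn.execute("DELETE FROM dw_daily_enrolments WHERE day = ?", (run_date,))
    conn.execute("""
        INSERT INTO dw_daily_enrolments (course_id, day, enrolments)
        SELECT course_id, date(enrolled_at), COUNT(*)
        FROM raw_enrolments
        WHERE date(enrolled_at) = ?
        GROUP BY course_id, date(enrolled_at)
    """, (run_date,))
    conn.commit()
```

The delete-then-insert pattern keeps the step safe to re-run, which matters once a scheduler starts retrying failed jobs.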

Job posted by
Omkar Pradhan

Data Scientist

Founded 2018
Location: Chandigarh
Experience: 2 - 5 years
Salary: 4 - 6 lacs/annum

Job Summary: DataToBiz is an AI and data analytics services startup. We are a team of young and dynamic professionals looking for an exceptional data scientist to join our team in Chandigarh. We are solving some very exciting business challenges by applying cutting-edge machine learning and deep learning technology. Being a consulting and services startup, we are looking for quick learners who can work in a cross-functional team of consultants, SMEs from various domains, UX architects, and application development experts to deliver compelling solutions through the application of data science and machine learning. The desired candidate will have a passion for finding patterns in large datasets, an ability to quickly understand the underlying domain, and the expertise to apply machine learning tools and techniques to create insights from the data.

Responsibilities and Duties: As a Data Scientist on our team, you will be responsible for solving complex big-data problems for various clients (on-site and off-site) using data mining, statistical analysis, machine learning, and deep learning. You will:
- Understand the business need and translate it into an actionable analytical plan in consultation with the team, ensuring the plan aligns with the customer's overall strategic need.
- Understand and identify the data sources required for solving the business problem at hand.
- Explore, diagnose and resolve data discrepancies, including any ETL that may be required and missing-value and extreme-value/outlier treatment using appropriate methods (sketched below).
- Execute the project plan to meet requirements and timelines; identify success metrics and monitor them to ensure high-quality output for the client.
- Deliver production-ready models that can be deployed in the production system.
- Create relevant output documents as required: PowerPoint decks, Excel files, data frames, etc.
- Handle overall project management: create a project plan and timelines, obtain sign-off, monitor progress against the plan, and report risks, scope creep, etc. in a timely manner.
- Identify and evangelize new and upcoming analytical trends in the market within the organization.
- Implement these algorithms/methods/techniques in R/Python.

Required Experience, Skills and Qualifications:
- 3+ years of experience in data mining and statistical modeling for predictive and prescriptive enterprise analytics.
- 2+ years of working with Python and machine learning, with exposure to one or more ML/DL frameworks such as TensorFlow, Caffe, scikit-learn, MXNet or CNTK.
- Exposure to ML techniques and algorithms for different data formats, including structured data, unstructured data, and natural language.
- Experience with data retrieval and manipulation tools for various data sources, such as REST/SOAP APIs, relational (MySQL) and NoSQL databases (MongoDB), IoT data streams, cloud-based storage, and HDFS.
- Strong foundation in algorithms and data science theory.
- Strong verbal and written communication skills with other developers and business clients.
- Knowledge of the telecom and/or fintech domain is a plus.
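The missing-value and outlier treatment mentioned above often starts with simple, defensible rules applied before modelling. A minimal pandas sketch, assuming a numeric column named amount (the data here is made up):

```python
import pandas as pd

df = pd.DataFrame({"amount": [10.0, 12.0, None, 11.0, 13.0, 400.0]})

# Impute missing values with the median, which is robust to the outlier at 400.
df["amount"] = df["amount"].fillna(df["amount"].median())

# Cap extreme values at 1.5 * IQR beyond the quartiles (Tukey's fences).
q1, q3 = df["amount"].quantile([0.25, 0.75])
iqr = q3 - q1
df["amount"] = df["amount"].clip(lower=q1 - 1.5 * iqr, upper=q3 + 1.5 * iqr)
```

In practice the choice between capping, dropping, and model-based imputation depends on the domain, which is why the listing stresses understanding the underlying data first.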

Job posted by
Ankush Sharma

Data ETL Engineer

Founded 2013
Location: Chennai
Experience: 1 - 3 years
Salary: 5 - 12 lacs/annum

Responsibilities:
- Design and develop the ETL framework and data pipelines in Python 3.
- Orchestrate complex data flows from various data sources (RDBMS, REST APIs, etc.) to the data warehouse and vice versa (a rough sketch follows below).
- Develop app modules (in Django) for enhanced ETL monitoring.
- Devise technical strategies for making data seamlessly available to the BI and Data Sciences teams.
- Collaborate with engineering, marketing, sales, and finance teams across the organization and help Chargebee develop complete data solutions.
- Serve as a subject-matter expert for available data elements and analytic capabilities.

Qualifications:
- Expert programming skills with the ability to write clean and well-designed code.
- Expertise in Python, with knowledge of at least one Python web framework.
- Strong SQL knowledge and high proficiency in writing advanced SQL.
- Hands-on experience modeling relational databases.
- Experience integrating with third-party platforms is an added advantage.
- Genuine curiosity, proven problem-solving ability, and a passion for programming and data.
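As a rough illustration of the REST-to-warehouse flow this listing describes, the sketch below pulls records from a hypothetical API endpoint and upserts them into a local table. The endpoint, field names and table are all assumptions; a production pipeline would also page through results and handle rate limits.

```python
import sqlite3
import requests

def sync_invoices(conn: sqlite3.Connection) -> None:
    # Hypothetical endpoint and payload shape.
    resp = requests.get("https://api.example.com/v2/invoices", timeout=30)
    resp.raise_for_status()
    rows = [(i["id"], i["customer_id"], i["total"]) for i in resp.json()["invoices"]]
    # INSERT OR REPLACE makes the sync idempotent on the primary key.
    conn.executemany(
        "INSERT OR REPLACE INTO invoices (id, customer_id, total) VALUES (?, ?, ?)",
        rows,
    )
    conn.commit()
```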

Job posted by
Vinothini Sundaram

Data Engineer

Founded 2014
Location: NCR (Delhi | Gurgaon | Noida)
Experience: 1 - 5 years
Salary: 6 - 18 lacs/annum

We are looking for a Big Data Engineer with at least 3-5 years of experience as a Big Data Developer/Engineer:
- Experience with Big Data technologies and tools like Hadoop, Hive, MapR, Kafka, Spark, etc. (a minimal ingestion sketch follows below)
- Experience architecting data ingestion, storage, and consumption models
- Experience with NoSQL databases like MongoDB, HBase, Cassandra, etc.
- Knowledge of various ETL tools and techniques
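For context, a minimal PySpark sketch of the kind of ingestion such a role involves: reading a Kafka topic with Structured Streaming and landing it as Parquet for batch consumption. The broker address, topic and paths are assumptions, and the spark-sql-kafka connector package must be on the classpath.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-ingest").getOrCreate()

# Read a Kafka topic as a stream; keys and values arrive as raw bytes.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
    .option("subscribe", "clickstream")                # assumed topic
    .load()
    .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
)

# Land the raw stream as Parquet; the checkpoint lets the job resume after failure.
query = (
    events.writeStream.format("parquet")
    .option("path", "/data/raw/clickstream")
    .option("checkpointLocation", "/data/checkpoints/clickstream")
    .start()
)
```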

Job posted by
Raghavendra Mishra

Senior BI & ETL Developer

via Wibmo
Founded 1999
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 10 - 15 lacs/annum

Critical Tasks and Expected Contributions/Results: The role is primarily focused on the design, development and testing of ETL workflows (using Talend), as well as the batch management and error-handling processes, and on building business intelligence applications using tools like Power BI. Additional responsibilities include the documentation of technical specifications and related project artefacts.
- Gather requirements and propose possible ETL solutions for the in-house designed data warehouse
- Analyze and translate functional specifications and change requests into technical specifications
- Design and create star-schema data models (sketched below)
- Design, build and implement business intelligence solutions using Power BI
- Develop, implement and test ETL program logic
- Handle deployment and support any related issues

Key Competencies:
- A good understanding of the concepts and best practices of data warehouse ETL design, and the ability to apply these suitably to solve specific business needs
- Expert knowledge of an ETL tool such as Talend
- More than 8 years of experience designing and developing ETL work packages, with demonstrable expertise in Talend
- Knowledge of BI tools like Power BI is required
- Ability to follow functional ETL specifications and challenge business logic and schema design where appropriate, as well as manage time effectively
- Exposure to performance tuning is essential
- Good organisational skills
- A methodical and structured approach to design and development
- Good interpersonal skills
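The star-schema modelling called out above reduces to one fact table of measures keyed into small dimension tables of descriptive attributes. A sketch of the DDL (run here through sqlite3; the payments domain and table names are invented, not Wibmo's schema):

```python
import sqlite3

conn = sqlite3.connect("warehouse.db")
# Dimensions describe the "who/when"; the fact table holds measures plus
# foreign keys into each dimension -- the classic star shape.
conn.executescript("""
    CREATE TABLE IF NOT EXISTS dim_date (
        date_key INTEGER PRIMARY KEY, day TEXT, month TEXT
    );
    CREATE TABLE IF NOT EXISTS dim_merchant (
        merchant_key INTEGER PRIMARY KEY, name TEXT
    );
    CREATE TABLE IF NOT EXISTS fact_payments (
        date_key INTEGER REFERENCES dim_date(date_key),
        merchant_key INTEGER REFERENCES dim_merchant(merchant_key),
        txn_count INTEGER,
        amount REAL
    );
""")
```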

Job posted by
Shirin AM

Data Engineering Manager

via Amazon
Founded 1991
Location: Hyderabad
Experience: 9 - 14 years
Salary: 25 - 40 lacs/annum

The Last Mile Analytics & Quality Team in Hyderabad is looking for a Transportation Quality Specialist who will act as first-level support for address, geocode and static-route management in Last Mile across multiple transportation services, along with other operational issues and activities related to the transportation process and its optimization.

Your solutions will impact our customers directly! This job requires you to constantly hit the ground running, and your ability to learn quickly and work on disparate and overlapping tasks will define your success. High-impact production issues often require coordination between multiple Development, Operations and IT Support groups, so you get to experience a breadth of impact with various groups.

Primary responsibilities include troubleshooting, diagnosing and fixing static-route issues, developing monitoring solutions, performing software maintenance and configuration, implementing fixes for internally developed code, performing minor SQL queries (a hypothetical example follows below), and updating, tracking and resolving technical challenges. Responsibilities also include working alongside development on Amazon corporate and divisional software projects, updating and enhancing our current tools, automating support processes, and documenting our systems.

The ideal candidate must be detail-oriented, have superior verbal and written communication skills and strong organizational skills, be able to juggle multiple tasks at once, work independently, and maintain professionalism under pressure. You must be able to identify problems before they happen and implement solutions that detect and prevent outages. You must be able to accurately prioritize projects, make sound judgments, work to improve the customer experience, and get the right things done.

Basic qualifications:
- Bachelor's degree in Computer Science or Engineering
- Good communication skills, both verbal and written
- Demonstrated ability to work in a team
- Proficiency in MS Office, SQL, Excel

Preferred qualifications:
- Experience working with relational databases
- Experience with Linux
- Debugging and troubleshooting skills, with an enthusiastic attitude toward supporting and resolving customer problems
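The minor SQL queries and monitoring work described here typically amount to small scheduled diagnostics. A purely hypothetical sketch (the static_routes table and the seven-day staleness rule are invented for illustration):

```python
import sqlite3

def find_stale_routes(conn: sqlite3.Connection, max_age_days: int = 7) -> list:
    # Flag static routes that have not been refreshed within the allowed window.
    cur = conn.execute(
        """
        SELECT route_id, last_refreshed
        FROM static_routes
        WHERE julianday('now') - julianday(last_refreshed) > ?
        """,
        (max_age_days,),
    )
    return cur.fetchall()
```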

Job posted by
Rakesh Kumar

Database Architect

Founded 2017
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 10 - 20 lacs/annum

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include the development of Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key Responsibilities:
- Create a GRAND Data Lake and warehouse that pools the data from GRAND's different regions and stores in GCC
- Ensure source data quality measurement, enrichment, and reporting of data quality
- Manage all ETL and data-model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure

Skills Needed:
- Very strong SQL, with demonstrated RDBMS experience (e.g., SQL, Postgres, MongoDB); Unix shell scripting preferred
- Experience with UNIX and comfort working with the shell (bash or KRON preferred)
- Good understanding of data warehousing concepts and big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Align with the systems engineering team to propose and deploy the new hardware and software environments required for Hadoop, and to expand existing environments
- Work with data delivery teams to set up new Hadoop users, including setting up Linux users and setting up and testing HDFS, Hive, Pig and MapReduce access for the new users
- Cluster maintenance, including creation and removal of nodes, using tools like Ganglia, Nagios, and Cloudera Manager Enterprise
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screen Hadoop cluster job performance and do capacity planning
- Monitor Hadoop cluster connectivity and security
- File-system management and monitoring; HDFS support and maintenance
- Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required
- Define, develop, document, and maintain Hive-based ETL mappings and scripts (a brief sketch follows below)
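One common way Hive-based ETL mappings like those above are expressed today is through Spark with Hive support enabled. A sketch under assumed paths and table names (this is not GRAND's actual schema):

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("grand-data-lake")
    .enableHiveSupport()  # read from and write to the Hive metastore
    .getOrCreate()
)

# Ingest a raw file from HDFS (path assumed) and register it as a Hive table
# that downstream BI and analytics jobs can query directly.
sales = spark.read.csv("hdfs:///raw/stores/sales.csv", header=True, inferSchema=True)
sales.write.mode("overwrite").saveAsTable("lake.store_sales")
```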

Job posted by
Rahul Malani