
ETL Jobs

Explore top ETL job opportunities at leading companies and startups. All jobs are posted by verified employees, who can be contacted directly below.

Data Engineer

via Rupeek
Founded 2015
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: 12 - 22 lacs/annum

As a Data Engineer you will:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Create data tools for the analytics and data science teams that help them build and optimize our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.

Qualifications for a Data Engineer:
- 4+ years of experience in a Data Engineer role.
- Advanced working SQL knowledge and experience with relational databases and query authoring, as well as working familiarity with a variety of databases.
- Experience building and optimizing 'big data' pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills for working with unstructured datasets.
- Experience building processes that support data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' stores.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a minimal example follows below).
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift, Kinesis.
- Experience with stream-processing systems: Storm, Spark Streaming, etc.
- Experience with object-oriented/object-function scripting languages: Python, Java, C++, Scala, etc.

Rupeek tech stack: you can take a look at our tech stack here: http://stackshare.io/AmarPrabhu/rupeek
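For the workflow-management tooling named above (Azkaban, Luigi, Airflow), here is a minimal sketch of what a two-step Airflow pipeline looks like. The DAG id, task names, and the extract/load callables are illustrative placeholders, not part of this listing.

```python
# A minimal, hypothetical Airflow DAG: one extract task feeding one load task.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Placeholder: pull rows from a source system (e.g., an RDBMS or an API).
    return [{"id": 1, "amount": 42.0}]

def load(**context):
    # Placeholder: write the rows extracted upstream to the warehouse.
    rows = context["ti"].xcom_pull(task_ids="extract")
    print(f"loading {len(rows)} rows")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds
```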

Job posted by Bhavana Y.C

Senior Software Engineer - Big Data

Founded 2003
Location: Bengaluru (Bangalore)
Experience: 5 - 9 years
Salary: 20 - 27 lacs/annum

Description
At LogMeIn, we build beautifully simple and easy-to-use cloud-based, cross-platform web, mobile, and desktop software products. You probably know us by such industry-defining brand names as GoToMeeting®, GoToWebinar®, JoinMe®, LastPass®, Rescue®, and BoldChat®, as well as other award-winning products and services. LogMeIn enables customers around the world to enjoy highly productive, mobile workstyles. Currently, we're searching for a high-caliber and innovative Big Data and Analytics Engineer who will provide useful insights into the data and enable stakeholders to make data-driven decisions. You will be part of the team building the next-generation data platform on the cloud using cutting-edge technologies like Spark, Presto, Kinesis, EMR, Pig, Hive, and Redshift. If you're passionate about building high-quality software for data, thrive in an innovative, cutting-edge, startup-like environment, and consider yourself a top-notch data engineer, then LogMeIn could very well be the perfect fit for you and your career.

Responsibilities
• Responsible for analysis, design, and development activities on multiple projects; plans, organizes, and performs the technical work within the area of specialization.
• Participates in design activity with other programmers on technical aspects of the project, including functional specifications, design parameters, feature enhancements, and alternative solutions.
• Meets or exceeds standards for the quality and timeliness of the work products they create (e.g., requirements, designs, code, fixes).
• Implements, unit tests, debugs, and integrates complex code; designs, writes, conducts, and directs the development of tests to verify the functionality, accuracy, and efficiency of developed or enhanced software; analyzes results for conformance to plans and specifications, making recommendations based on the results.
• Generally provides technical direction and project management within a project/scrum team, with increasing leadership of others; provides guidance in methodology selection, project planning, and the review of work products; may serve in a part-time technical lead capacity to a limited number of junior engineers, providing immediate direction and guidance.
• Keeps abreast of technical trends and advancements within the area of specialization, incorporating these improvements where applicable; attends technical conferences as appropriate.

Requirements
• Bachelor's degree or equivalent in computer science or a related field is preferred, with 5-8 years of directly related work experience.
• Hands-on experience designing, developing, and maintaining high-volume ETL processes using big data technologies like Pig, Hive, Oozie, Spark, and MapReduce (see the sketch below).
• Solid understanding of data warehousing concepts.
• Strong understanding of dimensional data modeling.
• Experience using Hadoop, S3, MapReduce, Redshift, and RDS on AWS.
• Expertise in at least one visualization tool, such as Tableau, Quicksight, Power BI, Sisense, Birst, QlikView, or Looker.
• Experience processing real-time streaming data.
• Strong SQL and stored procedure development skills.
• Knowledge of NoSQL is an added plus.
• Knowledge of Java to leverage big data technologies is desired.
• Knowledge of a scripting language, preferably Python, or the statistical programming language R is desired.
• Working knowledge of the Linux environment.
• Knowledge of the SDLC and Agile development methodologies.
• Expertise in OOAD principles and methodologies (e.g., UML) and OS concepts.
• Extensive knowledge of and discipline in the software engineering process; experience as a technical lead on complex projects, providing guidance on design and development approach.
• Expertise implementing, unit testing, debugging, and integrating code of moderate complexity.
• Experience helping others design, write, conduct, and direct the development of tests.
• Experience independently publishing papers and blogs, and creating and presenting briefings to technical audiences.
• Strong critical thinking and problem-solving skills.
• Approaches problems with curiosity and open-mindedness.
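As a concrete illustration of the high-volume ETL work the requirements describe, here is a minimal PySpark sketch that reads raw events, aggregates them by day, and writes a partitioned output. The paths, input schema, and column names are assumptions for illustration only, not taken from the listing.

```python
# A minimal, hypothetical Spark ETL step: read raw JSON events,
# aggregate per day and event type, write warehouse-friendly Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Assumed input location and fields (event_time, event_type).
events = spark.read.json("s3://example-bucket/raw/events/")

daily_counts = (
    events
    .withColumn("day", F.to_date("event_time"))
    .groupBy("day", "event_type")
    .agg(F.count("*").alias("events"))
)

# Partitioned Parquet is a common layout for downstream BI queries.
daily_counts.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-bucket/curated/daily_event_counts/"
)
```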

Job posted by Kunal Banerjee

Data Scientist

Founded 2018
Location: Chandigarh
Experience: 2 - 5 years
Salary: 4 - 6 lacs/annum

Job Summary
DataToBiz is an AI and Data Analytics services startup. We are a team of young and dynamic professionals looking for an exceptional data scientist to join our team in Chandigarh. We are trying to solve some very exciting business challenges by applying cutting-edge Machine Learning and Deep Learning technology. Being a consulting and services startup, we are looking for quick learners who can work in a cross-functional team of consultants, SMEs from various domains, UX architects, and application development experts to deliver compelling solutions through the application of Data Science and Machine Learning. The desired candidate will have a passion for finding patterns in large datasets, the ability to quickly understand the underlying domain, and the expertise to apply Machine Learning tools and techniques to create insights from the data.

Responsibilities and Duties
As a Data Scientist on our team, you will be responsible for solving complex big-data problems for various clients (on-site and off-site) using data mining, statistical analysis, machine learning, and deep learning. You will:
- Understand the business need and translate it into an actionable analytical plan, in consultation with the team, ensuring the plan aligns with the customer's overall strategic needs.
- Understand and identify the data sources required for solving the business problem at hand.
- Explore, diagnose, and resolve any data discrepancies, including but not limited to any ETL that may be required, and missing-value and extreme-value/outlier treatment using appropriate methods.
- Execute the project plan to meet requirements and timelines; identify success metrics and monitor them to ensure high-quality output for the client.
- Deliver production-ready models that can be deployed in the production system.
- Create relevant output documents as required: PowerPoint decks, Excel files, data frames, etc.
- Handle overall project management: create a project plan and timelines, obtain sign-off, monitor progress against the plan, and report risks, scope creep, etc. in a timely manner.
- Identify and evangelize new and upcoming analytical trends in the market within the organization.
- Implement these algorithms, methods, and techniques in R or Python (a small example follows below).

Required Experience, Skills and Qualifications
- 3+ years of experience in data mining and statistical modeling for predictive and prescriptive enterprise analytics.
- 2+ years of working with Python and machine learning, with exposure to one or more ML/DL frameworks such as TensorFlow, Caffe, scikit-learn, MXNet, or CNTK.
- Exposure to ML techniques and algorithms for different data formats, including structured data, unstructured data, and natural language.
- Experience with data retrieval and manipulation tools for various data sources, such as REST/SOAP APIs, relational (MySQL) and NoSQL (MongoDB) databases, IoT data streams, cloud-based storage, and HDFS.
- Strong foundation in algorithms and data science theory.
- Strong verbal and written communication skills with other developers and business clients.
- Knowledge of the telecom and/or fintech domain is a plus.
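As a small illustration of the Python/ML tooling the listing names, here is a minimal scikit-learn sketch: fit a classifier and estimate its accuracy with cross-validation. The bundled iris dataset stands in for a real client dataset.

```python
# A minimal, hypothetical scikit-learn workflow: train a classifier
# and check generalization with 5-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)  # stand-in for a client dataset

model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```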

Job posted by Ankush Sharma

Data ETL Engineer

Founded 2013
Location: Chennai
Experience: 1 - 3 years
Salary: 5 - 12 lacs/annum

Responsibilities:
- Design and develop the ETL framework and data pipelines in Python 3 (a minimal sketch follows below).
- Orchestrate complex data flows from various data sources (RDBMS, REST APIs, etc.) to the data warehouse and vice versa.
- Develop app modules (in Django) for enhanced ETL monitoring.
- Devise technical strategies for making data seamlessly available to the BI and Data Sciences teams.
- Collaborate with engineering, marketing, sales, and finance teams across the organization and help Chargebee develop complete data solutions.
- Serve as a subject-matter expert for available data elements and analytic capabilities.

Qualification:
- Expert programming skills with the ability to write clean, well-designed code.
- Expertise in Python, with knowledge of at least one Python web framework.
- Strong SQL knowledge and high proficiency in writing advanced SQL.
- Hands-on experience modeling relational databases.
- Experience integrating with third-party platforms is an added advantage.
- Genuine curiosity, proven problem-solving ability, and a passion for programming and data.
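For a concrete picture of the Python 3 ETL work described above, here is a minimal extract-transform-load sketch. It uses sqlite3 purely as a stand-in for the real source and warehouse drivers; the table and column names are hypothetical.

```python
# A minimal, hypothetical extract-transform-load flow in Python 3.
import sqlite3  # stand-in for real source/warehouse database drivers

def extract(conn):
    # Pull rows not yet loaded from the source table.
    return conn.execute(
        "SELECT id, amount, currency FROM invoices WHERE loaded = 0"
    ).fetchall()

def transform(rows):
    # Normalize: amounts to integer cents, currency codes to upper case.
    return [(rid, int(amount * 100), cur.upper()) for rid, amount, cur in rows]

def load(conn, rows):
    conn.executemany(
        "INSERT INTO warehouse_invoices (id, amount_cents, currency) VALUES (?, ?, ?)",
        rows,
    )
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE invoices (id INTEGER, amount REAL, currency TEXT, loaded INTEGER)")
    conn.execute("CREATE TABLE warehouse_invoices (id INTEGER, amount_cents INTEGER, currency TEXT)")
    conn.execute("INSERT INTO invoices VALUES (1, 19.99, 'usd', 0)")
    load(conn, transform(extract(conn)))
    print(conn.execute("SELECT * FROM warehouse_invoices").fetchall())
```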

Job posted by Vinothini Sundaram

Data Engineer

Founded 2014
Location: NCR (Delhi | Gurgaon | Noida)
Experience: 1 - 5 years
Salary: 6 - 18 lacs/annum

We are looking for a Big Data Engineer with:
- At least 3-5 years of experience as a Big Data Developer/Engineer.
- Experience with Big Data technologies and tools like Hadoop, Hive, MapR, Kafka, Spark, etc.
- Experience architecting data ingestion, storage, and consumption models.
- Experience with NoSQL databases like MongoDB, HBase, Cassandra, etc.
- Knowledge of various ETL tools and techniques.

Job posted by Raghavendra Mishra

Senior BI & ETL Developer

via Wibmo
Founded 1999
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 10 - 15 lacs/annum

Critical Tasks and Expected Contributions/Results:
The role will be primarily focused on the design, development, and testing of ETL workflows (using Talend), as well as the batch management and error handling processes. You will also build business intelligence applications using tools like Power BI. Additional responsibilities include the documentation of technical specifications and related project artefacts.
- Gather requirements and propose possible ETL solutions for the in-house designed data warehouse.
- Analyze and translate functional specifications and change requests into technical specifications.
- Design and create star schema data models (a minimal example follows below).
- Design, build, and implement business intelligence solutions using Power BI.
- Develop, implement, and test ETL program logic.
- Handle deployment and support any related issues.

Key Competencies:
- A good understanding of the concepts and best practices of data warehouse ETL design, and the ability to apply these suitably to solve specific business needs.
- Expert knowledge of an ETL tool like Talend: more than 8 years of experience designing and developing ETL work packages, with demonstrable expertise in Talend.
- Knowledge of BI tools like Power BI is required.
- Ability to follow functional ETL specifications and to challenge business logic and schema design where appropriate, as well as manage time effectively.
- Exposure to performance tuning is essential.
- Good organisational skills, with a methodical and structured approach to design and development.
- Good interpersonal skills.
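To make the star schema requirement concrete, here is a minimal sketch of one fact table keyed to two dimension tables; the DDL is executed through sqlite3 so it runs as-is. All table and column names are illustrative, not from the listing.

```python
# A minimal, hypothetical star schema: a payment fact table referencing
# customer and date dimensions, created in an in-memory SQLite database.
import sqlite3

DDL = """
CREATE TABLE dim_customer (
    customer_key  INTEGER PRIMARY KEY,
    customer_name TEXT
);
CREATE TABLE dim_date (
    date_key      INTEGER PRIMARY KEY,
    calendar_date TEXT
);
CREATE TABLE fact_payment (
    payment_id   INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer (customer_key),
    date_key     INTEGER REFERENCES dim_date (date_key),
    amount       REAL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
print("star schema created")
```

The point of the shape: measures live in the fact table, descriptive attributes live in the dimensions, and BI tools such as Power BI join along the keys.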

Job posted by Shirin AM

Data Engineering Manager

via Amazon
Founded 1991
Location: Hyderabad
Experience: 9 - 14 years
Salary: 25 - 40 lacs/annum

The Last Mile Analytics & Quality Team in Hyderabad is looking for a Transportation Quality Specialist who will act as first-level support for address, geocode, and static route management in Last Mile, working with multiple Transportation services on operational issues and activities related to the Transportation process and its optimization.

Your solutions will impact our customers directly! This job requires you to constantly hit the ground running, and your ability to learn quickly and work on disparate and overlapping tasks will define your success. High-impact production issues often require coordination between multiple Development, Operations, and IT Support groups, so you get to experience a breadth of impact with various groups.

Primary responsibilities include troubleshooting, diagnosing, and fixing static route issues; developing monitoring solutions; performing software maintenance and configuration; implementing fixes for internally developed code; performing minor SQL queries; and updating, tracking, and resolving technical challenges. Responsibilities also include working alongside development on Amazon corporate and divisional software projects, updating and enhancing our current tools, automating support processes, and documenting our systems.

The ideal candidate must be detail-oriented, have superior verbal and written communication skills and strong organizational skills, be able to juggle multiple tasks at once, work independently, and maintain professionalism under pressure. You must be able to identify problems before they happen and implement solutions that detect and prevent outages. You must be able to accurately prioritize projects, make sound judgments, work to improve the customer experience, and get the right things done.

Basic qualifications
- Bachelor's degree in Computer Science or Engineering
- Good communication skills, both verbal and written
- Demonstrated ability to work in a team
- Proficiency in MS Office, SQL, and Excel

Preferred qualifications
- Experience working with relational databases
- Experience with Linux
- Debugging and troubleshooting skills, with an enthusiastic attitude toward supporting and resolving customer problems

Job posted by Rakesh Kumar

Database Architect

Founded 2017
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 10 - 20 lacs/annum

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include the development of big data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key Responsibilities:
- Create a GRAND Data Lake and Warehouse that pools the data from GRAND's different regions and stores in GCC.
- Ensure source data quality measurement, enrichment, and reporting of data quality.
- Manage all ETL and data model update routines.
- Integrate new data sources into the DWH.
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure.

Skills Needed:
- Very strong SQL, with demonstrated RDBMS experience (e.g., Postgres) and exposure to NoSQL stores (e.g., MongoDB); Unix shell scripting preferred.
- Experience with UNIX and comfort working with the shell (bash preferred; cron for scheduling).
- Good understanding of data warehousing concepts and big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce.
- Align with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments.
- Work with data delivery teams to set up new Hadoop users, including setting up Linux users and setting up and testing HDFS, Hive, Pig, and MapReduce access for the new users.
- Cluster maintenance, including creation and removal of nodes, using tools like Ganglia, Nagios, and Cloudera Manager Enterprise.
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
- Screen Hadoop cluster job performance and do capacity planning.
- Monitor Hadoop cluster connectivity and security.
- File system management and monitoring; HDFS support and maintenance.
- Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
- Define, develop, document, and maintain Hive-based ETL mappings and scripts (see the sketch below).
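As a small illustration of the Hive-based scripting the last bullet mentions, here is a sketch that runs a Hive query from Python, assuming the PyHive client is available and a HiveServer2 endpoint is reachable. The host, table, and column names are placeholders.

```python
# A minimal, hypothetical Hive query from Python via the PyHive client.
from pyhive import hive

# Placeholder connection details; a real deployment would also set
# authentication and a username appropriate to the cluster.
conn = hive.Connection(host="hive-server.example.internal", port=10000)
cursor = conn.cursor()

# Aggregate a fact table by store; table/columns are invented for the example.
cursor.execute("SELECT store_id, SUM(sale_amount) FROM sales GROUP BY store_id")
for store_id, total in cursor.fetchall():
    print(store_id, total)
```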

Job posted by Rahul Malani

Data Migration Developer

Location: Hyderabad
Experience: 3 - 7 years
Salary: 6 - 20 lacs/annum

We are now looking for passionate DATA MIGRATION DEVELOPERS to work at our Hyderabad site.

Role description: We are looking for data migration developers for our BSS delivery projects. Your main goal is to analyse migration data, create the migration solution, and execute the data migration. You will work as part of the migration team in cooperation with our migration architect and the BSS delivery project manager. You have a solid background in telecom BSS and experience in data migrations. You will be expected to interpret data analysis produced by business analysts, raise issues or questions, and work directly with the client on-site to resolve them. You must therefore be capable of understanding the telecom business behind a technical solution.

Requirements:
– Understanding of different data migration approaches and the capability to adapt requirements to migration tool development and utilization
– Capability to analyse the shape and health of source data
– Extraction of data from multiple legacy sources
– Building transformation code that adheres to data mappings (a small example follows below)
– Loading data to either new or existing target solutions

We appreciate:
– Deep knowledge of ETL processes and/or other migration tools
– Proven experience in high-volume data migrations in business-critical telecom systems
– Experience with telecom business support systems
– The ability to apply innovation and improvement to the data migration/support processes, and to manage multiple priorities effectively

We can offer you:
– Interesting and challenging work in a fast-growing, customer-oriented company
– An international and multicultural working environment with experienced and enthusiastic colleagues
– Plenty of opportunities to learn, grow, and progress in your career

At Qvantel we have built a young, dynamic culture where people are motivated to learn and develop themselves, are used to working both independently and in teams, have a systematic, hands-on working style and a can-do attitude. Our people are used to communicating across cultures and time zones. A sense of humor can also come in handy. Don't hesitate to ask for more information from Srinivas Bollipally, our recruitment specialist, reachable at Srinivas.bollipally@qvantel.com
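To illustrate the "transformation code that adheres to data mappings" item above, here is a minimal, hypothetical field-mapping transform. The legacy and target field names, and the per-field conversions, are invented for the example.

```python
# A minimal, hypothetical migration mapping: legacy BSS column names mapped
# to target names, each with a conversion applied during transformation.
FIELD_MAP = {
    "SUBSCR_NO": ("subscription_id", str),
    "MSISDN": ("phone_number", str),
    "ACT_DT": ("activated_on", lambda v: v[:10]),  # keep the date part only
}

def map_row(legacy_row: dict) -> dict:
    # Apply the mapping field by field, skipping columns absent in the source.
    return {
        target: convert(legacy_row[source])
        for source, (target, convert) in FIELD_MAP.items()
        if source in legacy_row
    }

print(map_row({"SUBSCR_NO": 1001, "MSISDN": "9190000000", "ACT_DT": "2019-03-01T00:00:00"}))
```

In a real migration the mapping table would come from the agreed data-mapping specification, so analysts and developers review one artifact.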

Job posted by Srinivas Bollipally

Data Integration Consultant

Founded 2010
Location: Mumbai
Experience: 2 - 4 years
Salary: 5 - 10 lacs/annum

Founded in 2010, Eccella is a boutique consulting firm specializing in Business Intelligence and Data Management. Eccella has offices in New York, London, and Mumbai, providing expert services to our international clientele. Our consulting specialty is Data Management, with specific expertise in the Informatica platform, including PowerCenter, B2B, IDQ, ILM, MDM, and PowerExchange. Our clients include both Fortune 500 companies and public-sector organizations, for which we provide expert services in architecture, design, and development. Our application development and software solutions are geared toward the small and medium business market, where we offer the little guys the tools and abilities of the larger players, allowing them to compete and effectively manage their operations.

Job posted by Rohit Ranjan

AWS System Engineer

Founded 2011
Location: Pune
Experience: 3 - 9 years
Salary: 6 - 15 lacs/annum

Bonzai is a cloud-based enterprise software platform that helps brands create, traffic, measure, and optimize their HTML5 ads. The self-serve drag-and-drop tool allows brands to build dynamic and responsive ad units for all screens, without a single line of code.

Job posted by Neha Vyas

Why apply via CutShort?
Connect with actual hiring teams and get a fast response. No third-party recruiters. No spam.