


ETL Jobs in Bangalore (Bengaluru)

Explore top ETL job opportunities in Bangalore (Bengaluru) at top companies and startups. All jobs are added by verified employees, who can be contacted directly.

Database Architect

Founded 2017 · Products and services · 6-50 employees · Raised funding
Skills: ETL, Data Warehouse (DWH), DWH Cloud, Hadoop, Apache Hive, Spark, MongoDB, PostgreSQL
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 10 - 20 lacs/annum

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include the development of Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key Responsibilities:
- Create a GRAND Data Lake and Warehouse that pools the data from GRAND's different regions and stores in GCC
- Ensure source data quality measurement, enrichment, and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the cloud DWH (AWS/Azure/Google) and infrastructure

Skills Needed:
- Very strong SQL. Demonstrated experience with RDBMS; Unix shell scripting preferred (e.g., SQL, Postgres, MongoDB)
- Experience with UNIX and comfort working with the shell (bash or Korn preferred)
- Good understanding of data warehousing concepts. Big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments
- Working with data delivery teams to set up new Hadoop users, including setting up Linux users and setting up and testing HDFS, Hive, Pig, and MapReduce access for the new users
- Cluster maintenance, as well as creation and removal of nodes, using tools such as Ganglia, Nagios, and Cloudera Manager Enterprise
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screening Hadoop cluster job performance and capacity planning
- Monitoring Hadoop cluster connectivity and security
- File system management and monitoring
- HDFS support and maintenance
- Collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades when required
- Defining, developing, documenting, and maintaining Hive-based ETL mappings and scripts
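The core ETL cycle the responsibilities above describe (extract raw source data, enforce data quality, load conditioned rows into a warehouse table) can be sketched in miniature. This is an illustrative toy only, not the posting's actual stack (which is Hadoop/Hive/Spark on a cloud DWH); every table, column, and value here is invented.

```python
import sqlite3

# Extract: read raw store transactions from a source system
# (SQLite stands in for the real RDBMS; names are hypothetical).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE sales_raw (store TEXT, amount TEXT)")
src.executemany("INSERT INTO sales_raw VALUES (?, ?)",
                [("BLR-01", "120.50"), ("BLR-02", "80"), ("BLR-01", "bad")])

# Transform: a simple data-quality rule -- drop rows whose amount
# is not numeric (a real pipeline would also report such rows).
clean = []
for store, amount in src.execute("SELECT store, amount FROM sales_raw"):
    try:
        clean.append((store, float(amount)))
    except ValueError:
        pass  # route to a data-quality report in practice

# Load: write the conditioned rows into a warehouse fact table.
dwh = sqlite3.connect(":memory:")
dwh.execute("CREATE TABLE fact_sales (store TEXT, amount REAL)")
dwh.executemany("INSERT INTO fact_sales VALUES (?, ?)", clean)
total = dwh.execute("SELECT SUM(amount) FROM fact_sales").fetchone()[0]
```

The same extract/validate/load shape applies whether the source is Postgres and the target is Hive; only the connectors and scale change.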

Job posted by Rahul Malani

Senior Software Engineers - Data Integration

Founded 2015 · Product · 6-50 employees · Raised funding
Skills: ESB, HL7, Database Design, Enterprise Data Warehouse (EDW), ETL, SSIS
Location: Bengaluru (Bangalore)
Experience: 5 - 8 years
Salary: 7 - 18 lacs/annum

You will be working in the Cancer Information Data Trust (CIDT) business unit within Inspirata, India, at Bangalore. Inspirata is creating the most innovative cancer information big data platform, with associated analytics that render data to various portals. The Data Integrator will play a critical role in bringing accurate and conditioned data to the CIDT and Digital Pathology products.

Your Role
The Sr. Software Engineer – Data Integration is an active and influential member of the CIDT team and is responsible for the development of Inspirata's Data Integration Bus solution. The ideal candidate is a highly experienced engineer with exceptional skills, an aptitude for integration technologies, and the drive and desire to push boundaries to solve complex problems.

Your Responsibilities
• Design and develop a middleware bus technology
• Have the developed product acquire data residing in different sources of medical data elements and provide a unified and trusted view of the data
• Experience with Integration/ESB tools such as Orion Rhapsody, Corepoint, Informatica, TIBCO, Talend, or similar
• Deep understanding of data management best practices, including ETL, data modeling, data management, file management, and reference data management
• At least working knowledge of integrating unstructured data (NoSQL) along with structured data (RDBMS)
• Ability to identify multiple approaches to problem solving and recommend the best solution

Requirements
• B.E. / M.Tech in Computer Science or a related field, with 5 - 8 years of product development experience in ETL, data modeling, SQL, and data analysis, preferably in the health care industry
• Knowledge of HL7 protocols and PHI regulations
• Good communication skills
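The "unified and trusted view" responsibility — combining records about the same patient that arrive from a structured source (RDBMS rows) and an unstructured one (NoSQL-style documents) — can be sketched as a merge keyed by a shared patient ID. All field names below are invented for illustration; a real integration bus (for example, an ESB parsing HL7 messages) involves far more, including validation, PHI handling, and conflict resolution.

```python
# Structured source: rows from a relational system (field names hypothetical).
structured = [{"patient_id": "P1", "name": "A. Rao", "dob": "1970-01-01"}]

# Unstructured source: NoSQL-style documents about the same patients.
documents = [{"patient_id": "P1", "pathology_note": "specimen received"}]

# Merge both feeds into one record per patient, keyed by patient_id.
unified = {}
for rec in structured + documents:
    unified.setdefault(rec["patient_id"], {}).update(rec)
```

Later-arriving fields overwrite earlier ones here; a production integration layer would instead apply explicit precedence and trust rules per source.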

Job posted by Anuj Seth
Why apply via CutShort?
Connect with actual hiring teams and get their fast response. No third-party recruiters. No spam.