Responsibilities
1. Design & develop detailed ETL specifications based on business requirements, using SSIS & Oracle procedures.
2. Apply a good understanding of Data Warehousing & Dimensional Modelling concepts.
3. Develop, test and implement enterprise data movement (ETL and CDC) solutions.
4. Design, develop & test SSRS reports & dashboards based on large volumes of data sourced from multidimensional analysis cubes.
5. Translate business needs into technical solutions by working with business and IT stakeholders.
6. Assist in effort estimation and planning for SSIS & SSRS implementations.
7. Troubleshoot & provide solutions, based on priority, for any issues reported by the client in ETL loads and reports.
8. Optimize ETL processing time using industry best practices & standards so that reports are delivered in a timely manner.
9. Support documentation of the solution design, user guide and troubleshooting guide wherever required.
10. Design & develop ETL packages using SSIS.

Required Skills
1. Minimum 3+ years of working experience in ETL and Data Warehousing using MSBI (SSRS, SSIS, Power BI).
2. Extensive experience with SSIS and Power BI for developing management dashboards.
3. Experience with large-scale data warehouse implementation projects using MSBI & SQL Server.
4. Extensive experience in design and development of SSIS packages & automation of their processing.
5. Extensive experience in design and development of SSRS visualizations & reports drawing on multiple sources.
6. Experience with SQL / PL/SQL scripting using SQL Server.
7. Extensive experience optimizing ETL performance to match the assigned execution windows.
8. Experience handling large volumes of data coming from disparate systems.
9. Good to have: working experience in the Insurance Brokerage domain.
1. .NET (C#) – OMAR web application (portal), developed in C# .NET and running on the web server (IIS).
2. SQL Server – loads data from various systems (Informatica, CTM, SNOW, HDI) and maintains it in the OMAR database.
3. SSIS / INFA – load data from various source systems and generate the reports (Excel format, CSV format, or auto-email with URL).
4. Control-M / SQL Agent – Control-M (CTM) is used to schedule the SSIS/INFA jobs; SQL Agent schedules the SQL jobs (stored procedures).
5. Windows services – used to:
- Copy the active-job CSV file from the Control-M shared drive path to the ETL shared drive path.
- Send OMAR job-failure notifications for some of the jobs when the data in OMAR is not up to date.
- Run the OMAR balancing service when the EDW claim load completes. This service runs every 5 minutes and keeps checking for claim-load completion; on completion, the balancing SSIS packages are triggered through the Windows service, which is written in C# .NET.
The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include developing Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key Responsibilities:
- Create a GRAND Data Lake and Warehouse that pools the data from GRAND's different regions and stores in the GCC.
- Ensure source data quality measurement, enrichment and reporting of data quality.
- Manage all ETL and data model update routines.
- Integrate new data sources into the DWH.
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure.

Skills Needed:
- Very strong SQL. Demonstrated experience with RDBMS and NoSQL databases (e.g., Postgres, MongoDB); Unix shell scripting preferred.
- Experience with UNIX and comfort working with the shell (bash or ksh preferred).
- Good understanding of data warehousing concepts and Big Data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce.
- Align with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments.
- Work with data delivery teams to set up new Hadoop users. This includes setting up Linux users and setting up and testing HDFS, Hive, Pig and MapReduce access for the new users.
- Cluster maintenance, including creation and removal of nodes, using tools like Ganglia, Nagios and Cloudera Manager Enterprise.
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
- Screen Hadoop cluster job performance and do capacity planning.
- Monitor Hadoop cluster connectivity and security.
- File system management and monitoring.
- HDFS support and maintenance.
- Collaborate with application teams to install operating system and Hadoop updates, patches and version upgrades when required.
- Define, develop, document and maintain Hive-based ETL mappings and scripts.
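As a hedged illustration of the last bullet, a Hive-based ETL mapping is often maintained as a parameterised HiveQL script. The sketch below renders one such statement as text; the table names (`staging.sales_raw`, `dwh.sales_fact`), columns and partition key are hypothetical examples, not taken from the posting.

```python
# Illustrative only: render a partitioned Hive INSERT for a daily ETL load.
# All table, column and partition names are hypothetical stand-ins.
def render_hive_mapping(src_table, dst_table, load_date):
    """Build a HiveQL statement that moves one day's rows from a
    staging table into a date-partitioned warehouse table."""
    return (
        f"INSERT OVERWRITE TABLE {dst_table} PARTITION (load_dt='{load_date}')\n"
        f"SELECT store_id, product_id, CAST(amount AS DECIMAL(18,2)) AS amount\n"
        f"FROM {src_table}\n"
        f"WHERE load_dt = '{load_date}';"
    )

hql = render_hive_mapping("staging.sales_raw", "dwh.sales_fact", "2024-01-31")
print(hql)
```

Keeping the mapping as generated, versioned text like this makes it easy to document and re-run the same load for any date, which is the kind of maintainable Hive ETL script the bullet describes.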
About the Company: The company focuses on high-end, niche technical skills, predominantly in SOA, Integration, Cloud, Mobile and Big Data. THBS provides software services to enterprise clients across different industry verticals through a combination of offshore and onsite services. The company has its development center in Bangalore (India), with Sales & Operations offices in Bristol (UK), New Jersey (US), Dubai, Spain, Germany, Austria and Paris to reach the local markets. The company is CMMI Level 3 certified, and has also been certified as compliant with British Security Standard 7799 (now termed ISO 27001).

About the requirement: Looking for Fiorano consultants for the Bangalore location with 4-8 years of Fiorano ESB experience. The candidate should have an end-to-end understanding of requirements, along with good communication skills.