20+ ETL Jobs in Hyderabad | ETL Job openings in Hyderabad
Responsibilities include:
- Develop and maintain data validation logic in our proprietary Control Framework tool
- Actively participate in business requirement elaboration and functional design sessions to develop an understanding of our Operational teams’ analytical needs, key data flows and sources
- Assist Operational teams in the buildout of Checklists and event monitoring workflows within our Enterprise Control Framework platform
- Build effective working relationships with Operational users, Reporting and IT development teams and business partners across the organization
- Conduct interviews, generate user stories, develop scenarios and workflow analyses
- Contribute to the definition of reporting solutions that empower Operational teams to make immediate decisions as to the best course of action
- Perform some business user acceptance testing
- Provide production support and troubleshooting for existing operational dashboards
- Conduct regular demos and training of new features for the stakeholder community
Qualifications
- Bachelor’s degree or equivalent in Business, Accounting, Finance, MIS, Information Technology or related field of study
- Minimum 5 years of SQL experience required (a minimal validation-query sketch follows this list)
- Experience querying data on cloud platforms (AWS/ Azure/ Snowflake) required
- Exceptional problem solving and analytical skills, attention to detail and organization
- Able to independently troubleshoot and gather supporting evidence
- Prior experience developing within a BI reporting tool (e.g. Spotfire, Tableau, Looker, Information Builders) a plus
- Database Management and ETL development experience a plus
- Self-motivated, self-assured, and self-managed
- Able to multi-task to meet time-driven goals
- Asset management experience, including investment operation a plus
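For context on the data-validation and SQL work described above, here is a purely illustrative sketch (not part of the posting). It runs basic null and duplicate checks against an in-memory SQLite table so it is self-contained; the trades table and its columns are hypothetical.

```python
# Illustrative only: simple SQL data-validation checks (nulls, duplicates).
# Table and column names (trades, trade_id, notional) are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE trades (trade_id TEXT, notional REAL, trade_date TEXT);
    INSERT INTO trades VALUES ('T1', 100.0, '2024-01-02'),
                              ('T1', 100.0, '2024-01-02'),   -- duplicate
                              ('T2', NULL,  '2024-01-03');   -- missing notional
""")

checks = {
    "null_notional": "SELECT COUNT(*) FROM trades WHERE notional IS NULL",
    "duplicate_trade_ids": """
        SELECT COUNT(*) FROM (
            SELECT trade_id FROM trades GROUP BY trade_id HAVING COUNT(*) > 1
        )
    """,
}

for name, sql in checks.items():
    failures = conn.execute(sql).fetchone()[0]
    status = "PASS" if failures == 0 else f"FAIL ({failures} rows)"
    print(f"{name}: {status}")
```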
Job Summary:
We are looking for an experienced ETL Tester with 5 to 7 years of experience and expertise
in the banking domain. The candidate will be responsible for testing ETL processes,
ensuring data quality, and validating data flows in large-scale projects.
Key Responsibilities:
Design and execute ETL test cases, ensuring data integrity and accuracy.
Perform data validation using complex SQL queries (see the reconciliation sketch after this list).
Collaborate with business analysts to define testing requirements.
Track defects and work with developers to resolve issues.
Conduct performance testing for ETL processes.
Banking Domain Knowledge: Strong understanding of banking processes such as
payments, loans, credit, accounts, and regulatory reporting.
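As an illustration of the SQL validation work an ETL tester automates, the following is a minimal source-to-target reconciliation sketch (not from the posting). It uses in-memory SQLite so it runs standalone; the payments_stg and payments tables are hypothetical.

```python
# Illustrative only: reconcile row counts and amount totals between staging and target.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE payments_stg (payment_id TEXT, amount REAL);
    CREATE TABLE payments     (payment_id TEXT, amount REAL);
    INSERT INTO payments_stg VALUES ('P1', 50.0), ('P2', 75.5), ('P3', 20.0);
    INSERT INTO payments     VALUES ('P1', 50.0), ('P2', 75.5);  -- P3 missing in target
""")

recon_sql = """
    SELECT
        (SELECT COUNT(*)              FROM payments_stg) AS src_rows,
        (SELECT COUNT(*)              FROM payments)     AS tgt_rows,
        (SELECT ROUND(SUM(amount), 2) FROM payments_stg) AS src_amount,
        (SELECT ROUND(SUM(amount), 2) FROM payments)     AS tgt_amount
"""
src_rows, tgt_rows, src_amt, tgt_amt = conn.execute(recon_sql).fetchone()
print(f"rows   : source={src_rows}, target={tgt_rows}, match={src_rows == tgt_rows}")
print(f"amounts: source={src_amt}, target={tgt_amt}, match={src_amt == tgt_amt}")
```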
Required Skills:
5-7 years of ETL testing experience.
Strong SQL skills and experience with ETL tools (Informatica, SSIS, etc.).
Knowledge of banking domain processes.
Experience with test management tools (JIRA, HP ALM).
Familiarity with Agile methodologies.
Location – Hyderabad
Technical Skills:
- Ability to understand and translate business requirements into design.
- Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
- Experience in creating ETL jobs using Python/PySpark.
- Proficiency in creating AWS Lambda functions for event-based jobs.
- Knowledge of automating ETL processes using AWS Step Functions (a minimal Lambda-to-Step Functions sketch follows this list).
- Competence in building data warehouses and loading data into them.
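A minimal sketch of the Lambda and Step Functions pattern referenced above, purely illustrative: the state machine ARN and S3 layout are hypothetical placeholders, and the handler assumes it runs inside AWS Lambda with the appropriate IAM permissions.

```python
# Illustrative only: a Lambda handler that reacts to an S3 object-created event
# and starts a Step Functions state machine for downstream ETL.
import json
import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:etl-pipeline"  # placeholder

def handler(event, context):
    # One S3 PUT event can carry several records; start one execution per object.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"statusCode": 200}
```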
Responsibilities:
- Understand business requirements and translate them into design.
- Assess AWS infrastructure needs for development work.
- Develop ETL jobs using Python/PySpark to meet requirements (see the PySpark sketch after this list).
- Implement AWS Lambda for event-based tasks.
- Automate ETL processes using AWS Step Functions.
- Build data warehouses and manage data loading.
- Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
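To make the PySpark responsibility above concrete, here is a minimal batch ETL sketch (not from the posting): read raw CSV from S3, apply a simple transformation, and write partitioned Parquet back out. Bucket names, paths, and columns are hypothetical.

```python
# Illustrative only: the shape of a small PySpark batch ETL job.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.csv("s3://example-raw-bucket/orders/", header=True, inferSchema=True)

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
)

(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))
```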
Daily and monthly responsibilities
- Review and coordinate with business application teams on data delivery requirements.
- Develop estimation and proposed delivery schedules in coordination with development team.
- Develop sourcing and data delivery designs.
- Review data model, metadata and delivery criteria for solution.
- Review and coordinate with team on test criteria and performance of testing.
- Contribute to the design, development and completion of project deliverables.
- Complete in-depth data analysis and contribute to strategic efforts.
- Develop a complete understanding of how we manage data, with a focus on improving how data is sourced and managed across multiple business areas.
Basic Qualifications
- Bachelor’s degree.
- 5+ years of data analysis working with business data initiatives.
- Knowledge of Structured Query Language (SQL) and its use in data access and analysis (a brief SQL-and-pandas sketch follows this list).
- Proficient in data management, including data analysis capabilities.
- Excellent verbal and written communication skills and high attention to detail.
- Experience with Python.
- Presentation skills in demonstrating system design and data analysis solutions.
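As a small illustration of combining SQL access with Python for the analysis work described above (not from the posting), the sketch below uses an in-memory SQLite table so it runs standalone; the deliveries table and its columns are hypothetical.

```python
# Illustrative only: SQL access plus pandas for a quick per-area summary.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE deliveries (business_area TEXT, records_delivered INTEGER, on_time INTEGER);
    INSERT INTO deliveries VALUES ('Finance', 1200, 1), ('Finance', 900, 0),
                                  ('Operations', 1500, 1), ('Operations', 700, 1);
""")

df = pd.read_sql("SELECT * FROM deliveries", conn)

# Total records delivered and on-time rate per business area.
summary = (df.groupby("business_area")
             .agg(total_records=("records_delivered", "sum"),
                  on_time_rate=("on_time", "mean")))
print(summary)
```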
Invesco is seeking a skilled React.js Developer with a strong background in data analytics to join our team. The Engineer within the Enterprise Risk function will manage complex data engineering, analysis, and programming tasks in support of the execution of the Enterprise Risk and Internal Audit activities and projects as defined by the Enterprise Risk Analytics leadership team. The Engineer will manage and streamline data extraction, transformation, and load processes, design use cases, analyze data, and apply creativity and data science techniques to deliver effective and efficient solutions enabling greater risk intelligence and insights.
Key Responsibilities / Duties:
- Acquire, transform, and manage data supporting risk and control-related activities
- Design, build, and maintain data analytics and data science tools supporting individual tasks and projects.
- Maintain programming code, software packages, and databases to support ongoing analytics activities.
- Exercise judgment in determining the application of data analytics for business processes, including the identification and location of data sources.
- Actively discover data analytics capabilities within the firm and leverage such capabilities where possible.
- Introduce new data analytics-related tools and technologies to business partners and consumers.
- Support business partners', data analysts', and consumers' understanding of product logic and related system processes.
- Share learnings and alternative techniques with other Engineers and Data Analysts.
- Perform other duties and special projects as assigned by the Enterprise Risk Analytics leadership team and other leaders across Enterprise Risk and Internal Audit.
- Actively contribute to developing a culture of innovation within the department and risk and control awareness throughout the organization.
- Keep Head of Data Science & Engineering and departmental leadership informed of activities.
Work Experience / Knowledge:
- Minimum 5 years of experience in data analysis, data management, software development or data-related risk management roles; previous experience in programming will be considered.
- Experience within the financial services sector preferred
Skills / Other Personal Attributes Required:
- Proactive problem solver with the ability to identify, design, and deliver solutions based on high level objectives and detailed requirements. Thoroughly identify and investigate issues and determine the appropriate course of action
- Excellent code development skills supporting data analytics and visualization, preferably with programming languages and libraries such as JavaScript, R, Python, NextUI, ReactUI, Shiny, Streamlit, Plotly and D3.js
- Excellent with data extraction, transformation, and load processes (ETL), structured query language (SQL), and database management.
- Strong self-learner to continuously develop new technical capabilities to become more efficient and productive
- Experience using end user data analytics software such as Tableau, PowerBI, SAS, and Excel a plus
- Proficient with various disciplines of data science, such as machine learning, natural language processing and network science
- Experience with end-to-end implementation of web applications on AWS, including using services such as EC2, EKS, RDS, ALB, Route53 and Airflow a plus
- Self-starter and motivated; must be able to work without frequent direct supervision
- Proficient with Microsoft Office applications (Teams, Outlook, MS Word, Excel, PowerPoint etc.)
- Excellent analytical and problem-solving skills
- Strong project management and administrative skills
- Strong written and verbal communication skills (English)
- Results-oriented and comfortable as an individual contributor on specific assignments
- Ability to handle confidential information and communicate clearly with individuals at a wide range of levels on sensitive matters
- Demonstrated ability to work in a diverse, cross-functional, and international environment
- Adaptable and comfortable with changing environment
- Demonstrates high professional ethics
Formal Education:
- Bachelor’s degree in Information Systems, Computer Science, Computer Engineering, Mathematics, Statistics, or Data Science preferred. Other technology or quantitative finance-related degrees considered depending upon relevant experience
- MBA, Master’s degree in Information Systems, Computer Science, Mathematics, Statistics, Data Science, or Finance a plus
License / Registration / Certification:
- Professional data science, analytics, business intelligence, visualization, and/or development designation (e.g., CAP, CBIP, or other relevant product-specific certificates) or actively pursuing the completion of such designation preferred
- Other certifications considered depending on domain and relevant experience
Working Conditions:
Potential for up to 10% domestic and international travel
An 8-year-old IT services and consulting company.
CTC Budget: 35-50LPA
Location: Hyderabad/Bangalore
Experience: 8+ Years
Company Overview:
An 8-year-old IT services and consulting company based in Hyderabad that helps clients maximize product value while delivering rapid, incremental innovation. The company has extensive SaaS M&A experience, including 20+ closed transactions on both the buy and sell sides, has over 100 employees, and is looking to grow the team.
● Work with, learn from, and contribute to a diverse, collaborative development team
● Use plenty of PHP, Go, JavaScript, MySQL, PostgreSQL, ElasticSearch, Redshift, AWS Services and other technologies
● Build efficient and reusable abstractions and systems
● Create robust cloud-based systems used by students globally at scale
● Experiment with cutting edge technologies and contribute to the company’s product roadmap
● Deliver data at scale to bring value to clients
Requirements
You will need:
● Experience working with a server side language in a full-stack environment
● Experience with various database technologies (relational, NoSQL, document-oriented, etc) and query concepts in high performance environments
● Experience in one of these areas: React, Backbone
● Understanding of ETL concepts and processes
● Great knowledge of design patterns and back end architecture best practices
● Sound knowledge of Front End basics like JavaScript, HTML, CSS
● Experience with TDD, automated testing
● 12+ years’ experience as a developer
● Experience with Git or Mercurial
● Fluent written & spoken English
It would be great if you have:
● B.Sc or M.Sc degree in Software Engineering, Computer Science or similar
● Experience and/or interest in API Design
● Experience with Symfony and/or Doctrine
● Experience with Go and Microservices
● Experience with message queues e.g. SQS, Kafka, Kinesis, RabbitMQ
● Experience working with a modern Big Data stack
● Contributed to open source projects
● Experience working in an Agile environment
We are looking for a Senior Data Engineer to join the Customer Innovation team, who will be responsible for acquiring, transforming, and integrating customer data onto our Data Activation Platform from customers’ clinical, claims, and other data sources. You will work closely with customers to build data and analytics solutions to support their business needs, and be the engine that powers the partnership that we build with them by delivering high-fidelity data assets.
In this role, you will work closely with our Product Managers, Data Scientists, and Software Engineers to build the solution architecture that will support customer objectives. You'll work with some of the brightest minds in the industry, work with one of the richest healthcare data sets in the world, use cutting-edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate is someone that
- Has healthcare experience and is passionate about helping heal people,
- Loves working with data,
- Has an obsessive focus on data quality,
- Is comfortable with ambiguity and making decisions based on available data and reasonable assumptions,
- Has strong data interrogation and analysis skills,
- Defaults to written communication and delivers clean documentation, and,
- Enjoys working with customers and problem solving for them.
A day in the life at Innovaccer:
- Define the end-to-end solution architecture for projects by mapping customers’ business and technical requirements against the suite of Innovaccer products and Solutions.
- Measure and communicate impact to our customers.
- Enable customers to activate data themselves using SQL, BI tools, or APIs so they can answer their questions quickly.
What You Need:
- 4+ years of experience in a Data Engineering role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- 4+ years of experience working with relational databases like Snowflake, Redshift, or Postgres.
- Intermediate to advanced level SQL programming skills.
- Data Analytics and Visualization (using tools like PowerBI)
- The ability to engage with both the business and technical teams of a client - to document and explain technical problems or concepts in a clear and concise way.
- Ability to work in a fast-paced and agile environment.
- Easily adapt and learn new things whether it’s a new library, framework, process, or visual design concept.
What we offer:
- Industry certifications: We want you to be a subject matter expert in what you do. So, whether it’s our product or our domain, we’ll help you dive in and get certified.
- Quarterly rewards and recognition programs: We foster learning and encourage people to take risks. We recognize and reward your hard work.
- Health benefits: We cover health insurance for you and your loved ones.
- Sabbatical policy: We encourage people to take time off and rejuvenate, learn new skills, and pursue their interests so they can generate new ideas with Innovaccer.
- Pet-friendly office and open floor plan: No boring cubicles.
Data Integration, Preparation & Management Solutions
Technical Project Manager
Experience: 10 to 15 Years
Responsibilities
- Participate in meetings with US client teams to understand business requirements and project milestones. Provide technical suggestions and strategy in project planning.
- Prepare Project Plan and track the project progress of deliverables and Milestones. Report the status to higher management regularly.
- Monitor Budget and Timeline at regular Intervals and plan proactive steps to control them.
- Identifies opportunities for improving business prospects with the client.
- Help team in resolving technical and functional aspects across project life cycle.
- Planning and execution of training, mentoring, and coaching team members.
- Hold regular project reviews with internal & client stakeholders.
- Prepare organized and informative presentations whenever required.
- Resolve and/or escalate issues as and when it is imperative.
Required Skill
- At least 2 years of experience managing a large technology engineering team or an L2/L3 technology support team, with overall experience of at least 10 years in the IT industry.
- Experience in BI Tools like MicroStrategy, OBIEE, Tableau or ETL Tools like Informatica, Talend, DataStage, and SSIS.
- Experience in Datawarehouse and BI Reporting projects as developer or Lead or Architect
- Experience in generating reports on SLAs, KPIs, metrics and reporting to senior leadership.
- Experience in attracting and hiring excellent talent; ability to mentor and bring out the best in the team. Flexible with working hours based on service requirements.
- Demonstrate organizational and leadership skills
- Excellent communication (written and spoken) skills
- Experience or knowledge of tools such as JIRA, Confluence, ServiceNow, Splunk, and other monitoring tools. Experience with ETL and DWH concepts and with L2/L3 support.
Job Description
Mandatory Requirements
- Experience in AWS Glue
- Experience in Apache Parquet
- Proficient in AWS S3 and data lake
- Knowledge of Snowflake
- Understanding of file-based ingestion best practices (a small Parquet ingestion sketch follows this list)
- Scripting languages: Python and PySpark
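The following is a purely illustrative file-based ingestion sketch in the spirit of the requirements above: read a delimited extract, tag it with a load date, and land it in the data lake as partitioned Parquet. Paths and columns are hypothetical, and s3fs/pyarrow are assumed to be installed.

```python
# Illustrative only: minimal file-based ingestion to partitioned Parquet.
from datetime import date
import pandas as pd

df = pd.read_csv("incoming/accounts_20240102.csv")  # placeholder extract path

# Tag every row with the load date so late-arriving files stay traceable.
df["load_date"] = date.today().isoformat()

# Partitioned Parquet keeps downstream engines (Athena, Snowflake external
# tables, Spark) reading only the slices they need.
df.to_parquet(
    "s3://example-data-lake/raw/accounts/",
    engine="pyarrow",
    partition_cols=["load_date"],
    index=False,
)
```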
CORE RESPONSIBILITIES
- Create and manage cloud resources in AWS.
- Ingest data from sources that expose it through different technologies such as RDBMS, flat files, streams, and time-series data from various proprietary systems, and implement ingestion and processing with Big Data technologies.
- Process and transform data using technologies such as Spark and cloud services; understand your part of the business logic and implement it in the language supported by the base data platform.
- Develop automated data quality checks to ensure the right data enters the platform and to verify the results of calculations (a minimal check is sketched after this list).
- Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible.
- Identify and interpret trends and patterns from complex data sets.
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.
- Key participant in regular Scrum ceremonies with the agile teams.
- Proficient at developing queries, writing reports and presenting findings.
- Mentor junior members and bring best industry practices.
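A minimal sketch of the automated data quality gate mentioned above (illustrative only): fail the pipeline if required columns are missing or contain nulls. The input path and column list are hypothetical.

```python
# Illustrative only: a simple PySpark data quality gate.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.parquet("s3://example-data-lake/raw/accounts/")  # placeholder path

required_columns = ["account_id", "balance", "load_date"]

# Schema check: every required column must be present.
missing = [c for c in required_columns if c not in df.columns]
if missing:
    raise ValueError(f"Schema check failed, missing columns: {missing}")

# Null check: count nulls per required column in one pass.
null_counts = df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in required_columns]
).first().asDict()

bad = {c: n for c, n in null_counts.items() if n and n > 0}
if bad:
    raise ValueError(f"Null check failed: {bad}")

print("All data quality checks passed")
```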
QUALIFICATIONS
- 5-7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
- Strong background in math, statistics, computer science, data science or related discipline
- Advanced knowledge of at least one language: Java, Scala, Python, C#
- Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
- Proficient with:
  - Data mining/programming tools (e.g. SAS, SQL, R, Python)
  - Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
  - Data visualization (e.g. Tableau, Looker, MicroStrategy)
- Comfortable learning about and deploying new technologies and tools.
- Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines.
- Good written and oral communication skills and ability to present results to non-technical audiences.
- Knowledge of business intelligence and analytical tools, technologies and techniques.
Familiarity and experience in the following is a plus:
- AWS certification
- Spark Streaming
- Kafka Streaming / Kafka Connect
- ELK Stack
- Cassandra / MongoDB
- CI/CD: Jenkins, GitLab, Jira, Confluence and other related tools
Greetings! We are looking for a Product Manager for our data modernization product. The candidate should have good knowledge of Big Data/DWH and strong stakeholder management and presentation skills.
Client of People First Consultants
Key skills: Python, NumPy, Pandas, SQL, ETL
Roles and Responsibilities:
- The work will involve the development of workflows triggered by events from other systems
- Design, develop, test, and deliver software solutions in the FX Derivatives group
- Analyse requirements for the solutions they deliver, to ensure that they provide the right solution
- Develop easy-to-use documentation for the frameworks and tools developed, for adoption by other teams
- Familiarity with event-driven programming in Python (see the small dispatcher sketch after this list)
- Must have unit testing and debugging skills
- Good problem solving and analytical skills
- Python packages such as NumPy, Scikit learn
- Testing and debugging applications.
- Developing back-end components.
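To illustrate the event-driven Python style referenced above, here is a toy dispatcher sketch (not from the posting): handlers register for event types and run when an upstream system emits an event. Event names and payloads are hypothetical.

```python
# Illustrative only: a minimal event-driven dispatcher.
from collections import defaultdict
from typing import Callable, Dict, List

_handlers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

def on(event_type: str):
    """Decorator that registers a handler for a given event type."""
    def register(func: Callable[[dict], None]):
        _handlers[event_type].append(func)
        return func
    return register

def dispatch(event_type: str, payload: dict) -> None:
    """Invoke every handler registered for the event type."""
    for handler in _handlers[event_type]:
        handler(payload)

@on("fx_trade_booked")
def recalculate_exposure(payload: dict) -> None:
    print(f"recalculating exposure for trade {payload['trade_id']}")

if __name__ == "__main__":
    # Simulate an event arriving from another system.
    dispatch("fx_trade_booked", {"trade_id": "FX-1001", "notional": 1_000_000})
```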
We at Datametica Solutions Private Limited are looking for an SQL Lead / Architect who has a passion for the cloud, with knowledge of different on-premises and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks, and the like.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.
Job Description :
Experience: 6+ Years
Work Location: Pune / Hyderabad
Technical Skills :
- Good programming experience as an Oracle PL/SQL, MySQL, and SQL Server Developer
- Knowledge of database performance tuning techniques
- Rich experience in database development
- Experience in designing and implementing business applications using the Oracle Relational Database Management System
- Experience in developing complex database objects like Stored Procedures, Functions, Packages and Triggers using SQL and PL/SQL
Required Candidate Profile :
- Excellent communication, interpersonal, analytical skills and strong ability to drive teams
- Analyze data requirements and the data dictionary for moderate to complex projects
- Lead data model-related analysis discussions while collaborating with Application Development teams, Business Analysts, and Data Analysts during joint requirements analysis sessions
- Translate business requirements into technical specifications with an emphasis on highly available and scalable global solutions
- Stakeholder management and client engagement skills
- Strong communication skills (written and verbal)
About Us!
A global leader in the Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud by leveraging Automation.
We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, Greenplum along with ETLs like Informatica, Datastage, AbInitio & others, to cloud-based data warehousing with other capabilities in data engineering, advanced analytics solutions, data management, data lake and cloud optimization.
Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.
We have our own products!
Eagle – Data Warehouse Assessment & Migration Planning Product
Raven – Automated Workload Conversion Product
Pelican – Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.
Why join us!
Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over these years are the key factors in achieving our success.
Benefits we Provide!
Working with Highly Technical and Passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy
Check out more about us on our website below!
www.datametica.com
We at Datametica Solutions Private Limited are looking for SQL Engineers who have a passion for the cloud, with knowledge of different on-premises and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks, and the like.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.
Job Description
Experience : 4-10 years
Location : Pune
Mandatory Skills -
- Strong in ETL/SQL development
- Strong Data Warehousing skills
- Hands-on experience working with Unix/Linux
- Development experience in Enterprise Data warehouse projects
- Good to have experience working with Python, shell scripting
Opportunities -
- Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps Tools, Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume and Kafka
- Would get chance to be part of the enterprise-grade implementation of Cloud and Big Data systems
- Will play an active role in setting up the Modern data platform based on Cloud and Big Data
- Would be part of teams with rich experience in various aspects of distributed systems and computing
About Us!
A global Leader in the Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud by leveraging Automation.
We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, Greenplum along with ETLs like Informatica, Datastage, AbInitio & others, to cloud-based data warehousing with other capabilities in data engineering, advanced analytics solutions, data management, data lake and cloud optimization.
Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.
We have our own products!
Eagle – Data warehouse Assessment & Migration Planning Product
Raven – Automated Workload Conversion Product
Pelican - Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.
Why join us!
Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over these years are the key factors in achieving our success.
Benefits we Provide!
Working with Highly Technical and Passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy
Check out more about us on our website below!
www.datametica.com
As a Software Engineer at Quince, you'll be responsible for designing and building scalable infrastructure and applications to solve some very interesting problems in the logistics and finance tech space.
Responsibilities:
- Design and architect solutions on the cloud for various business problems with workflow efficiency and scale in mind.
- Be on the forefront with the business team to learn, understand, identify and translate function requirements into technical opportunities.
- End-to-end ownership - from scoping the requirements to the final delivery of the solution with keen eye to details and quality.
- Build and improve logistics components for this innovative M2C supply-chain model.
- Build and maintain scalable ETL data pipelines.
Requirements:
- Bachelors/Masters/PhD in Computer Science or closely related subject.
- 1-5 years of experience in building software solutions.
- Good at data structures and their practical applications.
- Proficiency in Kotlin, Java, Python.
- Experience in deploying and maintaining applications on cloud platforms (Ex: AWS, Google cloud).
- Proficiency with SQL and databases, relational and/or NoSQL (Snowflake, AWS Redshift, etc.).
- Experience with messaging middleware such as Kafka is good to have.
- Expertise in designing and implementing enterprise scale database (OLTP) and Data warehouse solutions.
- Hands on experience in implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics) and big data processing using Azure Databricks and Azure HDInsight.
- Expert in writing T-SQL programming for complex stored procedures, functions, views and query optimization.
- Should be aware of database development for both on-premises and SaaS applications using SQL Server and PostgreSQL.
- Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
- Experience and expertise in building machine learning models using logistic and linear regression, decision tree, and random forest algorithms (a brief scikit-learn sketch follows this list).
- PolyBase queries for exporting and importing data into Azure Data Lake.
- Building data models both tabular and multidimensional using SQL Server data tools.
- Writing data preparation, cleaning and processing steps using Python, Scala, and R.
- Programming experience using Python libraries NumPy, Pandas and Matplotlib.
- Implementing NoSQL databases and writing queries using Cypher.
- Designing end user visualizations using Power BI, QlikView and Tableau.
- Experience working with all versions of SQL Server 2005/2008/2008R2/2012/2014/2016/2017/2019
- Experience using the expression languages MDX and DAX.
- Experience in migrating on-premise SQL server database to Microsoft Azure.
- Hands on experience in using Azure blob storage, Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2.
- Performance tuning complex SQL queries, hands on experience using SQL Extended events.
- Data modeling using Power BI for Adhoc reporting.
- Raw data load automation using T-SQL and SSIS
- Expert in migrating existing on-premise database to SQL Azure.
- Experience in using U-SQL for Azure Data Lake Analytics.
- Hands on experience in generating SSRS reports using MDX.
- Experience in designing predictive models using Python and SQL Server.
- Developing machine learning models using Azure Databricks and SQL Server
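As a small illustration of the model-building step mentioned above (illustrative only, not from the posting), the sketch below compares logistic regression and a random forest on a synthetic binary target; the features and data are made up for the example.

```python
# Illustrative only: logistic regression vs. random forest on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))                  # four synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic binary target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

for model in (LogisticRegression(max_iter=1000), RandomForestClassifier(n_estimators=100)):
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{type(model).__name__}: accuracy={acc:.3f}")
```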
• 5+ years’ experience developing and maintaining modern ingestion pipelines using technologies like Spark, Apache NiFi, etc.
• 2+ years’ experience with Healthcare Payors (focusing on Membership, Enrollment, Eligibility, Claims, Clinical)
• Hands-on experience on AWS Cloud and its native components like S3, Athena, Redshift & Jupyter Notebooks
• Strong in Spark Scala & Python pipelines (ETL & Streaming); a minimal streaming sketch follows this list
• Strong experience in metadata management tools like AWS Glue
• Strong experience in coding with languages like Java, Python
• Worked on designing ETL & streaming pipelines in Spark Scala / Python
• Good experience in requirements gathering, design & development
• Working with cross-functional teams to meet strategic goals
• Experience in high-volume data environments
• Critical thinking and excellent verbal and written communication skills
• Strong problem-solving and analytical abilities; should be able to work and deliver individually
• Good-to-have: AWS Developer certification, Scala coding experience, Postman API, and Apache Airflow or similar scheduler experience
• Nice-to-have: experience in healthcare messaging standards like HL7, CCDA, EDI, 834, 835, 837
• Good communication skills
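The skeleton below is purely illustrative of the streaming pipelines listed above: a PySpark Structured Streaming job reading from Kafka and landing Parquet. Broker addresses, the topic, and paths are hypothetical, and the Kafka source assumes the spark-sql-kafka package is on the classpath.

```python
# Illustrative only: Kafka -> Parquet with PySpark Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-stream").getOrCreate()

events = (spark.readStream
               .format("kafka")
               .option("kafka.bootstrap.servers", "broker1:9092")
               .option("subscribe", "claims-events")
               .load())

# Kafka delivers key/value as binary; cast the value to string for downstream parsing.
parsed = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("event_time"),
)

query = (parsed.writeStream
               .format("parquet")
               .option("path", "s3://example-curated-bucket/claims-events/")
               .option("checkpointLocation", "s3://example-curated-bucket/checkpoints/claims-events/")
               .outputMode("append")
               .start())

query.awaitTermination()
```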
Informatica PowerCenter, Informatica Change Data Capture, Azure SQL, Azure Data Lake
Job Description
- Minimum of 15 years of experience with Informatica ETL and database technologies.
- Experience with Azure database technologies including Azure SQL Server and Azure Data Lake; exposure to change data capture technology.
- Lead and guide development of an Informatica-based ETL architecture.
- Develop solutions in a highly demanding environment and provide hands-on guidance to other team members.
- Head complex ETL requirements and design.
- Implement an Informatica-based ETL solution fulfilling stringent performance requirements.
- Collaborate with product development teams and senior designers to develop architectural requirements.
- Assess requirements for completeness and accuracy, and determine whether they are actionable for the ETL team.
- Conduct impact assessments and determine the size of effort based on requirements.
- Develop full SDLC project plans to implement the ETL solution and identify resource requirements.
- Perform an active, leading role in shaping and enhancing the overall ETL Informatica architecture; identify, recommend and implement ETL process and architecture improvements.
- Assist with and verify the design of the solution and the production of all design-phase deliverables.
- Manage the build phase and quality-assure code to ensure it fulfils requirements and adheres to the ETL architecture.
Minimum 2 years of work experience on Snowflake and Azure storage.
Minimum 3 years of development experience with ETL tools.
Strong SQL database skills in other databases like Oracle, SQL Server, DB2 and Teradata
Good to have Hadoop and Spark experience.
Good conceptual knowledge on Data-Warehouse and various methodologies.
Working knowledge of scripting, such as UNIX shell scripting.
Good Presentation and communication skills.
Should be flexible with the overlapping working hours.
Should be able to work independently and be proactive.
Good understanding of Agile development cycle.
ETL Developer – Talend
Job Duties:
- ETL Developer is responsible for Design and Development of ETL Jobs which follow standards,
best practices and are maintainable, modular and reusable.
- Proficiency with Talend or Pentaho Data Integration / Kettle.
- ETL Developer will analyze and review complex object and data models and the metadata
repository in order to structure the processes and data for better management and efficient
access.
- Working on multiple projects, and delegating work to Junior Analysts to deliver projects on time.
- Training and mentoring Junior Analysts and building their proficiency in the ETL process.
- Preparing mapping document to extract, transform, and load data ensuring compatibility with
all tables and requirement specifications.
- Experience in ETL system design and development with Talend / Pentaho PDI is essential.
- Create quality rules in Talend.
- Tune Talend / Pentaho jobs for performance optimization.
- Write relational (SQL) and multidimensional (MDX) database queries.
- Functional Knowledge of Talend Administration Center/ Pentaho data integrator, Job Servers &
Load balancing setup, and all its administrative functions.
- Develop, maintain, and enhance unit test suites to verify the accuracy of ETL processes,
dimensional data, OLAP cubes and various forms of BI content including reports, dashboards,
and analytical models.
- Exposure to MapReduce components of Talend / Pentaho PDI.
- Comprehensive understanding and working knowledge in Data Warehouse loading, tuning, and
maintenance.
- Working knowledge of relational database theory and dimensional database models.
- Creating and deploying Talend / Pentaho custom components is an add-on advantage.
- Nice to have: Java knowledge.
Skills and Qualification:
- BE, B.Tech / MS Degree in Computer Science, Engineering or a related subject.
- 3+ years of experience.
- Proficiency with Talend or Pentaho Data Integration / Kettle.
- Ability to work independently.
- Ability to handle a team.
- Good written and oral communication skills.