3+ Data flow Jobs in India
Apply to 3+ Data flow Jobs on CutShort.io. Find your next job, effortlessly. Browse Data flow Jobs and apply today!
Delhi
5 - 8 yrs
₹11L - ₹15L / yr
SQL
Software Testing (QA)
Data modeling
ETL
Data extraction
Review Criteria
- Strong Data / ETL Test Engineer
- 5+ years of overall experience in Testing/QA
- 3+ years of hands-on end-to-end data testing/ETL testing experience, covering data extraction, transformation, load validation, and reconciliation across BI / Analytics / Data Warehouse / e-Governance platforms
- Must have strong understanding and hands-on exposure to Data Warehouse concepts and processes, including fact & dimension tables, data models, data flows, aggregations, and historical data handling.
- Must have experience in Data Migration Testing, including validation of completeness, correctness, reconciliation, and post-migration verification from legacy platforms to upgraded/cloud-based data platforms.
- Must have independently handled test strategy, test planning, test case design, execution, defect management, and regression cycles for ETL and BI testing
- Hands-on experience with ETL tools and SQL-based data validation is mandatory (working knowledge of or hands-on exposure to Redshift and/or Qlik will be considered sufficient); a minimal example of such a validation check follows this list
- Must hold a Bachelor's degree (B.E./B.Tech) or a Master's degree (M.Tech/MCA/M.Sc/MS)
- Must demonstrate strong verbal and written communication skills, with the ability to work closely with business stakeholders, data teams, and QA leadership
- Mandatory Location: Candidate must be based within Delhi NCR (100 km radius)
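As a concrete illustration of the SQL-based data validation called for above, here is a minimal reconciliation sketch in Python. The table names (legacy_orders, dw_orders), columns, and the SQLite stand-in connection are hypothetical placeholders for a real warehouse such as Redshift; the checks (row counts and an aggregate sum) are the kind typically compared between source and target.

```python
# Minimal sketch of a SQL-based reconciliation check between a legacy
# source table and a migrated target table. Table names, columns, and
# the SQLite connection are hypothetical placeholders.
import sqlite3  # stand-in for any DB-API driver (e.g., a Redshift driver)

RECON_QUERIES = {
    # Completeness: row counts must match between source and target.
    "row_count": "SELECT COUNT(*) FROM {table}",
    # Correctness: an aggregate over a measure column should also match.
    "amount_sum": "SELECT COALESCE(SUM(amount), 0) FROM {table}",
}

def reconcile(conn, source_table, target_table):
    """Run each check against both tables and report mismatches."""
    failures = []
    cur = conn.cursor()
    for name, template in RECON_QUERIES.items():
        src = cur.execute(template.format(table=source_table)).fetchone()[0]
        tgt = cur.execute(template.format(table=target_table)).fetchone()[0]
        if src != tgt:
            failures.append(f"{name}: source={src} target={tgt}")
    return failures

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")  # replace with the real warehouse
    conn.executescript(
        "CREATE TABLE legacy_orders (id INT, amount REAL);"
        "CREATE TABLE dw_orders (id INT, amount REAL);"
        "INSERT INTO legacy_orders VALUES (1, 10.0), (2, 20.0);"
        "INSERT INTO dw_orders VALUES (1, 10.0), (2, 20.0);"
    )
    print(reconcile(conn, "legacy_orders", "dw_orders") or "all checks passed")
```

In practice the same pattern extends to per-column checksums, null-rate checks, and dimension-key lookups.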
Preferred
- Relevant certifications such as ISTQB or Data Analytics / BI certifications (Power BI, Snowflake, AWS, etc.)
Job Specific Criteria
- CV Attachment is mandatory
- Do you have experience working on government projects or with government organizations? If so, briefly describe the project.
- Do you have experience working on enterprise projects or with enterprise companies? If so, briefly describe the project.
- Please name 2 key projects you have worked on related to Data Warehouse / ETL / BI testing.
- Do you hold any ISTQB or Data / BI certifications (Power BI, Snowflake, AWS, etc.)?
- Do you have exposure to BI tools such as Qlik?
- Are you willing to relocate to Delhi and why (if not from Delhi)?
- Are you available for a face-to-face round?
Role & Responsibilities
- 5+ years' experience in Data Testing across BI/Analytics platforms, including at least 2 large-scale enterprise Data Warehouse / Analytics / e-Governance programs
- Proficiency in ETL, Data Warehouse, and BI report/dashboard validation, including test planning, data reconciliation, acceptance criteria definition, defect triage, and regression cycle management for BI landscapes
- Proficient in analyzing business requirements and data mapping specifications (BRDs, data models, source-to-target mappings, user stories, reports, dashboards) to define comprehensive test scenarios and test cases
- Ability to review high-level and low-level data models, ETL workflows, API specifications, and business logic implementations to design test strategies that ensure the accuracy, consistency, and performance of data pipelines
- Ability to test and validate data migrated from an old platform to an upgraded platform, ensuring the completeness and correctness of the migration; a hashing-based sketch of such a check follows this list
- Experience conducting tests of migrated data and defining test scenarios and test cases for them
- Experience with BI tools such as Qlik, plus ETL platforms, Data Lake platforms, and Redshift, to support end-to-end validation
- Exposure to Data Quality, Metadata Management, and Data Governance frameworks, ensuring KPIs, metrics, and dashboards align with business expectations
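The migration-validation responsibilities above can be made concrete with a small hashing-based completeness and correctness check. This is a sketch under assumed inputs: rows are plain dicts keyed by a hypothetical primary key "id", and a real implementation would stream rows from the legacy and upgraded platforms rather than use in-memory lists.

```python
# Hypothetical sketch of post-migration row-level verification: compare
# primary-key sets and per-row hashes between the old and new platforms.
import hashlib

def row_hash(row):
    """Stable hash of a row's values, ordered by column name."""
    payload = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_migration(source_rows, target_rows, key="id"):
    src = {r[key]: row_hash(r) for r in source_rows}
    tgt = {r[key]: row_hash(r) for r in target_rows}
    return {
        "missing_in_target": sorted(src.keys() - tgt.keys()),   # completeness
        "extra_in_target": sorted(tgt.keys() - src.keys()),     # completeness
        "changed": sorted(k for k in src.keys() & tgt.keys()
                          if src[k] != tgt[k]),                 # correctness
    }

# Example: one row dropped and one value altered during migration.
old = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}, {"id": 3, "amount": 30}]
new = [{"id": 1, "amount": 10}, {"id": 2, "amount": 25}]
print(verify_migration(old, new))
# {'missing_in_target': [3], 'extra_in_target': [], 'changed': [2]}
```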
Visakhapatnam
3 - 5 yrs
₹2L - ₹4L / yr
Hadoop
Apache Sqoop
Apache Hive
Apache Spark
Apache Pig
Position: Data Engineer
Duration: Full Time
Location: Visakhapatnam, Bangalore, Chennai
Years of Experience: 3+ years
Job Description:
- 3+ years of experience working as a Data Engineer, with a thorough understanding of data frameworks that collect, manage, transform, and store data to derive business insights.
- Strong communication skills (written and verbal), along with being a good team player.
- 2+ years of experience within the Big Data ecosystem (Hadoop, Sqoop, Hive, Spark, Pig, etc.); a brief PySpark sketch follows this list.
- 2+ years of strong experience with SQL and Python (Data Engineering focused).
- Experience with GCP Data Services such as BigQuery, Dataflow, Dataproc, etc. is preferred and an added advantage.
- Any prior experience in ETL tools such as DataStage, Informatica, DBT, Talend, etc. is an added advantage for the role.
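To give a flavor of the Spark-plus-Python work described above, here is a brief, illustrative PySpark snippet. It assumes pyspark is installed, and the dataset and column names (order_date, product, amount) are made up for the example; the aggregate-and-write pattern is typical of transform-and-store pipeline steps.

```python
# Illustrative PySpark job: load, aggregate, and (optionally) store.
# Dataset and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

orders = spark.createDataFrame(
    [("2024-01-01", "A", 10.0), ("2024-01-01", "B", 5.0), ("2024-01-02", "A", 7.5)],
    ["order_date", "product", "amount"],
)

# Typical transform step: aggregate revenue per day.
daily = orders.groupBy("order_date").agg(F.sum("amount").alias("revenue"))
daily.show()

# Typical store step (commented out; path is a placeholder):
# daily.write.mode("overwrite").partitionBy("order_date").parquet("/tmp/daily_revenue")

spark.stop()
```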
Noida
2 - 5 yrs
₹20L - ₹35L / yr
Linux/Unix
Hadoop
Apache Spark
Responsibilities
● Create and maintain optimal data pipeline architecture.
● Assemble large, complex data sets that meet functional and non-functional business requirements.
● Build and optimize ‘big data’ pipelines, architectures, and data sets.
● Maintain, organize, and automate data processes for various use cases.
● Identify trends, perform follow-up analysis, and prepare visualizations.
● Create daily, weekly, and monthly reports of product KPIs; a brief reporting sketch follows this list.
● Create informative, actionable, and repeatable reporting that highlights relevant business trends and opportunities for improvement.
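As a hedged sketch of the daily/weekly/monthly KPI reporting mentioned above, the following pandas snippet resamples an event table at three frequencies. The "signups" KPI and the dates are illustrative placeholders, not the actual product metrics.

```python
# Sketch: one KPI rolled up at daily, weekly, and monthly granularity.
import pandas as pd

events = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-09", "2024-02-01"]),
    "signups": [12, 7, 20, 15],
}).set_index("ts")

for freq, label in [("D", "daily"), ("W", "weekly"), ("MS", "monthly")]:
    report = events["signups"].resample(freq).sum()
    print(f"--- {label} signups ---")
    print(report[report > 0])  # drop empty periods for readability
```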
Required Skills And Experience:
● 2-5 years of work experience in data analytics, including analyzing large data sets.
● B.Tech in Mathematics/Computer Science.
● Strong analytical, quantitative, and data interpretation skills.
● Hands-on experience with Python, Apache Spark, Hadoop, NoSQL databases (MongoDB preferred), and Linux is a must.
● Experience building and optimizing ‘big data’ pipelines, architectures, and data sets.
● Experience with Google Cloud data analytics products such as BigQuery, Dataflow, Dataproc, etc. (or similar cloud-based platforms); a minimal Beam pipeline sketch follows this list.
● Experience working within a Linux computing environment and using command-line tools, including knowledge of shell/Python scripting for automating common tasks.
● Previous experience working at startups and/or in fast-paced environments.
● Previous experience as a data engineer or in a similar role.
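The Dataflow experience mentioned above generally means writing Apache Beam pipelines. Below is a minimal, assumption-laden sketch: the data is inlined with beam.Create, the per-user sum is an arbitrary example, and the pipeline runs on the local DirectRunner rather than the Dataflow service (switching runners is a pipeline-options change, not a code rewrite).

```python
# Minimal Apache Beam pipeline of the kind that runs on Google Cloud
# Dataflow; here it executes locally. Records are hypothetical.
import apache_beam as beam

with beam.Pipeline() as pipeline:  # pipeline runs on exiting the block
    (
        pipeline
        | "Create events" >> beam.Create([
            {"user": "a", "amount": 10},
            {"user": "b", "amount": 5},
            {"user": "a", "amount": 7},
        ])
        | "Key by user" >> beam.Map(lambda e: (e["user"], e["amount"]))
        | "Sum per user" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)  # ('a', 17) and ('b', 5)
    )
```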
