Data flow Jobs in Delhi, NCR and Gurgaon


Apply to 2+ Data flow Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest Data flow Job opportunities across top companies like Google, Amazon & Adobe.

Tata Consultancy Services
Agency job
via Risk Resources LLP hyd by Jhansi Padiy
Chennai, Bengaluru (Bangalore), Hyderabad, Pune, Delhi
3.5 - 10 yrs
₹6L - ₹25L / yr
Google Cloud Platform (GCP)
BigQuery
Data flow
Cloud Storage

GCP Data Engineer Job Description

A GCP Data Engineer is responsible for designing, building, and maintaining data pipelines, architectures, and systems on Google Cloud Platform (GCP). Here's a breakdown of the job:


Key Responsibilities

- Data Pipeline Development: Design and develop data pipelines using GCP services like Dataflow, BigQuery, and Cloud Pub/Sub (a minimal pipeline sketch follows this list).

- Data Architecture: Design and implement data architectures to meet business requirements.

- Data Processing: Process and analyze large datasets using GCP services like BigQuery and Cloud Dataflow.

- Data Integration: Integrate data from various sources using GCP services like Cloud Data Fusion and Cloud Pub/Sub.

- Data Quality: Ensure data quality and integrity by implementing data validation and data cleansing processes.
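
To make the pipeline-development responsibility concrete, below is a minimal sketch of an Apache Beam pipeline of the kind typically run on Dataflow: it reads messages from a Pub/Sub topic, parses them as JSON, and appends rows to a BigQuery table. The project, topic, table, and schema names are placeholders for illustration, not details from this posting.

    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    # Placeholder resource names -- substitute real project, topic, and table IDs.
    TOPIC = "projects/my-project/topics/events"
    TABLE = "my-project:analytics.events"

    # streaming=True because Pub/Sub is an unbounded source; on GCP this would
    # run with the DataflowRunner rather than the default DirectRunner.
    options = PipelineOptions(streaming=True)

    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic=TOPIC)
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                TABLE,
                schema="user_id:STRING,event:STRING,ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )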


Essential Skills

- GCP Services: Strong understanding of GCP services like BigQuery, Cloud Dataflow, Cloud Pub/Sub, and Cloud Storage.

- Data Engineering: Experience with data engineering concepts, including data pipelines, data warehousing, and data integration.

- Programming Languages: Proficiency in programming languages like Python, Java, or Scala.

- Data Processing: Knowledge of data processing frameworks like Apache Beam and Apache Spark.

- Data Analysis: Understanding of data analysis concepts and tools like SQL and data visualization.
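
As an illustration of the BigQuery and SQL skills listed above, the sketch below runs a simple aggregation through the google-cloud-bigquery Python client. The dataset and table names are invented for the example, and it assumes application-default credentials are already configured.

    from google.cloud import bigquery

    client = bigquery.Client()

    # Placeholder table: count today's events by type, most frequent first.
    query = """
        SELECT event, COUNT(*) AS event_count
        FROM `my-project.analytics.events`
        WHERE DATE(ts) = CURRENT_DATE()
        GROUP BY event
        ORDER BY event_count DESC
    """
    for row in client.query(query).result():
        print(row.event, row.event_count)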

Top startup of India - News App

Agency job
via Jobdost by Sathish Kumar
Noida
2 - 5 yrs
₹20L - ₹35L / yr
Linux/Unix
Python
Hadoop
Apache Spark
MongoDB
+4 more
Responsibilities
● Create and maintain optimal data pipeline architecture.
● Assemble large, complex data sets that meet functional and non-functional business requirements.
● Build and optimize ‘big data’ pipelines, architectures, and data sets.
● Maintain, organize, and automate data processes for various use cases.
● Identify trends, run follow-up analyses, and prepare visualizations.
● Create daily, weekly, and monthly reports of product KPIs (a minimal aggregation sketch follows this list).
● Create informative, actionable, and repeatable reporting that highlights relevant business trends and opportunities for improvement.
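
The daily KPI reporting described above could look something like the PySpark sketch below; the input path, column names, and metrics are assumptions for illustration, not details from this posting.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily-kpi-report").getOrCreate()

    # Placeholder input path and schema: newline-delimited JSON event logs.
    events = spark.read.json("gs://example-bucket/events/*.json")

    # Aggregate per-day, per-event counts and unique users.
    daily_kpis = (
        events
        .withColumn("date", F.to_date("ts"))
        .groupBy("date", "event")
        .agg(
            F.count("*").alias("event_count"),
            F.countDistinct("user_id").alias("unique_users"),
        )
        .orderBy("date", "event")
    )

    daily_kpis.show()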

Required Skills and Experience:
● 2-5 years of work experience in data analytics, including analyzing large data sets.
● BTech in Mathematics or Computer Science.
● Strong analytical, quantitative, and data interpretation skills.
● Hands-on experience with Python, Apache Spark, Hadoop, NoSQL databases (MongoDB preferred), and Linux is a must (a MongoDB aggregation sketch follows this list).
● Experience building and optimizing ‘big data’ pipelines, architectures, and data sets.
● Experience with Google Cloud data analytics products such as BigQuery, Dataflow, and Dataproc (or similar cloud-based platforms).
● Experience working within a Linux computing environment and with command-line tools, including shell/Python scripting for automating common tasks.
● Previous experience working at startups and/or in fast-paced environments.
● Previous experience as a data engineer or in a similar role.
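
For the MongoDB requirement above, a minimal aggregation with the pymongo client might look like the sketch below; the connection string, database, collection, and field names are placeholders, not details from this posting.

    from pymongo import MongoClient

    # Placeholder connection string and collection names.
    client = MongoClient("mongodb://localhost:27017")
    events = client["appdb"]["events"]

    # Count events per type within one category, most frequent first.
    pipeline = [
        {"$match": {"category": "news"}},
        {"$group": {"_id": "$event_type", "count": {"$sum": 1}}},
        {"$sort": {"count": -1}},
    ]
    for doc in events.aggregate(pipeline):
        print(doc["_id"], doc["count"])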