
Apache Sqoop Jobs in Bangalore (Bengaluru)

Explore top Apache Sqoop job opportunities in Bangalore (Bengaluru) at leading companies and startups. All jobs are added by verified employees who can be contacted directly.

Data Engineering/Data Analytics
at Leading StartUp Focused On Employee Growth

Founded 2015
Products and services
20-100 employees
via Qrata
Location: Bengaluru (Bangalore)
Experience: 2 - 6 years
Salary: ₹25,00,000 - ₹45,00,000 (Best in industry)

Job posted by Blessy Fernandes

Data Engineer
at Capgemini

Founded 2018
Products and services
10-50 employees
via Nu-Pie
Location: Bengaluru (Bangalore)
Experience: 4 - 8 years
Salary: ₹7,00,000 - ₹15,00,000 (Best in industry)

Job posted by Sanjay Biswakarma

Big Data Engineer
at Codemonk

Founded 2018
Products and services
20-100 employees
Location: Bengaluru (Bangalore)
Experience: 2 - 4 years
Salary: ₹5,00,000 - ₹9,00,000 (Best in industry)

Job posted by susan P

Big Data Engineer
at Rakuten

Founded 2005
Products and services
200-500 employees
via zyoin
Location: Remote, Bengaluru (Bangalore)
Experience: 5 - 8 years
Salary: ₹20,00,000 - ₹38,00,000 (Best in industry)

Job posted by RAKESH RANJAN

Hadoop Developer
at MNC

Founded 2015
Products and services
50-200 employees
Location: Bengaluru (Bangalore)
Experience: 3 - 6 years
Salary: ₹6,00,000 - ₹15,00,000 (Best in industry)

Job posted by Harpreet kour

Lead Data Engineer
at Lymbyc

Founded 2012
Products and services
100-500 employees
via Lymbyc
Location: Bengaluru (Bangalore), Chennai
Experience: 4 - 8 years
Salary: ₹9,00,000 - ₹14,00,000 (Best in industry)

Key skill set: Apache NiFi, Kafka Connect (Confluent), Sqoop, Kylo, Spark, Druid, Presto, RESTful services, Lambda/Kappa architectures

Responsibilities:
- Build a scalable, reliable, operable and performant big data platform for both streaming and batch analytics
- Design and implement data aggregation, cleansing and transformation layers

Skills:
- Around 4+ years of hands-on experience designing and operating large data platforms
- Experience in big data ingestion, transformation and stream/batch processing technologies such as Apache NiFi, Apache Kafka, Kafka Connect (Confluent), Sqoop, Spark, Storm and Hive
- Experience in designing and building streaming data platforms with Lambda and Kappa architectures
- Working experience with at least one NoSQL or OLAP data store such as Druid, Cassandra, Elasticsearch or Pinot
- Experience with at least one data warehousing tool such as Redshift, BigQuery or Azure SQL Data Warehouse
- Exposure to other data ingestion, data lake and querying frameworks such as Marmaray, Kylo, Drill and Presto
- Experience in designing and consuming microservices
- Exposure to security and governance tools such as Apache Ranger and Apache Atlas
- Contributions to open-source projects are a plus
- Experience with performance benchmarks is a plus

Job posted by Venky Thiriveedhi
Why apply via Cutshort?
Connect with actual hiring teams and get their fast response. No spam.