
Apache Sqoop Jobs in Bangalore (Bengaluru)

Explore top Apache Sqoop job opportunities in Bangalore (Bengaluru) at top companies and startups. All jobs are added by verified employees, who can be contacted directly below.

Big Data Engineer
at Rakuten

Founded 2005
Products and services
200-500 employees
via zyoin
Location: Remote, Bengaluru (Bangalore)
Experience: 5 - 8 years
Salary: Best in industry (₹20,00,000 - ₹38,00,000)

Job posted by RAKESH RANJAN

Hadoop Developer
at MNC

Founded 2015
Products and services
50-200 employees
Location: Bengaluru (Bangalore)
Experience: 3 - 6 years
Salary: Best in industry (₹6,00,000 - ₹15,00,000)

Job posted by Harpreet kour

Lead Data Engineer
at Lymbyc

Founded 2012
Products and services
100-500 employees
via Lymbyc
Location: Bengaluru (Bangalore), Chennai
Experience: 4 - 8 years
Salary: Best in industry (₹9,00,000 - ₹14,00,000)

Key skill set: Apache NiFi, Kafka Connect (Confluent), Sqoop, Kylo, Spark, Druid, Presto, RESTful services, Lambda/Kappa architectures

Responsibilities:
- Build a scalable, reliable, operable and performant big data platform for both streaming and batch analytics
- Design and implement data aggregation, cleansing and transformation layers

Skills:
- 4+ years of hands-on experience designing and operating large data platforms
- Experience in big data ingestion, transformation and stream/batch processing using Apache NiFi, Apache Kafka, Kafka Connect (Confluent), Sqoop, Spark, Storm, Hive, etc.
- Experience designing and building streaming data platforms on Lambda and Kappa architectures
- Working experience with at least one NoSQL or OLAP data store, such as Druid, Cassandra, Elasticsearch or Pinot
- Experience with a data warehousing tool such as Redshift, BigQuery or Azure SQL Data Warehouse
- Exposure to other data ingestion, data lake and querying frameworks such as Marmaray, Kylo, Drill and Presto
- Experience designing and consuming microservices
- Exposure to security and governance tools such as Apache Ranger and Apache Atlas
- Contributions to open-source projects are a plus
- Experience with performance benchmarks is a plus
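Since Sqoop is the common thread across these roles, here is a minimal, illustrative sketch of the kind of batch ingestion it is typically used for in such a platform: importing a relational table into HDFS. The host, database, credentials, table name and target path below are hypothetical placeholders, not details from any of these postings.

    # Illustrative only: import a hypothetical "orders" table from MySQL into
    # HDFS as Parquet files, using 4 parallel map tasks split on the primary key.
    sqoop import \
      --connect jdbc:mysql://db.example.com:3306/sales \
      --username etl_user \
      --password-file /user/etl/.db_password \
      --table orders \
      --split-by order_id \
      --num-mappers 4 \
      --as-parquetfile \
      --target-dir /data/raw/sales/orders

For recurring batch loads, Sqoop's --incremental append (or lastmodified) mode, together with --check-column and --last-value, can be used to pull only new or changed rows and keep such a raw layer in sync with the source table.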

Job posted by Venky Thiriveedhi
Why apply via CutShort?
Connect with actual hiring teams and get a fast response. No spam.