
Apache Sqoop Jobs in Bangalore (Bengaluru)

Explore top Apache Sqoop job opportunities in Bangalore (Bengaluru) at top companies and startups. All jobs are added by verified employees, who can be contacted directly below.

Sr. Data Engineer (7+ years)

Founded 2015
Products and services
Location: Bengaluru (Bangalore)
Experience: 7 - 12 years
Salary: ₹20,00,000 – ₹30,00,000 (INR)

Key Result Areas
- Create and maintain optimal data pipelines.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Keep our data separated and secure.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics.
- Work with stakeholders, including the Executive, Product, Data and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Work with data and analytics experts to strive for greater functionality in our data systems.

Knowledge, Skills and Experience
Core skills: We are looking for a candidate with 7+ years of experience in a Data Engineer role who has experience with the following software and tools:
- Developing Big Data applications using Spark, Hive, Sqoop, Kafka, and MapReduce.
- Stream-processing systems: Spark Streaming, Storm, etc.
- Object-oriented/functional scripting languages: Python, Scala, etc.
- Designing and building dimensional data models to improve accessibility, efficiency, and quality of data.
- Writing advanced SQL and tuning SQL performance; experience with data science and machine learning tools and technologies is a plus.
- Relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with Azure cloud services is a plus.
- Financial services knowledge is a plus.
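
For context on the batch side of such a pipeline (relational data landed in Hive, for example via a Sqoop import, then aggregated with Spark), here is a minimal PySpark sketch. It is illustrative only, not the employer's code; the table, column and path names are hypothetical.

# Minimal PySpark batch step: aggregate a Hive table (e.g. one previously
# imported from a relational database with Sqoop) into a partitioned metrics dataset.
# All table, column and path names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-order-metrics")
    .enableHiveSupport()  # lets Spark read tables registered in the Hive metastore
    .getOrCreate()
)

# Source table assumed to exist in the Hive metastore.
orders = spark.table("staging.orders")

daily_metrics = (
    orders
    .where(F.col("status") == "COMPLETED")
    .groupBy("order_date", "region")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Write results partitioned by date so downstream queries can prune partitions.
(
    daily_metrics.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("hdfs:///warehouse/analytics/daily_order_metrics")
)

In practice, the upstream Sqoop import and a Spark job like this would typically be chained by a workflow scheduler such as Oozie or Airflow.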

Job posted by Varun Reddy

Lead Data Engineer

Founded 2012
Products and services
via Lymbyc
Location: Bengaluru (Bangalore), Chennai
Experience: 4 - 8 years
Salary: ₹9,00,000 – ₹14,00,000 (INR)

Key skill set: Apache NiFi, Kafka Connect (Confluent), Sqoop, Kylo, Spark, Druid, Presto, RESTful services, Lambda/Kappa architectures

Responsibilities:
- Build a scalable, reliable, operable and performant big data platform for both streaming and batch analytics.
- Design and implement data aggregation, cleansing and transformation layers.

Skills:
- Around 4+ years of hands-on experience designing and operating large data platforms.
- Experience in big data ingestion, transformation and stream/batch processing using Apache NiFi, Apache Kafka, Kafka Connect (Confluent), Sqoop, Spark, Storm, Hive, etc.
- Experience in designing and building streaming data platforms on Lambda and Kappa architectures.
- Working experience with at least one NoSQL or OLAP data store such as Druid, Cassandra, Elasticsearch or Pinot.
- Experience with at least one data warehousing tool such as Redshift, BigQuery or Azure SQL Data Warehouse.
- Exposure to other data ingestion, data lake and querying frameworks such as Marmaray, Kylo, Drill and Presto.
- Experience in designing and consuming microservices.
- Exposure to security and governance tools such as Apache Ranger and Apache Atlas.
- Contributions to open source projects are a plus.
- Experience with performance benchmarks is a plus.
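
Since this role calls out Kafka, Spark and Lambda/Kappa streaming designs, here is a minimal PySpark Structured Streaming sketch of a Kappa-style ingest path. It is illustrative only; the broker addresses, topic name, event schema and storage paths are hypothetical, and it assumes the spark-sql-kafka connector is available on the classpath.

# Minimal PySpark Structured Streaming sketch: consume JSON events from Kafka
# and land them as parquet for the batch/OLAP layers.
# Broker addresses, topic, schema and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events-stream-ingest").getOrCreate()

# Assumed shape of the incoming JSON events.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_time", TimestampType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers the payload as bytes in the `value` column; parse it as JSON.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .withColumn("event_date", F.to_date("event_time"))
)

# Append to a date-partitioned parquet sink; the checkpoint makes the stream restartable.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "hdfs:///datalake/events")
    .option("checkpointLocation", "hdfs:///checkpoints/events")
    .partitionBy("event_date")
    .outputMode("append")
    .start()
)

query.awaitTermination()

A production version would also need to handle late or malformed events, schema evolution, and compaction of the small files a streaming parquet sink tends to produce.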

Job posted by Venky Thiriveedhi