
Apache Flume Jobs in Bangalore (Bengaluru)

Explore top Apache Flume job opportunities in Bangalore (Bengaluru) at top companies and startups. All jobs are posted by verified employees, who can be contacted directly below.

Hadoop Developer

Founded 2015
Products and services
Location: Bengaluru (Bangalore)
Experience: 3 - 6 years
Salary: ₹6,00,000 - ₹15,00,000

1. Design and development of data ingestion pipelines.
2. Perform data migration and conversion activities.
3. Develop and integrate software applications using suitable development methodologies and standards, applying standard architectural patterns and taking into account critical performance characteristics and security measures.
4. Collaborate with Business Analysts, Architects, and Senior Developers to establish the physical application framework (e.g. libraries, modules, execution environments).
5. Perform end-to-end automation of the ETL process for the various datasets being ingested into the big data platform.
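Since this listing is filed under Apache Flume, a minimal sketch of the kind of ingestion code such a role involves may help orient applicants. This is an illustration only, not part of the posting: it assumes a Flume agent with an Avro source is already listening on the given host and port (both placeholders), and uses the Flume client SDK to push one event into the pipeline.

```java
import java.nio.charset.StandardCharsets;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class FlumeIngestSketch {
    public static void main(String[] args) throws EventDeliveryException {
        // Connect to a running Flume agent's Avro source
        // (host and port are placeholders for this sketch).
        RpcClient client = RpcClientFactory.getDefaultInstance("localhost", 41414);
        try {
            // Wrap one record as a Flume event; the agent routes it
            // through its channel to a configured sink (e.g. HDFS).
            Event event = EventBuilder.withBody("sample record", StandardCharsets.UTF_8);
            client.append(event);
        } finally {
            client.close();
        }
    }
}
```

In a real pipeline the client would batch events (appendBatch) and handle EventDeliveryException by reconnecting, but the outline above is the core of programmatic ingestion into Flume.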

Job posted by Harpreet Kour

Data Engineer

Founded 2014
Products and services
Location: Bengaluru (Bangalore)
Experience: 6 - 12 years
Salary: ₹7,00,000 - ₹30,00,000

Lead Data Engineer
Experience: 6-12 years
Location: Bangalore
Type: Full-time

About Digit88
Digit88 is a niche product engineering consulting company based out of Bangalore, with 6+ years of experience building offshore development centers for US startups and MNCs. The founding team has 50+ years of product engineering and services experience across India and the US.

The Opportunity
The Digit88 development team manages, and is expanding, the dedicated offshore product development team for a US (Bay Area, NYC) based NLP/chatbot platform partner that is building a next-generation AI/NLP/chatbot-based customer engagement platform. The candidate will join an existing team of 16+ engineers and help expand Platform Engineering, Production Support, and Monitoring services for our client.

Job Profile:
Digit88 is looking for an enthusiastic, self-motivated, hands-on Lead Data Engineer with experience in ETL pipelines and data analytics and strong troubleshooting skills to join our engineering team. Experience in a senior/lead engineer role at a fast-paced India/US product startup or a product engineering services company, building and managing a high-performance real-time system, is mandatory. The applicant should have experience instrumenting applications with a client library that captures data asynchronously, avoiding overhead on the application thread, and pushes it on for further processing to derive real-time analytics. Applicants must have a passion for engineering with accuracy and efficiency, be highly motivated and organized, work well as part of a team, and be able to work independently with minimal supervision. You should be able to explain a data pipeline architecture for a given problem.

To be successful in this role, you should possess:
● Extensive experience with Spark and Kafka
● Working knowledge of Java, microservices, and Spring Boot
● Extensive experience as a data engineer, able to build data pipelines, with an understanding of ETL pipelines
● Extensive experience creating and maintaining a data aggregation layer using Spark
● Experience handling and successfully managing huge volumes of streaming data
● Experience processing events using Spark
● Experience setting up and running jobs on Spark
● Experience in analytics (highly desirable)
● Ability to translate complex functional and technical requirements into detailed designs
● Extensive work experience with scalable, high-performance systems
● Working knowledge of Linux commands and scripting
● File queuing on Hadoop
● Knowledge of Druid/Elasticsearch (a definite plus)

Minimum Qualifications:
● Bachelor's degree in Computer Science or a related field
● 5+ years of experience in ETL pipelines and data analytics
● 5+ years of experience building successful production software systems
● 2+ years of experience working with NoSQL (Cassandra/MongoDB/DynamoDB/Azure Cosmos DB), including data modeling techniques for NoSQL
● 1+ years of experience working with Hive ETL/QL and building MapReduce programs on HDFS

Additional Project/Soft Skills:
● Product-from-scratch experience on at least 2 products; able to work independently with India and US based team members
● Strong verbal and written communication, with the ability to articulate problems and solutions over phone and email
● Strong sense of urgency, with a passion for accuracy and timeliness
● Ability to work calmly in high-pressure situations and manage multiple projects/tasks
● Ability to work independently, with superior skills in issue resolution
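As a rough illustration of the Spark and Kafka event processing this posting asks for (again, an illustration rather than part of the job description), the sketch below uses Spark Structured Streaming's Java API to read events from a Kafka topic and decode them. The broker address and topic name are placeholders, it assumes the spark-sql-kafka connector is on the classpath, and a production pipeline would add the aggregation layer and a durable sink rather than the console.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class KafkaEventsSketch {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("kafka-events-sketch")
                .master("local[*]") // local run for illustration only
                .getOrCreate();

        // Subscribe to a Kafka topic (broker and topic are placeholders).
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("subscribe", "app-events")
                .load();

        // Kafka rows carry binary key/value columns; decode them to strings.
        Dataset<Row> decoded = events.selectExpr(
                "CAST(key AS STRING)", "CAST(value AS STRING)");

        // Print micro-batches to the console; a real job would aggregate
        // (e.g. windowed counts) and write to a store such as HDFS or Druid.
        StreamingQuery query = decoded.writeStream()
                .format("console")
                .outputMode("append")
                .start();
        query.awaitTermination();
    }
}
```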

Job posted by Abhishek Dwivedi