7+ Apache Flume Jobs in India
Apply to 7+ Apache Flume Jobs on CutShort.io. Find your next job, effortlessly. Browse Apache Flume Jobs and apply today!
One of the leading payments banks
Agency job
via Mavin RPO Solutions Pvt. Ltd. by kshiteej jagtap
Navi Mumbai
3 - 5 yrs
₹7L - ₹18L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more
Requirements:
- Proficiency in shell scripting.
- Proficiency in automation of tasks.
- Proficiency in Pyspark/Python.
- Proficiency in writing and understanding Sqoop jobs.
- Understanding of Cloudera Manager.
- Good understanding of RDBMS.
- Good understanding of Excel.
- Familiarity with Hadoop ecosystem and its components.
- Understanding of data loading tools such as Flume, Sqoop etc.
- Ability to write reliable, manageable, and high-performance code.
- Good knowledge of database principles, practices, structures, and theories.
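The Flume and Sqoop familiarity asked for above usually comes down to wiring a source, a channel, and a sink together. A minimal sketch of a Flume agent configuration (the agent and component names, log path, and NameNode address are all illustrative, not from the listing):

```properties
# Minimal Flume agent: tail an application log and land events in HDFS.
# Names (agent1, src1, ch1, sink1) and paths are made up for illustration.
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sink1

# Source: follow a log file as it grows
agent1.sources.src1.type = exec
agent1.sources.src1.command = tail -F /var/log/app/app.log
agent1.sources.src1.channels = ch1

# Channel: in-memory buffer between source and sink
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 10000

# Sink: write batches into date-partitioned HDFS directories
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = hdfs://namenode:8020/data/logs/%Y-%m-%d
agent1.sinks.sink1.channel = ch1
```

An agent like this is started with `flume-ng agent --conf conf --conf-file flume.conf --name agent1`.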
Bengaluru (Bangalore)
3 - 6 yrs
₹6L - ₹15L / yr
Apache Hadoop
Hadoop
HDFS
Apache Sqoop
Apache Flume
+5 more
1. Design and development of data ingestion pipelines.
2. Perform data migration and conversion activities.
3. Develop and integrate software applications using suitable development
methodologies and standards, applying standard architectural patterns, taking
into account critical performance characteristics and security measures.
4. Collaborate with Business Analysts, Architects and Senior Developers to
establish the physical application framework (e.g. libraries, modules, execution
environments).
5. Perform end-to-end automation of the ETL process for the various datasets
being ingested into the big data platform.
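The end-to-end ETL automation in step 5 follows the usual extract/transform/load shape. A minimal pure-Python sketch of that pattern, with SQLite standing in for the big data platform (the table name, columns, and sample data are made up for illustration):

```python
import csv
import io
import sqlite3

def etl(csv_text, conn):
    """Extract rows from CSV text, normalize fields, load into a database."""
    # Extract: parse the raw CSV into dict rows
    reader = csv.DictReader(io.StringIO(csv_text))
    # Transform: trim/lowercase names, cast amounts to float
    rows = [(r["id"], r["name"].strip().lower(), float(r["amount"]))
            for r in reader]
    # Load: bulk-insert into the target table
    conn.execute("CREATE TABLE IF NOT EXISTS events (id TEXT, name TEXT, amount REAL)")
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
    conn.commit()
    return len(rows)

raw = "id,name,amount\n1, Alice ,10.5\n2,BOB,3.0\n"
conn = sqlite3.connect(":memory:")
loaded = etl(raw, conn)
total = conn.execute("SELECT SUM(amount) FROM events").fetchone()[0]
```

In a real pipeline the extract step would read from HDFS or a Sqoop import and the load step would target Hive or HBase, but the three-phase structure is the same.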
Chennai
1 - 5 yrs
₹1L - ₹6L / yr
Hadoop
Big Data
HDFS
Apache Sqoop
Apache Flume
+2 more
• Looking for a Big Data Engineer with 3+ years of experience.
• Hands-on experience with MapReduce-based platforms such as Pig, Spark, and Shark.
• Hands-on experience with data pipeline tools such as Kafka, Storm, and Spark Streaming.
• Experience storing and querying data with Sqoop, Hive, MySQL, HBase, Cassandra, MongoDB, Drill, Phoenix, and Presto.
• Hands-on experience managing Big Data on a cluster with HDFS and MapReduce.
• Experience handling streaming data in real time with Kafka, Flume, Spark Streaming, Flink, and Storm.
• Experience with Azure cloud, Cognitive Services, and Databricks is preferred.
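Real-time handling with Kafka, Spark Streaming, Flink, or Storm ultimately means maintaining aggregates over a moving window of events. A toy pure-Python sketch of a count-based sliding window (not an actual streaming framework; event names are invented):

```python
from collections import Counter, deque

def windowed_counts(events, window_size):
    """Count event keys over a sliding window of the last `window_size` events,
    mimicking the windowed aggregations of Spark Streaming or Flink."""
    window = deque(maxlen=window_size)  # deque drops the oldest event automatically
    snapshots = []
    for key in events:
        window.append(key)
        # Emit the current per-key counts after each event arrives
        snapshots.append(dict(Counter(window)))
    return snapshots

snaps = windowed_counts(["click", "view", "click", "click"], window_size=3)
```

Production systems add time-based windows, checkpointing, and fault tolerance on top of this basic idea.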
Pune
2 - 5 yrs
₹1L - ₹18L / yr
Hadoop
Spark
Apache Hive
Apache Flume
Java
+5 more
Description:
- Deep experience and understanding of Apache Hadoop and surrounding technologies required; experience with Spark, Impala, Hive, Flume, Parquet, and MapReduce.
- Strong understanding of development languages including Java, Python, Scala, and shell scripting.
- Expertise in Apache Spark 2.x framework principles and usage.
- Proficient in developing Spark batch and streaming jobs in Python, Scala, or Java.
- Proven experience in performance tuning of Spark applications, from both the application-code and configuration perspectives.
- Proficient in Kafka and its integration with Spark.
- Proficient in Spark SQL and data warehousing techniques using Hive.
- Very proficient in Unix shell scripting and operating on Linux.
- Knowledge of any cloud-based infrastructure.
- Strong understanding of data profiling concepts and the ability to operationalize analyses into design and development activities.
- Experience with software development best practices: version control systems, automated builds, etc.
- Experienced in, and able to lead, all phases of the Software Development Life Cycle on any project (feasibility planning, analysis, development, integration, test, and implementation).
- Capable of working within a team or individually.
- Experience creating technical documentation.
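The Spark batch development this listing asks for is typically built from the flatMap → map → reduceByKey data flow. A plain-Python sketch of that flow (not real PySpark; shown only to illustrate the pattern, with invented sample input):

```python
from itertools import chain

def word_count(lines):
    """Classic Spark-style batch job expressed in plain Python:
    flatMap (split lines) -> map (pair with 1) -> reduceByKey (sum)."""
    # flatMap: split each line into individual words
    words = chain.from_iterable(line.split() for line in lines)
    # map + reduceByKey: accumulate a count per word
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts

counts = word_count(["to be or not to be"])
```

In actual PySpark the same job is a chain of RDD or DataFrame transformations, and tuning it means controlling partitioning, shuffle behavior, and executor memory rather than loop structure.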
Mumbai
3 - 100 yrs
₹4L - ₹15L / yr
Spark
Big Data
Hadoop
HDFS
Apache Sqoop
+2 more
Looking for Big Data Developers in Mumbai.
Noida, Hyderabad, NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹4L - ₹12L / yr
Spark
Hadoop
MongoDB
Python
Scala
+3 more
Looking for a technically sound and excellent trainer on big data technologies. Get an opportunity to become popular in the industry and get visibility. Host regular sessions on Big data related technologies and get paid to learn.
Pune
3 - 7 yrs
₹10L - ₹15L / yr
HDFS
Apache Flume
Apache HBase
Hadoop
Impala
+3 more
Securonix is a Big Data security analytics product company, with the only product that delivers real-time user and entity behavior analytics (UEBA) on Big Data.