
HDFS Jobs in Bangalore (Bengaluru)

Explore top HDFS job opportunities in Bangalore (Bengaluru) at leading companies and startups. All jobs are posted by verified employees, who can be contacted directly below.

Principal Software Engineer

Founded 2015
Products and services
via Dremio
Location: Hyderabad, Bengaluru (Bangalore)
Experience: 15 - 20 years
Salary: Best in industry

About the Role
The Dremio India team owns the DataLake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics, supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides many opportunities to learn, deliver, and grow in your career. We are looking for technical leaders with passion for and experience in architecting and delivering high-quality distributed systems at massive scale.

Responsibilities & Ownership
- Lead end-to-end delivery and customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Lead and mentor others on concurrency and parallelization to deliver scalability, performance, and resource optimization in a multithreaded and distributed environment.
- Propose and promote strategic company-wide tech investments, taking into account business goals, customer requirements, and industry standards.
- Lead the team in solving complex, unknown, and ambiguous problems and customer issues cutting across team and module boundaries, applying technical expertise and influencing others.
- Review and influence designs of other team members.
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure.
- Partner with other leaders to nurture innovation and engineering excellence in the team.
- Drive priorities with others to facilitate timely accomplishment of business objectives.
- Perform root-cause analysis (RCA) of customer issues and drive investments to avoid similar issues in the future.
- Collaborate with Product Management, Support, and field teams to ensure that customers are successful with Dremio.
- Proactively suggest learning opportunities in new technologies and skills, and be a role model for constant learning and growth.

Requirements
- B.S./M.S./equivalent in Computer Science or a related technical field, or equivalent experience.
- Fluency in Java/C++ with 15+ years of experience developing production-level software.
- Strong foundation in data structures, algorithms, multithreaded and asynchronous programming models, and their use in developing distributed and scalable systems.
- 8+ years of experience developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully.
- Subject-matter expert in one or more of: query processing or optimization, distributed systems, concurrency, microservice-based architectures, data replication, networking, storage systems.
- Experience driving company-wide initiatives, convincing stakeholders, and delivering them.
- Expert in solving complex, unknown, and ambiguous problems spanning teams, and in taking the initiative to plan and deliver them with high quality.
- Ability to anticipate and propose plan/design changes based on changing requirements.
- Passion for quality, zero-downtime upgrades, availability, resiliency, and uptime of the platform.
- Passion for learning and delivering using the latest technologies.
- Hands-on experience working on projects on AWS, Azure, and GCP.
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and GCP).
- Understanding of distributed file systems such as S3, ADLS, or HDFS.
- Excellent communication skills and an affinity for collaboration and teamwork.

Job posted by
Kiran B
Apply for job

Hadoop Developer

Founded 2015
Products and services
Location: Bengaluru (Bangalore)
Experience: 3 - 6 years
Salary: ₹6,00,000 - ₹15,00,000 (INR)

1. Design and develop data ingestion pipelines.
2. Perform data migration and conversion activities.
3. Develop and integrate software applications using suitable development methodologies and standards, applying standard architectural patterns and taking into account critical performance characteristics and security measures.
4. Collaborate with Business Analysts, Architects, and Senior Developers to establish the physical application framework (e.g. libraries, modules, execution environments).
5. Perform end-to-end automation of the ETL process for the various datasets being ingested into the big data platform.
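The ingestion-pipeline and ETL work described above typically follows an extract-transform-load pattern. A minimal sketch in plain Python (the function names, field names, and the JSON Lines sink are illustrative assumptions, not part of this role's actual stack; production pipelines would target Spark or HDFS rather than strings):

```python
import csv
import io
import json

def extract(csv_text):
    """Extract: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: normalize names, cast amounts, drop malformed rows."""
    out = []
    for row in rows:
        try:
            out.append({
                "name": row["name"].strip().lower(),
                "amount": float(row["amount"]),
            })
        except (KeyError, ValueError):
            continue  # skip records that fail validation
    return out

def load(records):
    """Load: serialize to JSON Lines (a stand-in for a real HDFS/warehouse sink)."""
    return "\n".join(json.dumps(r) for r in records)

raw = "name,amount\n Alice ,10.5\nBob,oops\ncarol,3\n"
print(load(transform(extract(raw))))
```

Keeping the three stages as separate functions is what makes "end-to-end automation" tractable: each stage can be tested and rerun independently when a dataset's format changes.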

Job posted by
Harpreet kour
Apply for job

Data Engineer

Founded 2018
Products and services
Location: Remote, Mumbai, Pune, Bengaluru (Bangalore)
Experience: 4 - 10 years
Salary: Best in industry

What is Contentstack?
Contentstack combines the best of Content Management System (CMS) and Digital Experience Platform (DXP) technology. It enables enterprises to manage content across all digital channels and create inimitable digital experiences. The Contentstack platform was designed from the ground up for large-scale, complex, and mission-critical deployments. Recently recognized as a Gartner Peer Insights Customers' Choice for WCM, Contentstack is the preferred API-first, headless CMS for enterprises across the globe.

What Are We Looking For?
Contentstack is looking for a Data Engineer.

Roles and responsibilities:
- Design and scale ETL pipelines, and ensure data sanity.
- Collaborate with multiple groups and drive operational efficiency.
- Develop, construct, test, and maintain architectures.
- Align architecture with business requirements.
- Identify ways to improve data reliability, efficiency, and quality.
- Optimize database systems for performance and reliability.
- Implement model workflows to prepare/analyse/learn/predict and supply the outcomes through API contract(s).
- Establish programming patterns, document components, and provide infrastructure for analysis and execution.
- Set up practices for data reporting and continuous monitoring.
- Strive for excellence, stay open to new ideas, and contribute to communities.
- Industrialize the data science models and embed intelligence in product and business applications.
- Find hidden patterns using data.
- Prepare data for predictive and prescriptive modeling.
- Deploy sophisticated analytics programs, machine learning, and statistical methods.

Mandatory Skills
- 3+ years of relevant work experience as a Data Engineer.
- Working experience with HDFS, Bigtable, MapReduce, Spark, data warehouses, ETL, etc.
- Advanced proficiency in Java, Scala, SQL, NoSQL.
- Strong knowledge of Shell/Perl/R/Python/Ruby.
- Proficiency in statistical procedures, experiments, and machine learning techniques.
- Exceptional problem-solving abilities.

Job type: Full-time employment
Job location: Mumbai / Pune / Bangalore / Remote
Work schedule: Monday to Friday, 10am to 7pm
Minimum qualification: Graduate
Years of experience: 3+ years
No. of positions: 2
Travel opportunities: On a need basis within/outside India. Candidate should have a valid passport.

What Really Gets Us Excited About You?
- Experience working with product-based start-up companies.
- Knowledge of working with SaaS products.

What Do We Offer?
Interesting Work | We hire curious trendspotters and brave trendsetters. This is NOT your boring, routine, cushy, rest-and-vest corporate job. This is the "challenge yourself" role where you learn something new every day, never stop growing, and have fun while you're doing it.

Tribe Vibe | We are more than colleagues, we are a tribe. We have a strict "no a**hole policy" and enforce it diligently. This means we spend time together - with spontaneous office happy hours, organized outings, and community volunteer opportunities. We are a diverse and distributed team, but we like to stay connected.

Bragging Rights | We are dreamers and dream makers, hustlers, and honey badgers. Our efforts pay off, and we work with the most prestigious brands, from big-name retailers to airlines to professional sports teams. Your contribution will make an impact with many of the most recognizable names in almost every industry, including Chase, the Miami HEAT, Cisco, Shell, Express, Riot Games, Icelandair, Morningstar, and many more!

A Seat at the Table | One Team One Dream is one of our values, and it shows. We don't believe in artificial hierarchies. If you're part of the tribe, you get a seat at the table. This includes unfiltered access to our C-Suite and regular updates about the business and its performance. Which, btw, is through the roof, so it's a great time to be joining.

Job posted by
Rahul Jana
Apply for job

Data Engineer

Founded 2006
Products and services
Location: Bengaluru (Bangalore)
Experience: 4 - 9 years
Salary: ₹15,00,000 - ₹30,00,000 (INR)

REQUIREMENTS
- Previous experience working in large-scale data engineering.
- 4+ years of experience in data engineering and/or backend technologies; cloud experience (any provider) is mandatory.
- Previous experience architecting and designing backends for large-scale data processing.
- Familiarity and experience with different technologies related to data engineering: various database technologies, Hadoop, Spark, Storm, Hive, etc.
- Hands-on, with the ability to contribute a key portion of the data engineering backend.
- Self-inspired and motivated to drive for exceptional results.
- Familiarity and experience with the different stages of data engineering: data acquisition, data refining, large-scale data processing, and efficient data storage for business analysis.
- Familiarity and experience with different DB technologies and how to scale them.

RESPONSIBILITIES
- End-to-end responsibility for the data engineering architecture, design, development, and implementation.
- Build data engineering workflows for large-scale data processing.
- Discover opportunities in data acquisition.
- Bring industry best practices to the data engineering workflow.
- Develop data set processes for data modelling, mining, and production.
- Take on additional tech responsibilities to drive an initiative to completion.
- Recommend ways to improve data reliability, efficiency, and quality.
- Go out of your way to reduce complexity.
- Be humble and outgoing - an engineering cheerleader.

Job posted by
Meenu Singh
Apply for job
Why apply via CutShort?
Connect with actual hiring teams and get their fast response. No spam.