
Apache HBase Jobs

Explore top Apache HBase job opportunities at top companies and startups. All jobs are added by verified employees who can be contacted directly below.

Machine Learning Engineer

Founded 2018 · Products and services
Location: Remote, Bengaluru (Bangalore)
Experience: 3 - 9 years
Salary: Best in industry, 20 - 50 lacs/annum

How often have you read a job description and thought, "I have read this before," or "the real job description will come out during the interviews, so why bother reading this"? Even when job descriptions are well written, i.e., not just copied and pasted from somewhere and genuinely trying to do justice to what you'd be doing on the job, the 2-4 months of a typical interview cycle make them obsolete by the time you actually start. Not surprising, then: just as you ignore or skim job descriptions, most recruiters do the same with your resume; they look for specific keywords and leave all the assessment for the interview itself. Even worse, in some cases the human recruiter is being replaced by an algorithm to automate screening. You, therefore, stuff as many keywords into your resume as possible to ensure you get that interview call. Nobody is being disingenuous in this process, but the process itself is fundamentally broken. That is exactly what we want to solve: create an effective "matching of work to the worker" that is an accurate and real-time reflection of both ends, thus increasing actual engagement with the work itself.

Responsibilities
In this role, you'll build and implement novel Machine Learning and Deep Learning systems on our platform as well as help build the infrastructure to train and deploy them. Specifically, you will:
- Design and implement the infrastructure required to train models at scale.
- Work with the data team's infrastructure to build real-time and offline feature databases.
- Work with the data team to create the infrastructure to build and maintain the datasets from which models are created.
- Build the model serving systems with which we can deploy our models to production.
- As we grow, scale the ML system to support more use cases and ML model types.

Requirements
- 1+ years of experience building production-ready ML models and systems.
- 3+ years of experience building distributed systems and/or scalable backend systems, and the ability to maintain such systems in production.
- Strong software engineering fundamentals: understanding of data structures and algorithms, O-notation, and the ability to maintain a test suite and write clear, maintainable code.
- Familiarity with the majority of the following tools: TensorFlow, NumPy, SciPy, Spark ML, pandas, scikit-learn.
- Demonstrated leadership and self-direction, and a willingness to both teach others and learn new techniques.
- Experience with big data processing and storage systems: Hadoop, Spark, HBase, Cassandra, etc.
- Strong programming skills in Python. Intermediate to advanced knowledge of SQL and the ability to wrangle data from many disparate data sources.
- Technologies we use: MySQL, Python, AWS, Snowflake, R, and Looker, among many others.
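For a sense of the pandas/scikit-learn toolchain this role lists, here is a minimal, illustrative sketch of a train-and-persist loop; the dataset, feature names, and model choice are assumptions, not part of the listing:

```python
# Illustrative sketch only: a tiny train-and-persist loop with scikit-learn.
# The feature table below is hypothetical; in production it would come from
# the real-time/offline feature databases this role describes.
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "years_experience": [1, 3, 5, 7, 2, 8],
    "skills_matched":   [2, 4, 6, 8, 3, 9],
    "hired":            [0, 0, 1, 1, 0, 1],
})

X_train, X_test, y_train, y_test = train_test_split(
    df[["years_experience", "skills_matched"]], df["hired"],
    test_size=0.33, random_state=42,
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Persist the trained model so a separate serving system can load it.
joblib.dump(model, "match_model.joblib")
```

In practice the persisted artifact would be loaded by the model serving system the responsibilities mention, rather than read back in the same script.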

Job posted by Maitrayee

Big Data Developer

Founded 2011 · Products and services
Location: Chennai
Experience: 1 - 5 years
Salary: Best in industry, 1 - 6 lacs/annum

• Looking for a Big Data Engineer with 3+ years of experience.
• Hands-on experience with MapReduce-based platforms like Pig, Spark, and Shark.
• Hands-on experience with data pipeline tools like Kafka, Storm, and Spark Streaming.
• Store and query data with Sqoop, Hive, MySQL, HBase, Cassandra, MongoDB, Drill, Phoenix, and Presto.
• Hands-on experience in managing Big Data on a cluster with HDFS and MapReduce.
• Handle streaming data in real time with Kafka, Flume, Spark Streaming, Flink, and Storm.
• Experience with Azure cloud, Cognitive Services, and Databricks is preferred.
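Since the listing calls out storing and querying data with HBase, here is a minimal, illustrative sketch using the happybase Python client; the table name, column family, row keys, and Thrift server address are hypothetical:

```python
# Illustrative sketch only: basic HBase put / get / scan via the Thrift gateway.
import happybase

# Assumes an HBase Thrift server on localhost:9090 and an existing table
# 'candidates' with a column family 'profile' (both names are made up here).
connection = happybase.Connection("localhost", port=9090)
table = connection.table("candidates")

# Write one row keyed by candidate id; cells live under the column family.
table.put(b"cand-001", {
    b"profile:name": b"Jane Doe",
    b"profile:skills": b"hbase,spark,kafka",
})

# Point read of the same row by key.
row = table.row(b"cand-001")
print(row[b"profile:skills"])

# Scan a key range, e.g. all rows sharing the 'cand-' prefix.
for key, data in table.scan(row_prefix=b"cand-"):
    print(key, data)

connection.close()
```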

Job posted by John Richardson

Hadoop Developer

Founded 2012 · Products and services
Location: Bengaluru (Bangalore)
Experience: 4 - 7 years
Salary: Best in industry, 23 - 30 lacs/annum

Position Description
- Demonstrates up-to-date expertise in Software Engineering and applies this to the development, execution, and improvement of action plans.
- Models compliance with company policies and procedures and supports company mission, values, and standards of ethics and integrity.
- Provides and supports the implementation of business solutions.
- Provides support to the business; troubleshoots business and production issues and provides on-call support.

Minimum Qualifications
- BS/MS in Computer Science or a related field.
- 5+ years' experience building web applications.
- Solid understanding of computer science principles and excellent soft skills.
- Understanding of major algorithms like searching and sorting.
- Strong skills in writing clean code using languages like Java and J2EE technologies.
- Understanding of how to engineer RESTful microservices, and knowledge of major software patterns like MVC, Singleton, Facade, and Business Delegate.
- Deep knowledge of web technologies such as HTML5, CSS, and JSON.
- Good understanding of continuous integration tools and frameworks like Jenkins.
- Experience working in Agile environments, like Scrum and Kanban.
- Experience with performance tuning for very large-scale apps.
- Experience writing scripts using Perl, Python, and shell scripting.
- Experience writing jobs using open-source cluster computing frameworks like Spark.
- Relational database design experience with MySQL, Oracle, and SOLR; NoSQL experience with Cassandra, MongoDB, and Hive.
- Aptitude for writing clean, succinct, and efficient code.
- Attitude to thrive in a fun, fast-paced, start-up-like environment.
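For the "writing jobs using open-source cluster computing frameworks like Spark" requirement, here is a minimal, illustrative PySpark batch job; the HDFS path and the schema (a skills array per JSON record) are assumptions:

```python
# Illustrative sketch only: count skill occurrences across job records with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("skills-count").getOrCreate()

# Hypothetical input: one JSON record per line with a 'skills' array field.
jobs = spark.read.json("hdfs:///data/jobs/*.json")

skill_counts = (
    jobs.select(F.explode("skills").alias("skill"))
        .groupBy("skill")
        .count()
        .orderBy(F.desc("count"))
)

skill_counts.show(20, truncate=False)
spark.stop()
```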

Job posted by Sampreetha Pai

Hadoop Developer

Founded 2016 · Products and services
Location: Mumbai
Experience: 3 - 20+ years
Salary: Best in industry, 4 - 15 lacs/annum

Looking for Big Data Developers in Mumbai.

Job posted by Sheela P

Hadoop Engineers

Founded 2012 · Products and services
Location: Bengaluru (Bangalore)
Experience: 4 - 7 years
Salary: Best in industry, 24 - 30 lacs/annum

Position Description
- Demonstrates up-to-date expertise in Software Engineering and applies this to the development, execution, and improvement of action plans.
- Models compliance with company policies and procedures and supports company mission, values, and standards of ethics and integrity.
- Provides and supports the implementation of business solutions.
- Provides support to the business; troubleshoots business and production issues and provides on-call support.

Minimum Qualifications
- BS/MS in Computer Science or a related field.
- 5+ years' experience building web applications.
- Solid understanding of computer science principles and excellent soft skills.
- Understanding of major algorithms like searching and sorting.
- Strong skills in writing clean code using languages like Java and J2EE technologies.
- Understanding of how to engineer RESTful microservices, and knowledge of major software patterns like MVC, Singleton, Facade, and Business Delegate.
- Deep knowledge of web technologies such as HTML5, CSS, and JSON.
- Good understanding of continuous integration tools and frameworks like Jenkins.
- Experience working in Agile environments, like Scrum and Kanban.
- Experience with performance tuning for very large-scale apps.
- Experience writing scripts using Perl, Python, and shell scripting.
- Experience writing jobs using open-source cluster computing frameworks like Spark.
- Relational database design experience with MySQL, Oracle, and SOLR; NoSQL experience with Cassandra, MongoDB, and Hive.
- Aptitude for writing clean, succinct, and efficient code.
- Attitude to thrive in a fun, fast-paced, start-up-like environment.

Job posted by Sampreetha Pai

Hadoop Lead Engineers

Founded 2012 · Products and services
Location: Bengaluru (Bangalore)
Experience: 7 - 9 years
Salary: Best in industry, 27 - 34 lacs/annum

Position Description
- Assists in providing guidance to small groups of two to three engineers, including offshore associates, for assigned engineering projects.
- Demonstrates up-to-date expertise in Software Engineering and applies this to the development, execution, and improvement of action plans.
- Generates weekly, monthly, and yearly reports using JIRA and open-source tools and provides updates to leadership teams.
- Proactively identifies issues and identifies root causes for critical issues.
- Works with cross-functional teams, sets up KT sessions, and mentors team members.
- Coordinates with the Sunnyvale and Bentonville teams.
- Models compliance with company policies and procedures and supports company mission, values, and standards of ethics and integrity.
- Provides and supports the implementation of business solutions.
- Provides support to the business; troubleshoots business and production issues and provides on-call support.

Minimum Qualifications
- BS/MS in Computer Science or a related field.
- 8+ years' experience building web applications.
- Solid understanding of computer science principles and excellent soft skills.
- Understanding of major algorithms like searching and sorting.
- Strong skills in writing clean code using languages like Java and J2EE technologies.
- Understanding of how to engineer RESTful microservices, and knowledge of major software patterns like MVC, Singleton, Facade, and Business Delegate.
- Deep knowledge of web technologies such as HTML5, CSS, and JSON.
- Good understanding of continuous integration tools and frameworks like Jenkins.
- Experience working in Agile environments, like Scrum and Kanban.
- Experience with performance tuning for very large-scale apps.
- Experience writing scripts using Perl, Python, and shell scripting.
- Experience writing jobs using open-source cluster computing frameworks like Spark.
- Relational database design experience with MySQL, Oracle, and SOLR; NoSQL experience with Cassandra, MongoDB, and Hive.
- Aptitude for writing clean, succinct, and efficient code.
- Attitude to thrive in a fun, fast-paced, start-up-like environment.

Job posted by Sampreetha Pai

Python Developer

Founded 2015 · Products and services
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: Best in industry, 7 - 15 lacs/annum

We are looking for a full-time senior engineer to lead the Python-driven dev team. The candidate will be responsible for designing, developing, and taking to production highly scalable applications, with the opportunity to work on leading ML frameworks as well as custom-built frameworks for enhanced financial analytics.

Crediwatch is an automated and intelligent data curation platform that helps businesses make faster and smarter decisions. Crediwatch aids sophisticated credit and other risk assessment models by providing data intelligence, predictive analysis, and decision-enabling technologies that maximise customer profitability and performance. Crediwatch has received accolades in the Citibank Tech4Integrity challenge (worldwide), the Barclays Rise accelerator, and Tech30 by YourStory, to name a few. We are based in the heart of Bangalore and are growing fast.

Job posted by Hemanth G C

Hadoop Developer

Founded 2008 · Products and services
Location: Pune
Experience: 3 - 7 years
Salary: Best in industry, 10 - 15 lacs/annum

Securonix is a Big Data security analytics product company, offering the only product that delivers real-time behavior analytics (UEBA) on Big Data.

Job posted by Ramakrishna Murthy