
Hadoop Administrator
Posted by Ramakrishna Murthy


Locations

Bengaluru (Bangalore)

Experience

2 - 5 years

Salary

INR 5L - 15L

Skills

Hadoop
Cloudera
Hortonworks

Job description

Securonix is a security analytics product company. Our product provides real-time behavior analytics capabilities and uses the following Hadoop components: Kafka, Spark, Impala, and HBase. We support very large clusters for all our customers globally, with full access to the cluster. Cloudera certification is a big plus.

About the company


Founded

2008

Type

Product

Size

250+ employees

Stage

Bootstrapped

Similar jobs

Enterprise Architect

Founded 2012
Products and services
51-250 employees
Raised funding
Big Data
Hadoop
HDFS
HIVE
data streaming
IAAS
Azure
.Net
Location
Bengaluru (Bangalore)
Experience
10 - 15 years
Salary
35 - 50 lacs/annum

- At least 10 years of hands-on experience migrating complex software packages and products to Azure (Cloud Service Provider) IaaS and PaaS
- At least 7 years of hands-on experience with programming and scripting languages (.Net, C#, WCF, MVC Web API, SQL Server, SQL Azure, PowerShell)
- Good to have: experience with IT systems, operations, and automation and configuration tools that enable continuous integration and deployment (Jenkins)
- Solid understanding of traditional relational database management systems (MS SQL)
- Ability to wear multiple hats across the software development life cycle (requirements, design, code development, QA, testing, and deployment); experience working in an Agile/Scrum methodology
- Strong analytical and communication skills

Job posted by Thouseef Ahmed

Data Scientist

Founded 2017
Product
1-5 employees
Raised funding
Data Science
Python
Hadoop
Elastic Search
Machine Learning
Big Data
Spark
Algorithms
Location
Bengaluru (Bangalore)
Experience
3 - 5 years
Salary
12 - 25 lacs/annum

Responsibilities (3-5 years' experience):
- Build a strong, scalable crawler system to leverage external user and content data from Facebook, YouTube, and other internet products and services
- Extract top trending keywords and topics from social media
- Independently design and build the initial version of a real-time analytics product that uses machine learning models to recommend video content in real time to 10M+ user profiles
- Architect and build Big Data infrastructure using Java, Kafka, Storm, Hadoop, Spark, and other related frameworks; experience with Elasticsearch is a plus
- Excellent analytical, research, and problem-solving skills; in-depth knowledge of data structures

Desired Skills and Experience:
- B.S./M.S. degree in computer science, mathematics, statistics, or a similar quantitative field from a good college
- 3+ years of work experience in a relevant role (data engineer, R&D engineer, etc.)
- Experience with machine learning and prediction and recommendation techniques
- Experience with Hadoop/MapReduce/Elastic Stack (ELK) and Big Data querying tools such as Pig, Hive, and Impala
- Proficiency in a major programming language (e.g., Java/C/Scala) and/or a scripting language (Python)
- Experience with one or more NoSQL databases, such as MongoDB, Cassandra, HBase, Hive, Vertica, or Elasticsearch
- Experience with cloud solutions (AWS); strong knowledge of Linux and Apache
- Experience with a MapReduce framework such as Spark/EMR
- Experience building reports and/or data visualizations
- Strong communication skills and the ability to discuss the product with PMs and business owners

Job posted by Xin Lin

Big Data Evangelist

Founded 2016
Products and services
6-50 employees
Profitable
Spark
Hadoop
Apache Kafka
Apache Flume
Scala
Python
MongoDB
Cassandra
Location
Noida
Experience
2 - 6 years
Salary
4 - 12 lacs/annum

Looking for a technically sound and excellent trainer on Big Data technologies. This is an opportunity to become well known in the industry and gain visibility. Host regular sessions on Big Data technologies and get paid to learn.

Job posted by Suchit Majumdar

Data Scientist

Founded 2013
Product
6-50 employees
Raised funding
Big Data
Data Science
Machine Learning
R Programming
Python
Haskell
Hadoop
Location
Mumbai
Experience
3 - 7 years
Salary
5 - 15 lacs/annum

Data Scientist - we are looking for a candidate to build great recommendation engines and power an intelligent m.Paani user journey.

Responsibilities:
- Data mining using methods such as associations, correlations, inference, clustering, and graph analysis
- Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume
- Design and implement machine learning, information extraction, and probabilistic matching algorithms and models
- Care about designing the full machine learning pipeline
- Extend the company's data with third-party sources
- Enhance data collection procedures
- Process, clean, and verify collected data
- Perform ad hoc analysis of the data and present clear results
- Create advanced analytics products that provide actionable insights

The Individual - we are looking for a candidate with the following skills, experience, and attributes:
- 2+ years of work experience in machine learning
- An educational qualification relevant to the role: a degree in statistics, certificate courses in Big Data, machine learning, etc.
- Knowledge of machine learning techniques and algorithms
- Knowledge of languages and toolkits such as Python, R, and NumPy
- Knowledge of data visualization tools such as D3.js and ggplot2
- Knowledge of query languages such as SQL, Hive, and Pig
- Familiarity with Big Data architecture and tools such as Hadoop, Spark, and MapReduce
- Familiarity with NoSQL databases such as MongoDB, Cassandra, and HBase
- Good applied statistics skills: distributions, statistical testing, regression, etc.

Compensation & Logistics: This is a full-time opportunity. Compensation will be in line with a startup and based on qualifications and experience. The position is based in Mumbai, India, and the candidate must live in Mumbai or be willing to relocate.

Job posted by Julie K

Database Architect

Founded 2017
Products and services
6-50 employees
Raised funding
ETL
Data Warehouse (DWH)
DWH Cloud
Hadoop
Apache Hive
Spark
MongoDB
PostgreSQL
Location
Bengaluru (Bangalore)
Experience
5 - 10 years
Salary
10 - 20 lacs/annum

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include developing Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key Responsibilities:
- Create a GRAND Data Lake and Warehouse that pools the data from GRAND's different regions and stores in GCC
- Ensure source data quality measurement, enrichment, and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure

Skills Needed:
- Very strong SQL; demonstrated experience with RDBMSs (e.g., SQL Server, Postgres, MongoDB); Unix shell scripting preferred
- Experience with Unix and comfort working with the shell (bash or Korn shell preferred)
- Good understanding of data warehousing concepts and Big Data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Align with the systems engineering team to propose and deploy the new hardware and software environments required for Hadoop, and to expand existing environments
- Work with data delivery teams to set up new Hadoop users, including setting up Linux users and setting up and testing HDFS, Hive, Pig, and MapReduce access for the new users
- Cluster maintenance, including creation and removal of nodes, using tools like Ganglia, Nagios, and Cloudera Manager Enterprise
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screen Hadoop cluster job performance and do capacity planning
- Monitor Hadoop cluster connectivity and security
- File system management and monitoring; HDFS support and maintenance
- Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades as required
- Define, develop, document, and maintain Hive-based ETL mappings and scripts

Job posted by Rahul Malani

Data Scientist

Founded 2015
Services
6-50 employees
Profitable
Big Data
Data Science
Machine Learning
R Programming
Python
Haskell
Hadoop
Location
Hyderabad
Experience
6 - 10 years
Salary
10 - 15 lacs/annum

It is one of the largest communication technology companies in the world. They operate America's largest 4G LTE wireless network and the nation's premier all-fiber broadband network.

Job posted by Sangita Deka

Hadoop Developer

Founded 2008
Product
250+ employees
Bootstrapped
HDFS
Apache Flume
Apache HBase
Hadoop
Impala
Apache Kafka
SolrCloud
Apache Spark
Location
Pune
Experience
3 - 7 years
Salary
10 - 15 lacs/annum

Securonix is a Big Data security analytics product company, and the only product that delivers real-time user and entity behavior analytics (UEBA) on Big Data.

Job posted by Ramakrishna Murthy

HBase Architect Developer

Founded 2017
Products and services
6-50 employees
Bootstrapped
Apache HBase
Hadoop
MapReduce
Location
Bengaluru (Bangalore)
Experience
1 - 3 years
Salary
6 - 20 lacs/annum

www.aaknet.co.in/careers/careers-at-aaknet.html Are you extraordinary, a rock star who has hardly found a place to leverage or challenge your potential, and who has not yet spotted a skyrocketing opportunity? Come play with us and face the challenges we can throw at you; chances are you might be humbled (positively), though do not take it too seriously! Please be informed that we rate character and attitude as high as, if not higher than, your skills, experience, and sharpness. :) Best wishes & regards, Team Aak!

Job posted by Debdas Sinha

Data Scientist

Founded 2012
Product
51-250 employees
Profitable
Big Data
Data Science
Machine Learning
R Programming
Python
Haskell
Hadoop
Location
Bengaluru (Bangalore)
Experience
2 - 6 years
Salary
5 - 25 lacs/annum

Do apply if any of this sounds familiar!
- You have expertise in NLP, machine learning, information retrieval, and data mining.
- You have experience building systems based on machine learning and/or deep learning methods.
- You have expertise in graphical models such as HMMs and CRFs.
- You are familiar with learning to rank, matrix factorization, and recommendation systems.
- You are familiar with the latest data science trends, tools, and packages.
- You have strong technical and programming skills, and are familiar with relevant technologies and languages (e.g., Python, Java, Scala).
- You have knowledge of Lucene-based search engines such as Elasticsearch and Solr, and of NoSQL databases such as Neo4j and MongoDB.
- You are really smart and have some way of proving it (e.g., you hold an MS/M.Tech or PhD in computer science, machine learning, mathematics, statistics, or a related field).
- There is at least one project on your resume that you are extremely proud to present.
- You have at least 4 years' experience driving projects, tackling roadblocks, and navigating projects through to completion.
- Execution: the ability to manage your own time and work effectively with others on projects.
- Communication: excellent verbal and written communication skills, and the ability to communicate technical topics to non-technical audiences.

Good to have:
- Experience in a data-driven environment: leveraging analytics and large amounts of (streaming) data to drive significant business impact.
- Knowledge of MapReduce, Hadoop, Spark, etc.
- Experience creating compelling data visualizations.

Job posted by Naveen Taalanki

Big Data Engineer

Founded 2015
Products and services
6-50 employees
Profitable
Apache Storm
Spark
Apache Kafka
Hadoop
Zookeeper
Kubernetes
Docker
Amazon Web Services (AWS)
Location
Noida
Experience
2 - 7 years
Salary
5 - 12 lacs/annum

Our company is working on some really interesting projects in the Big Data domain across several fields (utilities, retail, finance), and we work with some big corporates and MNCs around the world. As a Big Data Engineer here, you will deal with big data in structured and unstructured form, as well as streaming data from industrial IoT infrastructure. You will work on cutting-edge technologies, explore many others, and contribute back to the open-source community. You will get to know and work on an end-to-end processing pipeline that covers storage, processing, machine learning, visualization, and more.

Job posted by Harsh Choudhary