
Hadoop Administrator
Posted by Ramakrishna Murthy

Locations

Bengaluru (Bangalore)

Experience

2 - 5 years

Salary

INR 5L - 15L

Skills

Hadoop
Cloudera
Hortonworks

Job description

Securonix is a security analytics product company. Our product provides real-time behavior analytics and is built on the following Hadoop components: Kafka, Spark, Impala, and HBase. We support very large clusters for all our customers globally, with full access to each cluster. Cloudera Certification is a big plus.
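
To make the stack concrete, here is a minimal PySpark Structured Streaming sketch of the kind of real-time pipeline the description alludes to, reading events from Kafka and computing windowed per-user activity counts. The broker address, topic name, and event schema are illustrative assumptions, not details from the posting.

```python
# Minimal real-time behavior-analytics sketch with Kafka + Spark.
# Run with the spark-sql-kafka package on the classpath, e.g.:
#   spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.0 ...
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("behavior-analytics-sketch").getOrCreate()

# Hypothetical event schema: who did what, and when.
schema = StructType([
    StructField("user", StringType()),
    StructField("action", StringType()),
    StructField("ts", TimestampType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
          .option("subscribe", "security-events")            # assumed topic
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Sliding-window activity counts per user: a stand-in for the product's
# behavior analytics; real scoring logic would be far richer.
counts = (events
          .withWatermark("ts", "10 minutes")
          .groupBy(window(col("ts"), "5 minutes"), col("user"))
          .count())

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```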

About the company

Securonix

Founded

2008

Type

Product

Size

250+ employees

Stage

Bootstrapped

Similar jobs

Big Data Engineer

Founded 2015
Products and services
6-50 employees
Profitable
Spark
Apache Kafka
Hadoop
Pig
HDFS
Location
Noida, NCR (Delhi | Gurgaon | Noida)
Experience
3 - 7 years
Salary
5 - 12 lacs/annum

Together we will create wonderful solutions which deliver value for us and our customers.

Job posted by Sneha Pandey

Technical Architect/CTO

Founded 2016
Products and services
1-5 employees
Bootstrapped
Python
C/C++
Big Data
Cloud Computing
Technical Architecture
Hadoop
Spark
Cassandra
Location
Mumbai
Experience
5 - 11 years
Salary
15 - 30 lacs/annum

ABOUT US: Arque Capital is a FinTech startup applying AI in finance, in domains such as asset management (hedge funds, ETFs, and structured products), robo-advisory, bespoke research, alternate brokerage, and other applications of technology and quantitative methods in big finance.

PROFILE DESCRIPTION:
1. Get the "tech" in order for the hedge fund: help settle the fundamental choices of technology blocks and platforms, and help the team visualize the product with the available resources and assets.
2. Build, manage, and validate a tech roadmap for our products.
3. Architecture practices: at startups, the dynamics change very fast, so it is important that best practices are defined and followed by the team. The CTO may have to act as the garbage collector and clean up code from time to time; reviewing code quality is an important recurring activity.
4. Build a progressive learning culture and establish a predictable model for envisioning, designing, and developing products.
5. Drive product innovation through research and continuous improvement.
6. Build out the technological infrastructure for the hedge fund.
7. Hire and build out the technology team.
8. Set up and manage the entire IT infrastructure, hardware as well as cloud.
9. Ensure company-wide security and IP protection.

REQUIREMENTS:
- Computer Science engineer from Tier-I colleges only (IIT, IIIT, NIT, BITS, DHU, Anna University, MU)
- 5-10 years of relevant technology experience (no infra- or database-only backgrounds)
- Expertise in Python and C++ (3+ years minimum)
- 2+ years of experience building and managing Big Data projects
- Experience with technical design and architecture (1+ year minimum)
- Experience with high-performance computing (optional)
- Experience as a Tech Lead, IT Manager, Director, VP, or CTO
- 1+ year of experience managing cloud computing infrastructure (Amazon AWS preferred) (optional)
- Ability to work in an unstructured environment
- Looking to work in a small, startup-type environment based out of Mumbai

COMPENSATION: Co-founder status and equity partnership.

Job posted by Hrishabh Sanghvi

Big Data Engineer

Founded 2015
Products and services
6-50 employees
Profitable
Java
Apache Storm
Apache Kafka
Hadoop
Python
Apache Hive
Location
Noida
Experience
1 - 6 years
Salary
4 - 9 lacs/annum

We are a team of Big Data, IoT, ML, and security experts. We are a technology company working in the Big Data analytics domain, ranging from industrial IoT to machine learning and AI. What we are doing is challenging, interesting, and cutting-edge, and we need similarly passionate people to work with us.

Job posted by Sneha Pandey

Big Data Engineer

Founded 2015
Products and services
6-50 employees
Profitable
AWS CloudFormation
Spark
Apache Kafka
Hadoop
HDFS
Location
Noida
Experience
1 - 7 years
Salary
4 - 9 lacs/annum

We are a team of Big Data, IoT, ML, and security experts. We are a technology company working in the Big Data analytics domain, ranging from industrial IoT to machine learning and AI. What we are doing is challenging, interesting, and cutting-edge, and we need similarly passionate people to work with us.

Job posted by Sneha Pandey

Enthusiastic Cloud-ML Engineers with a keen sense of curiosity

Founded 2012
Products and services
51-250 employees
Raised funding
Java
Python
Spark
Hadoop
MongoDB
Scala
Natural Language Processing (NLP)
Machine Learning
Location
Bengaluru (Bangalore)
Experience
3 - 12 years
Salary
3 - 25 lacs/annum

We are a start-up in India seeking excellence in everything we do, with unwavering curiosity and enthusiasm. We build a simplified, new-age, AI-driven Big Data analytics platform for global enterprises and solve their biggest business challenges. Our engineers develop fresh, intuitive solutions, keeping the user at the center of everything. As a Cloud-ML Engineer, you will design and implement ML solutions for customer use cases and solve complex technical customer challenges.

Expectations and tasks:
- 7+ years of total experience, with a minimum of 2 years in Hadoop technologies such as HDFS, Hive, and MapReduce
- Experience working with recommendation engines, data pipelines, or distributed machine learning, and experience with data analytics and data visualization techniques and software
- Experience with core data science techniques such as regression, classification, or clustering, and experience with deep learning frameworks
- Experience in NLP, R, and Python
- Experience with performance tuning and optimization techniques to process big data from heterogeneous sources
- Ability to communicate clearly and concisely across technology and business teams
- Excellent problem-solving and technical troubleshooting skills
- Ability to handle multiple projects and prioritize tasks in a rapidly changing environment

Technical skills: Core Java, multithreading, collections, OOPS, Python, R, Apache Spark, MapReduce, Hive, HDFS, Hadoop, MongoDB, Scala.

We are a retained search firm employed by our client, a technology start-up in Bangalore. Interested candidates can share their resumes with me at Jia@TalentSculpt.com; I will respond within 24 hours. Online assessments and pre-employment screening are part of the selection process.
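
As a small illustration of the core data science techniques the posting lists (clustering in this case), here is a minimal scikit-learn sketch; the synthetic data and the choice of k are illustrative assumptions, not details from the role.

```python
# Minimal clustering sketch with scikit-learn; synthetic data and the
# choice of k=4 are illustrative assumptions, not details from the posting.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Toy data standing in for user/event feature vectors.
X, _ = make_blobs(n_samples=500, centers=4, n_features=8, random_state=42)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=42).fit(X)

# Silhouette score is one common sanity check of cluster quality.
print("silhouette:", silhouette_score(X, kmeans.labels_))
```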

Job posted by Blitzkrieg HR Consulting

Data Scientist

Founded 2017
Product
1-5 employees
Raised funding
Data Science
Python
Hadoop
Elastic Search
Machine Learning
Big Data
Spark
Algorithms
Location
Bengaluru (Bangalore)
Experience
2 - 5 years
Salary
12 - 25 lacs/annum

Responsibilities:
- Experience: 2-5 years
- Design and build the initial version of the offline product, using machine learning to recommend video content to 1M+ user profiles
- Design the personalized recommendation algorithm and optimize the model
- Develop features of the recommendation system
- Analyze user behavior and build up the user portrait and tag system

Desired skills and experience:
- B.S./M.S. degree in computer science, mathematics, statistics, or a similar quantitative field, with a good college background
- 3+ years of work experience in a relevant field (data engineer, R&D engineer, etc.)
- Experience with machine learning and prediction and recommendation techniques
- Experience with Hadoop/MapReduce/Elastic Stack (ELK) and Big Data querying tools such as Pig, Hive, and Impala
- Proficiency in a major programming language (e.g. C/C++/Scala) and/or a scripting language (Python/R)
- Experience with one or more NoSQL databases, such as MongoDB, Cassandra, HBase, Hive, Vertica, or Elasticsearch
- Experience with cloud solutions (AWS), with strong knowledge of Linux and Apache
- Experience with a map-reduce framework such as Spark/EMR
- Experience building reports and/or data visualizations
- Strong communication skills and the ability to discuss the product with PMs and business owners
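
For flavor, here is a tiny item-based collaborative-filtering sketch of the kind of video recommendation the role describes; the toy user-item matrix and all names are hypothetical, not the company's actual system.

```python
# Minimal item-based collaborative filtering; the toy user-item matrix
# and function names are illustrative assumptions only.
import numpy as np

# Rows = users, columns = videos; entries are interaction scores.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

def cosine_sim(M: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between columns (items)."""
    norms = np.linalg.norm(M, axis=0, keepdims=True)
    norms[norms == 0] = 1.0  # avoid division by zero
    Mn = M / norms
    return Mn.T @ Mn

item_sim = cosine_sim(R)

def recommend(user: int, k: int = 2) -> list:
    """Score unseen items by similarity-weighted ratings of seen items."""
    seen = R[user] > 0
    scores = item_sim[:, seen] @ R[user, seen]
    scores[seen] = -np.inf  # do not recommend already-watched videos
    return list(np.argsort(scores)[::-1][:k])

print(recommend(user=0))
```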

Job posted by Xin Lin

Big Data Evangelist

Founded 2016
Products and services
6-50 employees
Profitable
Spark
Hadoop
Apache Kafka
Apache Flume
Scala
Python
MongoDB
Cassandra
Location
Noida
Experience
2 - 6 years
Salary
4 - 12 lacs/annum

Looking for a technically sound and excellent trainer on big data technologies. Get an opportunity to become well known in the industry and gain visibility. Host regular sessions on Big Data related technologies and get paid to learn.

Job posted by Suchit Majumdar

Data Scientist

Founded 2013
Product
6-50 employees
Raised funding
Big Data
Data Science
Machine Learning
R Programming
Python
Haskell
Hadoop
Location
Mumbai
Experience
3 - 7 years
Salary
5 - 15 lacs/annum

Data Scientist - We are looking for a candidate to build great recommendation engines and power an intelligent m.Paani user journey.

Responsibilities:
- Data mining using methods like associations, correlations, inferences, clustering, graph analysis, etc.
- Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume
- Design and implement machine learning, information extraction, and probabilistic matching algorithms and models
- Care about designing the full machine learning pipeline
- Extend the company's data with third-party sources
- Enhance data collection procedures
- Process, clean, and verify collected data
- Perform ad hoc analysis of the data and present clear results
- Create advanced analytics products that provide actionable insights

The individual - we are looking for a candidate with the following skills, experience, and attributes:
- 2+ years of work experience in machine learning
- Educational qualification relevant to the role: a degree in statistics, certificate courses in Big Data, machine learning, etc.
- Knowledge of machine learning techniques and algorithms
- Knowledge of languages and toolkits like Python, R, and NumPy
- Knowledge of data visualization tools like D3.js and ggplot2
- Knowledge of query languages like SQL, Hive, and Pig
- Familiarity with Big Data architecture and tools like Hadoop, Spark, and MapReduce
- Familiarity with NoSQL databases like MongoDB, Cassandra, and HBase
- Good applied statistics skills: distributions, statistical testing, regression, etc.

Compensation and logistics: this is a full-time opportunity. Compensation will be in line with a startup and based on qualifications and experience. The position is based in Mumbai, India, and the candidate must live in Mumbai or be willing to relocate.
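
Since the role emphasizes designing the full machine learning pipeline, here is a minimal scikit-learn sketch of one: preprocessing and a classifier chained together, then cross-validated. The synthetic data and parameter choices are illustrative assumptions only.

```python
# Minimal end-to-end ML pipeline sketch; synthetic data and parameters
# are illustrative assumptions, not details from the posting.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),                 # clean/normalize features
    ("clf", LogisticRegression(max_iter=1000)),  # simple baseline model
])

# 5-fold cross-validation as a quick check of the whole pipeline.
print(cross_val_score(pipe, X, y, cv=5).mean())
```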

Job posted by Julie K

Database Architect

Founded 2017
Products and services
6-50 employees
Raised funding
ETL
Data Warehouse (DWH)
DWH Cloud
Hadoop
Apache Hive
Spark
MongoDB
PostgreSQL
Location
Bengaluru (Bangalore)
Experience
5 - 10 years
Salary
10 - 20 lacs/annum

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization, to drive high-visibility, cross-division outcomes. Expected deliverables include developing Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key responsibilities:
- Create a GRAND Data Lake and warehouse that pools the data from GRAND's different regions and stores in GCC
- Ensure source data quality measurement, enrichment, and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the cloud DWH (AWS/Azure/Google) and infrastructure

Skills needed:
- Very strong SQL; demonstrated experience with RDBMSs (e.g. Postgres) and NoSQL stores (e.g. MongoDB); Unix shell scripting preferred
- Experience with UNIX and comfort working with the shell (bash or Korn preferred)
- Good understanding of data warehousing concepts and Big Data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Align with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments
- Work with data delivery teams to set up new Hadoop users, including setting up Linux users and setting up and testing HDFS, Hive, Pig, and MapReduce access for the new users
- Cluster maintenance, including creation and removal of nodes, using tools like Ganglia, Nagios, and Cloudera Manager Enterprise
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screen Hadoop cluster job performance and do capacity planning
- Monitor Hadoop cluster connectivity and security
- File system management and monitoring; HDFS support and maintenance
- Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required
- Define, develop, document, and maintain Hive-based ETL mappings and scripts
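
As a flavor of the Hive-based ETL work described above, here is a minimal PySpark sketch that reads a raw Hive table, applies simple quality rules, and writes a curated table; the database, table, and column names are hypothetical, not from the posting.

```python
# Minimal Hive-style ETL sketch in PySpark; table and column names
# (raw_db.sales_raw, curated_db.sales_clean, order_id, amount, region)
# are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("etl-sketch")
         .enableHiveSupport()  # requires a configured Hive metastore
         .getOrCreate())

raw = spark.table("raw_db.sales_raw")

clean = (raw
         .dropDuplicates(["order_id"])                     # de-duplicate
         .filter(F.col("amount").isNotNull())              # basic quality rule
         .withColumn("region", F.upper(F.col("region"))))  # normalize values

# Write back as a managed Hive table for downstream analytics.
clean.write.mode("overwrite").saveAsTable("curated_db.sales_clean")
```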

Job posted by Rahul Malani

Data Scientist

Founded 2015
Services
6-50 employees
Profitable
Big Data
Data Science
Machine Learning
R Programming
Python
Haskell
Hadoop
Location
Hyderabad
Experience
6 - 10 years
Salary
10 - 15 lacs/annum

The client is one of the largest communications technology companies in the world, operating America's largest 4G LTE wireless network and the nation's premier all-fiber broadband network.

Job posted by Sangita Deka