
Hadoop Developer
Posted by Ramakrishna Murthy


Location

Pune

Experience

3 - 7 years

Salary

INR 10L - 15L

Skills

HDFS
Apache Flume
Apache HBase
Hadoop
Impala
Apache Kafka
SOLR Cloud
Apache Spark

Job description

Securonix is a Big Data security analytics product company, and the only product that delivers real-time user and entity behavior analytics (UEBA) on Big Data.

About the company


Founded

2008

Type

Product

Size

250+ employees

Stage

Bootstrapped

Similar jobs

Software Architect

Founded 2017
Product
51-250 employees
Profitable
Java
Apache Kafka
Jenkins
Relational Database (RDBMS)
NOSQL Databases
Location
Bengaluru (Bangalore)
Experience
10 - 12 years
Salary
30 - 50 lacs/annum

The Challenge: We use technology to build brand-new platforms and services on the latest technology stack to support car rental services worldwide. You will have challenging opportunities to collaborate with interdisciplinary product and engineering teams to solve some of the unique problems in the car rental and sharing experience.

What you'll do – perform as smoothly as our cars:
• Architect and provide solutions to complex problems by designing APIs and microservices
• Design and build multiple services using Java and GoLang
• Build and enhance existing services and APIs, keeping stability, extensibility, availability, and scalability in mind
• Good working knowledge of Agile, microservice development, and Kafka for event delivery is a must
• Hands-on command of advanced data structures and algorithms, along with expert-level Java and GoLang, is a must
• A successful track record of building multiple services and integrating them with their consumers
• Working knowledge of an RDBMS; knowledge of NoSQL is an added advantage

What do you need to succeed? – Your willingness to accelerate:
• 10+ years of experience in backend technologies, with at least the last 5 years in Java and backend API development
• Ability to architect and provide designs for complex problems
• Ability to work with multiple teams to finalize design and architecture and ensure agreement on what is needed
• At least 2+ years of experience with Kafka is a must
• Proficiency coding in Java and GoLang
• Knowledge of CI tools such as Jenkins
• Knowledge of REST APIs and backend services development
• Excellent problem-solving abilities, superior communication skills, and a strong execution and delivery focus
• Hands-on experience with complex data structures and algorithms

Job posted by
Tejaswini Ananth

Big Data Engineer

Founded 2015
Products and services
6-50 employees
Profitable
Spark
Apache Kafka
Hadoop
Pig
HDFS
Location
Noida, NCR (Delhi | Gurgaon | Noida)
Experience
3 - 7 years
Salary
5 - 12 lacs/annum

Together we will create wonderful solutions that deliver value for us and our customers.

Job posted by
Sneha Pandey

Technical Architect/CTO

Founded 2016
Products and services
1-5 employees
Bootstrapped
Python
C/C++
Big Data
Cloud Computing
Technical Architecture
Hadoop
Spark
Cassandra
Location
Mumbai
Experience
5 - 11 years
Salary
15 - 30 lacs/annum

ABOUT US: Arque Capital is a FinTech startup applying AI to finance in domains such as asset management (hedge funds, ETFs, and structured products), robo-advisory, bespoke research, alternate brokerage, and other applications of technology and quantitative methods in big finance.

PROFILE DESCRIPTION:
1. Get the "Tech" in order for the hedge fund: help answer the fundamentals of which technology blocks to use and which platform or technology to choose over another, and help the team visualize the product with the available resources and assets
2. Build, manage, and validate a tech roadmap for our products
3. Architecture practices: at startups, the dynamics change very fast. Making sure that best practices are defined and followed by the team is very important. A CTO may have to act as the garbage collector and clean up code from time to time; reviewing code quality is an important activity the CTO should own
4. Build a progressive learning culture and establish a predictable model for envisioning, designing, and developing products
5. Drive product innovation through research and continuous improvement
6. Build out the technological infrastructure for the hedge fund
7. Hire and build out the technology team
8. Set up and manage the entire IT infrastructure, hardware as well as cloud
9. Ensure company-wide security and IP protection

REQUIREMENTS:
- Computer Science engineer from Tier-I colleges only (IIT, IIIT, NIT, BITS, DHU, Anna University, MU)
- 5-10 years of relevant technology experience (no infra or database-only profiles)
- Expertise in Python and C++ (3+ years minimum)
- 2+ years of experience building and managing Big Data projects
- Experience with technical design and architecture (1+ years minimum)
- Experience with high-performance computing (optional)
- Experience as a tech lead, IT manager, director, VP, or CTO
- 1+ years of experience managing cloud computing infrastructure (Amazon AWS preferred) (optional)
- Ability to work in an unstructured environment
- Looking to work in a small, startup-type environment based out of Mumbai

COMPENSATION: Co-founder status and equity partnership

Job posted by
Hrishabh Sanghvi

Big Data Engineer

Founded 2015
Products and services
6-50 employees
Profitable
Java
Apache Storm
Apache Kafka
Hadoop
Python
Apache Hive
Location
Noida
Experience
1 - 6 years
Salary
4 - 9 lacs/annum

We are a team of Big Data, IoT, ML, and security experts. We are a technology company working in the Big Data analytics domain, ranging from industrial IoT to machine learning and AI. What we are doing is challenging, interesting, and cutting-edge, and we need similarly passionate people to work with us.

Job posted by
Sneha Pandey

Big Data Engineer

Founded 2015
Products and services
6-50 employees
Profitable
AWS CloudFormation
Spark
Apache Kafka
Hadoop
HDFS
Location
Noida
Experience
1 - 7 years
Salary
4 - 9 lacs/annum

We are a team of Big Data, IoT, ML, and security experts. We are a technology company working in the Big Data analytics domain, ranging from industrial IoT to machine learning and AI. What we are doing is challenging, interesting, and cutting-edge, and we need similarly passionate people to work with us.

Job posted by
Sneha Pandey

Backend Developer

Founded 2012
Product
250+ employees
Profitable
Java
PHP
RESTful APIs
NOSQL Databases
Apache Kafka
Cassandra
Location
NCR (Delhi | Gurgaon | Noida)
Experience
4 - 7 years
Salary
13 - 30 lacs/annum

Technically hands-on, with prior experience with scalable architecture:
- 4-8 years of software engineering and product delivery experience, with a strong background in algorithms
- Excellent command of data structures and algorithms
- Exceptional coding skills in an object-oriented programming language (Java preferred)
- Strong problem-solving and analytical skills
- Experience with web technologies: PHP/Java, Python, Linux, Apache, MySQL, Solr, memcache, Redis
- Experience architecting and building real-time, large-scale e-commerce applications
- Experience with high-performance websites catering to millions of daily visits is a plus

Job posted by
Wajahat Ali

Hadoop Developer

Founded 2009
Products and services
250+ employees
Profitable
HDFS
Hbase
Spark
Flume
hive
Sqoop
Scala
Location
Mumbai
Experience
5 - 14 years
Salary
8 - 18 lacs/annum

A US-based multinational company looking for hands-on Hadoop experience.

Job posted by
Neha Mayekar

Enthusiastic Cloud-ML Engineers with a keen sense of curiosity

Founded 2012
Products and services
51-250 employees
Raised funding
Java
Python
Spark
Hadoop
MongoDB
Scala
Natural Language Processing (NLP)
Machine Learning
Location
Bengaluru (Bangalore)
Experience
3 - 12 years
Salary
3 - 25 lacs/annum

We are a start-up in India seeking excellence in everything we do, with an unwavering curiosity and enthusiasm. We build a simplified, new-age, AI-driven Big Data analytics platform for global enterprises and solve their biggest business challenges. Our engineers develop fresh, intuitive solutions, keeping the user at the center of everything. As a Cloud-ML Engineer, you will design and implement ML solutions for customer use cases and solve complex technical customer challenges.

Expectations and tasks:
- Total of 7+ years of experience, with a minimum of 2 years in Hadoop technologies like HDFS, Hive, and MapReduce
- Experience working with recommendation engines, data pipelines, or distributed machine learning, and experience with data analytics and data visualization techniques and software
- Experience with core data science techniques such as regression, classification, or clustering, and experience with deep learning frameworks
- Experience in NLP, R, and Python
- Experience in performance tuning and optimization techniques to process big data from heterogeneous sources
- Ability to communicate clearly and concisely across technology and business teams
- Excellent problem-solving and technical troubleshooting skills
- Ability to handle multiple projects and prioritize tasks in a rapidly changing environment

Technical skills: Core Java, multithreading, collections, OOPS, Python, R, Apache Spark, MapReduce, Hive, HDFS, Hadoop, MongoDB, Scala

We are a retained search firm employed by our client, a technology start-up in Bangalore. Interested candidates can share their resumes with me at Jia@TalentSculpt.com; I will respond within 24 hours. Online assessments and pre-employment screening are part of the selection process.
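For candidates brushing up on the core techniques this role names, simple linear regression can be sketched in a few lines of plain Python. This is only an illustrative refresher, not part of the company's stack; the `fit_line` helper and the sample points are made up for the example.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is covariance(x, y) divided by variance(x).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Points lying exactly on y = 2x + 1 recover slope 2 and intercept 1.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

In practice a library such as scikit-learn or R's `lm` would be used, but the closed-form computation above is what those tools do for the one-variable case.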

Job posted by
Blitzkrieg HR Consulting

Senior Backend Developer

Founded 2013
Product
51-250 employees
Raised funding
Java
NodeJS (Node.js)
PostgreSQL
Postman
Git
Apache
Apache Kafka
Apache Mesos
Location
NCR (Delhi | Gurgaon | Noida)
Experience
5 - 8 years
Salary
20 - 35 lacs/annum

Responsibilities:
- Lead the backend team and mentor junior engineers
- Design and implement REST-based APIs and microservices in Node.js
- Write and maintain scalable, performant code that can be shared across platforms
- Work and communicate with our mobile client developers and project managers, managing priorities and giving input on future features

About you:
- You attended a top university, studying computer science or similar
- You have experience or interest in writing applications in Node.js
- You have strong server-side development experience and experience with databases
- You understand the ins and outs of RESTful web services
- You know your way around the UNIX command line
- You have great communication skills and the ability to work with others
- You are a strong team player, with a do-whatever-it-takes attitude

Job posted by
Gaurav Gunjan

SDET - Software Development Engineer in Test

Founded 2016
Product
6-50 employees
Raised funding
Distributed Systems
Scala
apache spark
Spark SQL
Spark Streaming
Java
Test Automation (QA)
Location
Pune
Experience
7 - 12 years
Salary
0 - 0 lacs/annum

Job Title: Distributed Systems Engineer - SDET
Job Location: Pune, India

Job Description: Are you looking to put your computer science skills to use? Are you looking to work for one of the hottest start-ups in Silicon Valley? Are you looking to define the next-generation data management platform based on Apache Spark? Are you excited by the idea of being a Spark committer? If you answered yes to all of the questions above, we definitely want to talk to you. We are looking to add highly motivated engineers to work as QE software engineers in our product development team in Pune. We work on cutting-edge data management products that transform the way businesses operate. As a distributed systems engineer, you will get to work on defining key elements of our real-time analytics platform, including:
1. Distributed in-memory data management
2. OLTP and OLAP querying in a single platform
3. Approximate query processing over large data sets
4. Online machine learning algorithms applied to streaming data sets
5. Streaming and continuous querying

Requirements:
1. Experience in testing modern SQL and NewSQL products is highly desirable
2. Experience with the SQL language, JDBC, and end-to-end testing of databases
3. Hands-on experience writing SQL queries
4. Experience with database performance benchmarks like TPC-H, TPC-C, and TPC-E is a plus
5. Prior experience benchmarking against Cassandra or MemSQL is a big plus
6. You should be able to program in Java or have some exposure to functional programming in Scala
7. You should care about performance, and by that we mean performance optimizations in a JVM
8. You should be self-motivated and driven to succeed
9. If you are an open-source committer on any project, especially an Apache project, you will fit right in
10. Experience working with Spark, Spark SQL, and Spark Streaming is a BIG plus
11. Plans and authors test plans, and ensures testability is considered by development in all stages of the life cycle
12. Plans, schedules, and tracks the creation of test plans and automation scripts using defined methodologies for manual and/or automated tests
13. Works as a QE team member in troubleshooting, isolating, reproducing, and tracking bugs and verifying fixes
14. Analyzes test results to verify existing functionality and recommends corrective action; documents test results, and manages and maintains defect and test case databases to assist in process improvement and estimation of future releases
15. Performs the assessment and planning of test efforts required for automation of new functions/features under development; influences design changes to improve quality and feature testability
16. If you have solved big, complex problems, we want to talk to you
17. If you are a math geek with a background in statistics and mathematics, and you know what a linear regression is, this just might be the place for you
18. Exposure to stream data processing with Storm or Samza is a plus

Open source contributors: send us your GitHub id.

Product: SnappyData is a new real-time analytics platform that combines probabilistic data structures, approximate query processing, and in-memory distributed data management to deliver powerful analytic querying and alerting capabilities on Apache Spark at a fraction of the cost of traditional big data analytics platforms. SnappyData fuses the Spark computational engine with a highly available, multi-tenanted in-memory database to execute OLAP and OLTP queries on streaming data. Further, SnappyData can store data in a variety of synopsis data structures to provide extremely fast responses using fewer resources. Finally, applications can either submit Spark programs or connect using JDBC/ODBC to run interactive or continuous SQL queries.

Skills: 1. Distributed systems, 2. Scala, 3. Apache Spark, 4. Spark SQL, 5. Spark Streaming, 6. Java, 7. YARN/Mesos

What's in it for you:
1. Cutting-edge work that is ultra-meaningful
2. Colleagues who are the best of the best
3. Meaningful startup equity
4. Competitive base salary
5. Full benefits
6. Casual, fun office

Company Overview: SnappyData is a Silicon Valley-funded startup founded by engineers who pioneered the distributed in-memory data business. It is advised by some of the legends of the computing industry who have been instrumental in creating multiple disruptions that have defined computing over the past 40 years. The engineering team that powers SnappyData built GemFire, one of the industry-leading in-memory data grids, which is used worldwide in mission-critical applications ranging from finance to retail.
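For candidates new to the synopsis data structures mentioned in the product description, the idea can be illustrated with reservoir sampling, one classic synopsis technique: keep a small fixed-size uniform sample of a stream and answer approximate aggregate queries from it instead of the full data. This is a generic sketch in Python for illustration only; the `Reservoir` class is hypothetical and is not SnappyData's actual implementation.

```python
import random

class Reservoir:
    """Fixed-size uniform random sample over an unbounded stream.

    Acts as a synopsis: approximate aggregates (mean, quantiles)
    can be answered from the sample instead of the full data set.
    """

    def __init__(self, capacity, seed=42):
        self.capacity = capacity
        self.sample = []
        self.seen = 0
        self._rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.sample) < self.capacity:
            self.sample.append(item)
        else:
            # Replace a stored element with probability capacity/seen,
            # which keeps every stream item equally likely to be sampled.
            j = self._rng.randrange(self.seen)
            if j < self.capacity:
                self.sample[j] = item

    def approx_mean(self):
        return sum(self.sample) / len(self.sample)

# Stream a million integers but keep only a 1,000-element synopsis;
# approx_mean() lands near the true mean while storing 0.1% of the data.
r = Reservoir(capacity=1000)
for value in range(1_000_000):
    r.add(value)
```

Real approximate-query engines layer more sophisticated synopses (stratified samples, count-min sketches, HyperLogLog) on the same trade-off: bounded memory in exchange for bounded error.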

Job posted by
Ketki Naidu