
Hadoop Jobs

Explore top Hadoop job opportunities at top companies and startups. All jobs are posted by verified employees, who can be contacted directly below.

Senior Engineer - Java/Hadoop

Founded 2012 | Product | 250+ employees | Profitable
Skills: Java, Hadoop, NoSQL Databases, Cassandra, Spark, MapReduce, J2EE, Data Structures
Location: Bengaluru (Bangalore)
Experience: 6 - 10 years
Salary: 20 - 50 lacs/annum

Our Company
We help people around the world save money and live better -- anytime and anywhere -- in retail stores, online and through their mobile devices. Each week, more than 220 million customers and members visit our 11,096 stores under 69 banners in 27 countries, and our e-commerce websites in 10 countries. With last fiscal revenues of approximately $486 billion, Walmart employs 2.2 million people worldwide. At Walmart Labs in Bangalore, we use the latest technology stack to build brand-new platforms and services that support both our stores and our e-commerce businesses worldwide.

Our Team
The Global Data and Analytics Platforms (GDAP) team at Walmart Labs in Bangalore provides the Data Foundation Infrastructure, Visualization Portal, Machine Learning Platform, Customer Platform and Data Science products that form part of the core platforms and services driving the Walmart business. The group also develops analytical products for several verticals such as supply chain, pricing, customer and HR. Our team, part of GDAP Bangalore, is responsible for the Customer Platform (a one-stop shop for all customer analytics for Walmart stores), a Machine Learning Platform that provides end-to-end infrastructure for data scientists to build ML solutions, and an Enterprise Analytics group that provides analytics for HR, Global Governance and Security. The team is responsible for time-critical, business-critical and highly reliable systems that influence almost every part of the Walmart business. The team is spread over multiple locations, and the Bangalore centre owns critical end-to-end pieces that we design, build and support.

Your Opportunity
As part of the Customer Analytics team at Walmart Labs, you'll have the opportunity to make a difference as a member of a development team that builds products at Walmart scale, forming the foundation of customer analytics across Walmart. A key attribute of this job is that you are required to continuously innovate and apply technology to provide a 360-degree business view of Walmart customers.

Your Responsibilities
• Design, build, test and deploy cutting-edge solutions at scale, impacting millions of customers worldwide, and drive value from data at Walmart scale
• Interact with Walmart engineering teams across geographies to leverage expertise and contribute to the tech community
• Engage with Product Management and Business to drive the agenda, set your priorities and deliver awesome product features to keep the platform ahead of market scenarios
• Identify the right open-source tools to deliver product features by performing research, POCs/pilots and/or interacting with various open-source forums
• Develop and/or contribute features that enable customer analytics at Walmart scale
• Deploy and monitor products on cloud platforms
• Develop and implement best-in-class monitoring processes that enable data applications to meet SLAs

Our Ideal Candidate
You have a deep interest in and passion for technology. You love writing and owning code, and you enjoy working with people who will keep challenging you at every stage. You have strong problem-solving, analytical and decision-making skills, along with excellent communication and interpersonal skills. You are self-driven and motivated, with the desire to work in a fast-paced, results-driven agile environment with varied responsibilities.

Your Qualifications
• Bachelor's degree with 7+ years of experience, or Master's degree with 6+ years of experience, in Computer Science or a related field
• Expertise in the Big Data ecosystem, with deep experience in Java, Hadoop, Spark, Storm, Cassandra, NoSQL etc.
• Expertise in MPP architecture and knowledge of an MPP engine (Spark, Impala etc.)
• Experience building scalable, highly available distributed systems in production
• Understanding of stream processing, with expert knowledge of Kafka and either Spark Streaming or Storm
• Experience with SOA
• Knowledge of graph databases (Neo4j, Titan) is definitely a plus
• Knowledge of software engineering best practices, with experience implementing CI/CD and log aggregation/monitoring/alerting for production systems
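The qualifications above pair Kafka with either Spark Streaming or Storm for stream processing. As a rough illustration of that pattern (a minimal sketch, not Walmart's actual code), here is a Spark Structured Streaming job in Scala that consumes a Kafka topic and maintains running event counts; the broker address, topic name and payload handling are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath.

```scala
import org.apache.spark.sql.SparkSession

object ClickstreamCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("ClickstreamCounts")
      .getOrCreate()

    // Read a stream of events from a Kafka topic
    // (broker address and topic name are hypothetical).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092")
      .option("subscribe", "customer-events")
      .load()
      .selectExpr("CAST(value AS STRING) AS event")

    // Maintain a running count per distinct event payload.
    val counts = events.groupBy("event").count()

    // Stream the counts to the console; a production job would
    // write to a durable sink such as Cassandra or HDFS instead.
    counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```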

Job posted by Lakshman Dornala

Data Science Engineer

Founded 2017 | Product | 6-50 employees | Profitable
Skills: Data Structures, Algorithms, Scala, Machine Learning (ML), Deep Learning, Spark, Big Data, Hadoop
Location: Bengaluru (Bangalore)
Experience: 0 - 3 years
Salary: 12 - 20 lacs/annum

Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. For this role, a credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and breathes data and algorithms, loves to play with big-data engineering, and is hands-on with Apache Spark, Kafka, RDBMS/NoSQL databases, big data analytics and Unix production servers. A Tier-1 college (BE from IITs, BITS-Pilani, top NITs or IIITs, or an MS from Stanford, Berkeley, CMU or UW-Madison) or an exceptionally bright work history is a must. Let us know if you are interested in exploring the profile further.

Job posted by Shobhit Agarwal

Staff Engineer - Java/Hadoop

Founded 2012 | Product | 250+ employees | Profitable
Skills: Java, Hadoop, Spark, Big Data, J2EE, Data Structures, NoSQL Databases, Cassandra
Location: Bengaluru (Bangalore)
Experience: 8 - 14 years
Salary: 30 - 70 lacs/annum

The company, team, opportunity, responsibilities and ideal-candidate description for this role are identical to the Senior Engineer - Java/Hadoop listing above. The qualifications differ only in requiring a Bachelor's degree with 8+ years of experience (or a Master's degree with 6+ years) in Computer Science or a related field.

Job posted by Lakshman Dornala

Senior Software Engineer

Founded 2017 | Product | 6-50 employees | Profitable
Skills: Java, Data Structures, Algorithms, Scala, Apache Kafka, RabbitMQ, Hadoop
Location: Bengaluru (Bangalore)
Experience: 2 - 7 years
Salary: 10 - 15 lacs/annum

Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. The founding team consists of BITS Pilani alumni with experience creating global startup success stories. The core team we are building consists of some of the best minds in India in artificial intelligence research and data engineering. We are looking for candidates for multiple roles, with 2-7 years of research or large-scale production implementation experience and: rock-solid algorithmic capabilities; production deployments for massively large-scale systems, real-time personalization, big data analytics and semantic search; or credible research experience innovating new ML algorithms and neural nets. A GitHub profile link is highly valued. For the right fit into the Couture.ai family, compensation is no bar.

Job posted by Shobhit Agarwal

Lead Data Engineer

Founded 2017 | Product | 6-50 employees | Profitable
Skills: Spark, Apache Kafka, Hadoop, TensorFlow, Scala, Machine Learning (ML), OpenStack, MXNet
Location: Bengaluru (Bangalore)
Experience: 3 - 9 years
Salary: 25 - 50 lacs/annum

Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. For this role, a credible display of innovation in past projects is a must.

We are looking for hands-on leaders in data engineering with 5-11 years of research or large-scale production implementation experience and:
- Proven expertise in Spark, Kafka and the Hadoop ecosystem.
- Rock-solid algorithmic capabilities.
- Production deployments for massively large-scale systems, real-time personalization, big data analytics and semantic search.
- Expertise in containerization (Docker, Kubernetes) and cloud infrastructure, preferably OpenStack.
- Experience with Spark ML, TensorFlow (and TF Serving), MXNet, Scala, Python, NoSQL databases, Kubernetes and ElasticSearch/Solr in production.

A Tier-1 college (BE from IITs, BITS-Pilani, IIITs, top NITs, DTU or NSIT, or an MS from Stanford, UC, MIT, CMU, UW-Madison, ETH or other top global schools) or an exceptionally bright work history is a must. Let us know if you are interested in exploring the profile further.

Job posted by Shobhit Agarwal

Senior Data Engineer

Founded 2017 | Product | 6-50 employees | Profitable
Skills: Shell Scripting, Apache Kafka, TensorFlow, Spark, Hadoop, Elasticsearch, MXNet, SMACK
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: 20 - 40 lacs/annum

Couture.ai is building a patent-pending AI platform targeted at vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. For this role, a credible display of innovation in past projects is a must.

We are looking for candidates for multiple roles, with 2-8 years of research or large-scale production implementation experience and:
- Proven expertise in Spark, Kafka and the Hadoop ecosystem.
- Rock-solid algorithmic capabilities.
- Production deployments for massively large-scale systems, real-time personalization, big data analytics and semantic search.
- Or experience with Spark ML, TensorFlow (and TF Serving), MXNet, Scala, Python, NoSQL databases, Kubernetes or ElasticSearch/Solr in production.

A Tier-1 college (BE from IITs, BITS-Pilani, IIITs or top NITs, or an MS from Stanford, Berkeley, CMU or UW-Madison) or an exceptionally bright work history is a must. Let us know if you are interested in exploring the profile further.

Job posted by Shobhit Agarwal

Big Data - Hadoop

Founded 2011 | Product | 51-250 employees | Bootstrapped
Skills: Apache Hive, Hadoop, Java, Apache Kafka
Location: Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: 12 - 30 lacs/annum

Technical Lead - Big Data Analytics

We are looking for a senior engineer to work on our next-generation marketing analytics platform. The engineer should have working experience handling big sets of raw data and transforming them into meaningful insights using tools such as Hive/Presto/Spark, Redshift, and Kafka/Kinesis.

LeadSquared is a leading customer-acquisition SaaS platform used by over 15,000 users across 25 countries to run their sales and marketing processes. Our goal is to have a million+ users on our platform in the next 5 years, which is an extraordinary and exciting challenge for the engineering team to work on.

The Role
LeadSquared is looking for a senior engineer to be part of the Marketing Analytics platform, where we are building a system that gathers multi-channel customer behavior data and generates meaningful insights and actions to eventually accelerate revenues. The individual will work in a small team to build the system to ingest large volumes of data, and to set up ways to transform the data to generate insights as well as real-time interactive analytics, as sketched after this list.

Requirements
• Passion for building and delivering great software
• Ability to work in a small team and take full ownership and responsibility for critical projects
• 5+ years of experience designing and building business applications in a data-driven environment
• Strong software development skills in one or more programming languages (Python, Java or C#)
• At least 1 year of experience with distributed analytic processing technologies such as Hadoop, Hive, Pig, Presto, MapReduce, Kafka and Spark

Basic Qualifications
• Strong understanding of distributed computing principles
• Proficiency with distributed file/object storage systems like HDFS
• Hands-on experience with computation frameworks like Spark Streaming and MapReduce v2
• Effectively implemented a big data ingestion and transformation pipeline, e.g. Kafka, Kinesis, Fluentd, Logstash or the ELK stack
• Database proficiency and strong experience with a NoSQL data store such as MongoDB, HBase or Cassandra
• Hands-on working knowledge of data warehouse systems, e.g. Hive or AWS Redshift
• Participated in scaling and processing of large data sets (on the order of petabytes)

Preferred Qualifications
• Expert-level proficiency in SQL; ability to perform complex data analysis on large volumes of data
• Understanding of ad-hoc interactive query engines like Apache Drill, Presto, Google BigQuery and AWS Athena
• Exposure to one or more search stores like Solr or ElasticSearch is a plus
• Experience working with distributed messaging systems like RabbitMQ
• Exposure to infrastructure automation tools like Chef
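To make the ingestion-and-transformation work described above concrete, here is a minimal batch sketch in Scala using Spark SQL, an illustration of the general pattern rather than LeadSquared's actual pipeline; the HDFS path, event schema and warehouse table name are all hypothetical.

```scala
import org.apache.spark.sql.SparkSession

object DailyActiveUsers {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("DailyActiveUsers")
      .enableHiveSupport() // lets the job write managed warehouse tables
      .getOrCreate()

    // Ingest raw behaviour events landed on HDFS
    // (path and column names are hypothetical).
    val raw = spark.read.json("hdfs:///data/raw/events/")
    raw.createOrReplaceTempView("events")

    // Transform: one row per day and channel, with distinct-user counts.
    val daily = spark.sql(
      """SELECT to_date(ts) AS day, channel, COUNT(DISTINCT user_id) AS dau
        |FROM events
        |GROUP BY to_date(ts), channel""".stripMargin)

    // Load the aggregate into a Hive table for interactive analytics.
    daily.write.mode("overwrite").saveAsTable("analytics.daily_active_users")
  }
}
```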

Job posted by Vish As

Technical Architect/CTO

Founded 2016 | Products and services | 1-5 employees | Bootstrapped
Skills: Python, C/C++, Big Data, Cloud Computing, Technical Architecture, Hadoop, Spark, Cassandra
Location: Mumbai
Experience: 5 - 11 years
Salary: 15 - 30 lacs/annum

ABOUT US:
Arque Capital is a FinTech startup working with AI in finance, in domains like asset management (hedge funds, ETFs and structured products), robo-advisory, bespoke research, alternate brokerage, and other applications of technology and quantitative methods in big finance.

PROFILE DESCRIPTION:
1. Get the "tech" in order for the hedge fund: help answer the fundamentals of which technology blocks to use and why to choose one platform or technology over another, and help the team visualize the product with the available resources and assets
2. Build, manage and validate a tech roadmap for our products
3. Architecture practices: at startups, the dynamics change very fast, so making sure that best practices are defined and followed by the team is very important. The CTO may have to play garbage collector and clean up the code from time to time; reviewing code quality is an important activity the CTO should own
4. Build a progressive learning culture and establish a predictable model of envisioning, designing and developing products
5. Drive product innovation through research and continuous improvement
6. Build out the technological infrastructure for the hedge fund
7. Hire and build out the technology team
8. Set up and manage the entire IT infrastructure, hardware as well as cloud
9. Ensure company-wide security and IP protection

REQUIREMENTS:
• Computer Science engineer from Tier-I colleges only (IIT, IIIT, NIT, BITS, DHU, Anna University, MU)
• 5-10 years of relevant technology experience (no infra or database persons)
• Expertise in Python and C++ (3+ years minimum)
• 2+ years of experience building and managing Big Data projects
• Experience with technical design & architecture (1+ years minimum)
• Experience with high-performance computing - OPTIONAL
• Experience as a Tech Lead, IT Manager, Director, VP or CTO
• 1+ years of experience managing cloud computing infrastructure (Amazon AWS preferred) - OPTIONAL
• Ability to work in an unstructured environment
• Looking to work in a small, startup-type environment based out of Mumbai

COMPENSATION: Co-founder status and equity partnership

Job posted by Hrishabh Sanghvi

Big Data Engineer

Founded 2015 | Products and services | 6-50 employees | Profitable
Skills: Java, Apache Storm, Apache Kafka, Hadoop, Python, Apache Hive
Location: Noida
Experience: 1 - 6 years
Salary: 4 - 9 lacs/annum

We are a team of Big Data, IoT, ML and security experts. We are a technology company working in the Big Data analytics domain, ranging from industrial IoT to machine learning and AI. What we are doing is really challenging, interesting and cutting-edge, and we need similarly passionate people to work with us.

Job posted by Sneha Pandey

Big Data Engineer

Founded 2015 | Products and services | 6-50 employees | Profitable
Skills: AWS CloudFormation, Spark, Apache Kafka, Hadoop, HDFS
Location: Noida
Experience: 1 - 7 years
Salary: 4 - 9 lacs/annum

We are a team of Big Data, IoT, ML and security experts. We are a technology company working in the Big Data analytics domain, ranging from industrial IoT to machine learning and AI. What we are doing is really challenging, interesting and cutting-edge, and we need similarly passionate people to work with us.

Job posted by Sneha Pandey

Data Scientist

Founded 2017 | Product | 1-5 employees | Raised funding
Skills: Data Science, Python, Hadoop, Elasticsearch, Machine Learning (ML), Big Data, Spark, Algorithms
Location: Bengaluru (Bangalore)
Experience: 4 - 8 years
Salary: 20 - 30 lacs/annum

## Responsibilities
* Experience: 4-8 years
* Design and build the initial version of the offline product, using machine learning to recommend video content to 1M+ user profiles
* Design the personalized recommendation algorithm and optimize the model
* Develop features of the recommendation system
* Analyze user behavior and build up the user portrait and tag system

## Desired Skills and Experience
* B.S./M.S. degree in computer science, mathematics, statistics or a similar quantitative field, with a good college background
* 3+ years of work experience in a relevant field (data engineer, R&D engineer, etc.)
* Experience in machine learning and prediction & recommendation techniques
* Experience with Hadoop/MapReduce/Elastic Stack/ELK and Big Data querying tools such as Pig, Hive and Impala
* Proficiency in a major programming language (e.g. C/C++/Scala) and/or a scripting language (Python/R)
* Experience with one or more NoSQL databases, such as MongoDB, Cassandra, HBase, Hive, Vertica or Elasticsearch
* Experience with cloud solutions/AWS, with strong knowledge of Linux and Apache
* Experience with a MapReduce framework such as Spark or EMR
* Experience building reports and/or data visualizations
* Strong communication skills and the ability to discuss the product with PMs and business owners
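One common way to implement the recommendation model described above is collaborative filtering; as a minimal sketch (an assumption about approach, not necessarily this team's), here is Spark MLlib's ALS applied to implicit watch feedback. The input path, column names and rank are hypothetical.

```scala
import org.apache.spark.ml.recommendation.ALS
import org.apache.spark.sql.SparkSession

object VideoRecommender {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("VideoRecommender")
      .getOrCreate()

    // Hypothetical implicit-feedback data: (user_id, video_id, watch_score).
    val ratings = spark.read.parquet("hdfs:///data/watch_scores/")

    // Collaborative filtering via alternating least squares.
    val als = new ALS()
      .setUserCol("user_id")
      .setItemCol("video_id")
      .setRatingCol("watch_score")
      .setImplicitPrefs(true) // watch time is implicit, not explicit, feedback
      .setRank(32)

    val model = als.fit(ratings)

    // Produce the top 10 video recommendations for every user profile.
    val recs = model.recommendForAllUsers(10)
    recs.write.mode("overwrite").parquet("hdfs:///data/recommendations/")
  }
}
```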

Job posted by Xin Lin

Big Data Evangelist

Founded 2016 | Products and services | 6-50 employees | Profitable
Skills: Spark, Hadoop, Apache Kafka, Apache Flume, Scala, Python, MongoDB, Cassandra
Location: Noida
Experience: 2 - 6 years
Salary: 4 - 12 lacs/annum

Looking for a technically sound, excellent trainer on big data technologies. This is an opportunity to become well known in the industry and gain visibility. Host regular sessions on Big Data-related technologies and get paid to learn.

Job posted by Suchit Majumdar

Database Architect

Founded 2017 | Products and services | 6-50 employees | Raised funding
Skills: ETL, Data Warehouse (DWH), DWH Cloud, Hadoop, Apache Hive, Spark, MongoDB, PostgreSQL
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: 10 - 20 lacs/annum

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include developing Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake (see the ETL sketch after this list).

Key Responsibilities:
- Create a GRAND Data Lake and Warehouse which pools all the data from GRAND's different regions and stores in GCC
- Ensure source data quality measurement, enrichment and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure

Skills Needed:
- Very strong in SQL, with demonstrated experience with RDBMSs (e.g. SQL, Postgres, MongoDB); Unix shell scripting preferred
- Experience with Unix and comfort working with the shell (bash or KRON preferred)
- Good understanding of data warehousing concepts and big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Aligning with the systems engineering team to propose and deploy the new hardware and software environments required for Hadoop, and to expand existing environments
- Working with data delivery teams to set up new Hadoop users, including setting up Linux users and setting up and testing HDFS, Hive, Pig and MapReduce access for the new users
- Cluster maintenance, as well as creation and removal of nodes, using tools like Ganglia, Nagios and Cloudera Manager Enterprise
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screening Hadoop cluster job performance and capacity planning
- Monitoring Hadoop cluster connectivity and security
- File system management and monitoring
- HDFS support and maintenance
- Collaborating with application teams to install operating system and Hadoop updates, patches and version upgrades when required
- Defining, developing, documenting and maintaining Hive-based ETL mappings and scripts
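A minimal sketch of the kind of ETL routine described above, assuming Spark with Hive support: extract a source table over JDBC, apply a transformation, and load into a warehouse table. The connection string, credentials, column names and table names are all hypothetical; this illustrates the pattern, not GRAND's actual jobs.

```scala
import org.apache.spark.sql.SparkSession

object OrdersToWarehouse {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("OrdersToWarehouse")
      .enableHiveSupport()
      .getOrCreate()

    // Extract: pull the source table over JDBC
    // (connection details and table names are hypothetical).
    val orders = spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://db-host:5432/retail")
      .option("dbtable", "public.orders")
      .option("user", "etl_user")
      .option("password", sys.env("ETL_DB_PASSWORD"))
      .load()

    // Transform: keep completed orders and normalise the column name
    // expected by the warehouse schema.
    val cleaned = orders
      .filter("status = 'COMPLETED'")
      .withColumnRenamed("order_ts", "order_date")

    // Load: append into the Hive table that feeds the data lake.
    cleaned.write.mode("append").saveAsTable("dwh.orders")
  }
}
```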

Job posted by Rahul Malani

Hadoop Administrator

Founded 2008 | Product | 250+ employees | Bootstrapped
Skills: Hadoop, Cloudera, Hortonworks
Location: Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: 5 - 15 lacs/annum

Securonix is a security analytics product company. Our product provides real-time behavior analytics capabilities and uses the following Hadoop components: Kafka, Spark, Impala and HBase. We support very large deployments for all our customers globally, with full access to the cluster. Cloudera certification is a big plus.

Job posted by Ramakrishna Murthy

Hadoop Developer

Founded 2008 | Product | 250+ employees | Bootstrapped
Skills: HDFS, Apache Flume, Apache HBase, Hadoop, Impala, Apache Kafka, SolrCloud, Apache Spark
Location: Pune
Experience: 3 - 7 years
Salary: 10 - 15 lacs/annum

Securonix is a Big Data security analytics product company, with the only product that delivers real-time behavior analytics (UEBA) on Big Data.

Job posted by Ramakrishna Murthy

Big Data Engineer

Founded 2015 | Products and services | 6-50 employees | Profitable
Skills: Apache Storm, Spark, Apache Kafka, Hadoop, ZooKeeper, Kubernetes, Docker, Amazon Web Services (AWS)
Location: Noida
Experience: 2 - 7 years
Salary: 5 - 12 lacs/annum

Our company is working on some really interesting projects in the Big Data domain in various fields (utility, retail, finance). We are working with some big corporates and MNCs around the world. While working here as a Big Data Engineer, you will deal with big data in structured and unstructured forms, as well as streaming data from industrial IoT infrastructure. You will work on cutting-edge technologies and explore many others, while also contributing back to the open-source community. You will get to know and work on an end-to-end processing pipeline that covers all types of work: storing, processing, machine learning, visualization, etc.
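As a rough sketch of consuming the streaming IoT data mentioned above (an illustration only, not the company's actual code), here is a minimal Kafka consumer in Scala; the broker address, consumer group and topic name are hypothetical.

```scala
import java.time.Duration
import java.util.Properties

import org.apache.kafka.clients.consumer.KafkaConsumer

import scala.jdk.CollectionConverters._

object SensorStreamReader {
  def main(args: Array[String]): Unit = {
    // Consumer configuration (broker address and group id are hypothetical).
    val props = new Properties()
    props.put("bootstrap.servers", "broker-1:9092")
    props.put("group.id", "iot-ingestion")
    props.put("key.deserializer",
      "org.apache.kafka.common.serialization.StringDeserializer")
    props.put("value.deserializer",
      "org.apache.kafka.common.serialization.StringDeserializer")

    val consumer = new KafkaConsumer[String, String](props)
    consumer.subscribe(List("sensor-readings").asJava) // hypothetical topic

    // Poll the topic continuously, handing each raw reading to the
    // downstream processing pipeline (printed here for brevity).
    while (true) {
      val records = consumer.poll(Duration.ofMillis(500))
      records.asScala.foreach { r =>
        println(s"device=${r.key} payload=${r.value}")
      }
    }
  }
}
```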

Job posted by Harsh Choudhary

HBase Architect Developer

Founded 2017 | Products and services | 6-50 employees | Bootstrapped
Skills: Apache HBase, Hadoop, MapReduce
Location: Bengaluru (Bangalore)
Experience: 1 - 3 years
Salary: 6 - 20 lacs/annum

www.aaknet.co.in/careers/careers-at-aaknet.html
You are extraordinary, a rock star who has hardly found a place to leverage or challenge your potential and has not yet spotted a sky-rocketing opportunity? Come play with us. Face the challenges we can throw at you; chances are you might be humbled (positively). Do not take it that seriously, though! Please be informed that we rate CHARACTER and attitude as highly as, if not higher than, your great skills, experience and sharpness. :) Best wishes & regards, Team Aak!

Job posted by Debdas Sinha

Freelance Faculty

Founded 2009 | Product | 250+ employees | Profitable
Skills: Java, Amazon Web Services (AWS), Big Data, Corporate Training, Data Science, Digital Marketing, Hadoop
Location: Anywhere, United States, Canada
Experience: 3 - 10 years
Salary: 2 - 10 lacs/annum

To introduce myself, I head Global Faculty Acquisition for Simplilearn.

About my company: Simplilearn has transformed 500,000+ careers across 150+ countries with 400+ courses, and we are a Registered Professional Education Provider offering PMI-PMP, PRINCE2, ITIL (Foundation, Intermediate & Expert), MSP, COBIT, Six Sigma (GB, BB & Lean Management), Financial Modeling with MS Excel, CSM, PMI-ACP, RMP, CISSP, CTFL, CISA, CFA Level 1, CCNA, CCNP, Big Data Hadoop, CBAP, iOS, TOGAF, Tableau, Digital Marketing, Data Scientist with Python, Data Science with SAS & Excel, Big Data Hadoop Developer & Administrator, Apache Spark and Scala, Tableau Desktop 9, Agile Scrum Master, Salesforce Platform Developer, Azure & Google Cloud. Our official website: www.simplilearn.com

If you're interested in teaching, interacting, sharing real-life experiences and have a passion to transform careers, please join hands with us.

Onboarding Process
• Send your updated CV to my email ID, with copies of the relevant certificates.
• Sample e-learning access will be shared, with a 15-day trial after your registration on our website.
• My subject matter expert will evaluate you on your areas of expertise over a telephone conversation of 15 to 20 minutes.
• Commercial discussion.
• We will register you for our ongoing online session to introduce you to our course content and the Simplilearn style of teaching.
• A demo will be conducted to check your training style and internet connectivity.
• Freelancer Master Service Agreement.

Payment Process
• Once a workshop (or the last day of training for the batch) is completed, you share your invoice.
• An automated tracking ID will be shared from our automated ticketing system.
• Our faculty group will verify the details provided and forward the invoice to our internal finance team to process your payment; if any additional information is required, we will coordinate with you.
• Payment will be processed within 15 working days, per policy, from the date the invoice is received.

Please share your updated CV to proceed to the next step of the onboarding process.

Job posted by STEVEN JOHN

Big Data Engineer

Founded 2014 | Products and services | 51-250 employees | Profitable
Skills: Spark Streaming, Spark SQL, Java, Hadoop, Scala, Spark
Location: Pune
Experience: 4 - 8 years
Salary: 5 - 16 lacs/annum

Greetings from InfoVision Labs. InfoVision was founded in 1995 by technology professionals with a vision to provide quality, cost-effective IT solutions worldwide. InfoVision is a global IT services and solutions company with a primary focus on strategic resources, enterprise applications and technology solutions. Our core practice areas include application security, business analytics, visualization & collaboration, and wireless & IP communications. Our IT services cover the full range of enterprise needs, from staffing to solutions. Over the past decade, our ability to serve our clients has steadily evolved; it now covers multiple industries, numerous geographies and flexible delivery models, as well as state-of-the-art technologies. InfoVision opened its development and delivery center in Pune in 2014 and has been expanding with project engagements with clients based in the US and India. We can offer the right individuals an industry-leading package and fast career-growth prospects. Please get to know us at http://infovisionlabs.com/about/

Job posted by Ankita Lonagre

Big Data

Founded 2014 | Products and services | 51-250 employees | Profitable
Skills: Hadoop, Scala, Spark
Location: Pune
Experience: 5 - 10 years
Salary: 5 - 5 lacs/annum

We at InfoVision Labs are passionate about technology and what our clients would like to get accomplished. We continuously strive to understand business challenges and the changing competitive landscape, and how cutting-edge technology can help position our clients at the forefront of the competition. We are a fun-loving team of usability experts and software engineers, focused on mobile technology, responsive web solutions and cloud-based solutions.

Job Responsibilities:
◾ Minimum 3 years of experience in Big Data skills required
◾ Complete life-cycle experience with Big Data is highly preferred
◾ Skills: Hadoop, Spark, R, Hive, Pig, HBase and Scala
◾ Excellent communication skills
◾ Ability to work independently with no supervision

Job posted by Shekhar Singh kshatri

Data Scientist

Founded 2014 | Product | 6-50 employees | Profitable
Skills: R, Python, Big Data, Data Science, Hadoop, Machine Learning (ML), Haskell
Location: Ahmedabad
Experience: 3 - 7 years
Salary: 5 - 12 lacs/annum

Job Role
Develop and refine algorithms for machine learning from large datasets. Write offline as well as efficient runtime programs for meaning extraction and real-time response systems. Develop and improve ad targeting based on various criteria like demographics, location, user interests and many more. Design and develop techniques for handling real-time budget and campaign updates. Be open to learning new technologies. Collaborate with team members in building products.

Skills Required
• MS/PhD in Computer Science or another highly quantitative field
• Minimum 8-10 years of hands-on experience with different machine-learning techniques
• Strong expertise in big-data processing (you should be familiar with a combination of Kafka, Storm, Logstash, Elasticsearch, Hadoop and Spark)
• Strong coding skills in at least one object-oriented programming language (e.g. Java, Python)
• Strong problem-solving and analytical ability
• 3+ years of prior experience in advertising technology is preferred

Job posted by Ankit Vyas

Senior Software Engineer

Founded 2014 | Product | 6-50 employees | Raised funding
Skills: Python, Big Data, Hadoop, Scala, Spark
Location: Bengaluru (Bangalore)
Experience: 6 - 10 years
Salary: 5 - 40 lacs/annum

Check our JD: https://www.zeotap.com/job/senior-tech-lead-m-f-for-zeotap/oEQK2fw0

Job posted by Projjol Banerjea

Data Scientist

Founded 2009 | Products and services | 51-250 employees | Profitable
Skills: R, Python, Big Data, Data Science, Hadoop, Machine Learning (ML), Haskell
Location: NCR (Delhi | Gurgaon | Noida)
Experience: 3 - 7 years
Salary: 5 - 10 lacs/annum

JSM is a data science company, founded in 2009 with the purpose of helping clients make data-driven decisions. JSM specifically focuses on unstructured data. It is estimated that 90% of all data generated is unstructured, and it remains mostly under-utilized for actionable insights, largely due to the high costs involved in speedily mining these large volumes of data. JSM is committed to creating cost-effective, innovative solutions in pursuit of highly actionable, easy-to-consume insights with a clearly defined ROI.

Job posted by Manas Ranjan Kar

Big Data Engineer

Founded 2007 | Product | 250+ employees | Raised funding
Skills: Java, Cassandra, Apache Hive, Pig, Big Data, Hadoop, JSP, NodeJS (Node.js)
Location: Bengaluru (Bangalore)
Experience: 3 - 10 years
Salary: 16 - 35 lacs/annum

- Passion to build analytics and personalisation platforms at scale
- 4 to 9 years of software engineering experience with a product-based company in the data analytics/big data domain
- Passion for design and development from scratch
- Expert-level Java programming, and experience leading the full lifecycle of application development
- Experience in analytics, Hadoop, Pig, Hive, MapReduce, Elasticsearch and MongoDB is an additional advantage
- Strong communication skills, verbal and written

Job posted by Vijaya Kiran

Sr. Tech Lead

Founded 2015 | Products and services | 6-50 employees | Profitable
Skills: Hibernate, Amazon Redshift, Java, MySQL, Amazon Web Services (AWS), Bootstrap, Hadoop, Laravel, MongoDB, Spring
Location: Pune
Experience: 8 - 10 years
Salary: 17 - 20 lacs/annum

Responsibilities:
Responsible for all aspects of development and support for internally created or supported application software, including the development methodologies, technologies (languages, databases, support tools), development and testing hardware/software environments, and management of the application development staff and project workload for the agency. Your job is to manage a project and a set of engineers. You are responsible for making your team happy and productive and helping them manage their careers, and for delivering great product on time and with quality.

Essential Duties and Responsibilities
• Supervise the projects and responsibilities of the web and software developers.
• Responsible for the prioritization of projects assigned to the application development team.
• Responsible for the complete development lifecycle of the agency's software systems, including gathering requirements, database management, software development, testing, implementation, user follow-up, support and project management.
• Responsible for the integrity of, maintenance of, and changes to the application development servers and databases (DBA).
• Responsible for developing and implementing change-control processes for the development team to follow.
• Provides ad-hoc reporting and decision support required for management decision processes.
• Makes technology decisions that affect software development.
• Works on special IT projects as needed.

Familiarity with Technologies:
• Java, Spring, Hibernate, Laravel
• MySQL, MongoDB, Amazon Redshift, Hadoop
• Angular.js, Bootstrap
• AWS cloud infrastructure

Qualifications
• Bachelor's degree in Information Science or Computer Science required.
• 8-10 years of application development experience required.
• Five-plus years of database design and analysis required.
• Strong verbal communication skills required.

Job posted by Aditya Bhelande

Data Scientist

Founded 2014 | Services | 6-50 employees | Profitable
Skills: R, Artificial Neural Networks, UIMA, Python, Big Data, Hadoop, Machine Learning (ML), Natural Language Processing (NLP)
Location: Navi Mumbai
Experience: 4 - 8 years
Salary: 5 - 15 lacs/annum

Nextalytics is an offshore research, development and consulting company based in India that focuses on high-quality, cost-effective software development and data science solutions. At Nextalytics, we have developed a culture that encourages employees to be creative, innovative and playful. We reward intelligence, dedication and out-of-the-box thinking; if you have these, Nextalytics will be the perfect launch pad for your dreams. Nextalytics is looking for smart, driven and energetic new team members.

Job posted by Harshal Patni

Skills: Ab Initio, Cognos, MicroStrategy, Business Analysis, Hadoop, Informatica, Tableau
Location: Pune, New York, Chicago, Hyderabad
Experience: 1 - 15 years
Salary: 5 - 10 lacs/annum

Exusia, Inc. (ex-OO-see-ah; translated from Greek to mean "immensely powerful and agile") was founded with the objective of addressing a growing gap in the data innovation and engineering space as the next global leader in big data, analytics, data integration and cloud computing solutions.

Exusia is a multinational, delivery-centric firm that provides consulting and software-as-a-service (SaaS) solutions to leading financial, government, healthcare, telecommunications and high-technology organizations facing the largest data volumes and the most complex information management requirements.

Exusia was founded in the United States in 2012, with headquarters in New York City and regional US offices in Chicago, Atlanta and Los Angeles. Exusia's international presence continues to expand and is driven from Toronto (Canada), Sao Paulo (Brazil), Johannesburg (South Africa) and Pune (India).

Our mission is to empower clients to grow revenue, optimize costs and satisfy regulatory requirements through the innovative use of information and analytics. We leverage a unique blend of strategy, intellectual property, technical execution and outsourcing to enable our clients to achieve significant returns on investment for their business, data and technology initiatives. At the core of our philosophy is a quality-first, trust-building, delivery-focused client relationship. The foundation of this relationship is the talent of our team. By recruiting and retaining the best talent in the industry, we are able to deliver to clients, whose data volumes and requirements number among the largest in the world, a broad range of customized, cutting-edge solutions.

Job posted by
apply for job
apply for job
Job poster profile picture - Dhaval Upadhyay
Dhaval Upadhyay
Job posted by
Job poster profile picture - Dhaval Upadhyay
Dhaval Upadhyay
Why apply via CutShort?
Connect with actual hiring teams and get their fast response. No third-party recruiters. No spam.