Big Data Engineer

at BDIPlus

Posted by Silita S
Bengaluru (Bangalore)
3 - 7 yrs
₹5L - ₹12L / yr
Full time
Skills
Big Data
Hadoop
Java
Python
PySpark
Kafka

Roles and responsibilities:

 

  1. Responsible for the development and maintenance of applications built with Enterprise Java and distributed technologies.
  2. Experience with Hadoop, Kafka, Spark, Elasticsearch, SQL, Kibana and Python; experience with machine learning and analytics.
  3. Collaborate with developers, product managers, business analysts and business users in conceptualizing, estimating and developing new software applications and enhancements.
  4. Collaborate with the QA team to define test cases and metrics, and resolve questions about test results.
  5. Assist in the design and implementation process for new products; research and create POCs for possible solutions.
  6. Develop components based on business and/or application requirements.
  7. Create unit tests in accordance with team policies & procedures.
  8. Advise and mentor team members in specialized technical areas, and fulfill administrative duties as defined by the support process.
  9. Work with cross-functional teams during a crisis to address and resolve complex incidents and problems, in addition to assessment, analysis and resolution of cross-functional issues.

About BDIPlus

Founded
2014
Type
Product
Size
100-500 employees
Stage
Profitable
Why apply to jobs via Cutshort
Personalized job matches
Stop wasting time. Get matched with jobs that meet your skills, aspirations and preferences.
Verified hiring teams
See actual hiring teams, find common social connections or connect with them directly. No 3rd party agencies here.
Move faster with AI
We use AI to get you faster responses, recommendations and unmatched user experience.
2,101,133 matches delivered
3,712,187 network size
15,000 companies hiring

Similar jobs

Data Scientist

at Vahak

Founded 2016  •  Product  •  20-100 employees  •  Raised funding
Data Science
R Programming
Python
DA
PowerBI
Tableau
SQL
MySQL
Bengaluru (Bangalore)
2 - 10 yrs
₹10L - ₹25L / yr

Who Are We?

 

Vahak (https://www.vahak.in) is India’s largest and most trusted online transport marketplace and directory for road transport businesses and individual commercial vehicle (trucks, trailers, containers, Hyva, LCVs) owners, for online truck and load booking, transport business branding and transport business network expansion. Lorry owners can find intercity and intracity loads from all over India and connect with other businesses to find trusted transporters and the best deals in the Indian logistics services market. With the Vahak app, users can book loads and lorries in a live transport marketplace with over 5 lakh+ transporters and lorry owners across 10,000+ locations for daily transport requirements.

Vahak has raised a capital of $5+ Million in a Pre-Series A round from RTP Global along with participation from Luxor Capital and Leo Capital. The other marquee angel investors include Kunal Shah, Founder and CEO, CRED; Jitendra Gupta, Founder and CEO, Jupiter; Vidit Aatrey and Sanjeev Barnwal, Co-founders, Meesho; Mohd Farid, Co-founder, Sharechat; Amrish Rau, CEO, Pine Labs; Harsimarbir Singh, Co-founder, Pristyn Care; Rohit and Kunal Bahl, Co-founders, Snapdeal; and Ravish Naresh, Co-founder and CEO, Khatabook.

 

Data Scientist:

We at Vahak are looking for an enthusiastic and passionate Data Scientist to join our young and diverse team. You will play a key role in the data science group, crunching numbers, building advanced analytical models and predicting critical business metrics from volumes of big data.

Our goal as a group is to drive powerful big data analytics products with scalable results. We love people who are humble and collaborative, with a hunger for excellence.

Responsibilities:

  • Be the go-to person for all advanced analytics (ML/AI) use cases within the larger data science group
  • Build predictive models and machine-learning algorithms to solve business problems by leveraging both batch and real-time datasets
  • Collaborate with engineering and product development teams in the data collection and deployment phase of the model building process
  • Present model findings in crisp presentations, making use of data visualization techniques
  • Analyze large amounts of information to discover trends and patterns
  • Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.

Requirements:

  • Bachelor’s or Master’s degree in a highly numerate discipline such as Engineering, Science or Economics
  • 2+ years of proven experience working as a Data Scientist, preferably in an e-commerce, web-based or consumer technology company
  • Hands-on experience building machine learning models from scratch and deploying them for large-scale use cases
  • Hands-on experience working with machine learning frameworks, libraries, data structures and data modelling techniques
  • Strong problem-solving skills with an emphasis on product development
  • Experience using statistical computing languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets
  • Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks
  • Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications
  • Demonstrated experience participating in data science competitions on platforms like Kaggle would be an added advantage
  • Experience using business intelligence tools, e.g. Tableau or Power BI, would be an added advantage (not mandatory)

 

Job posted by
Vahak Talent

Technical Project Manager

at Celebal Technologies

Founded 2015  •  Products & Services  •  1000-5000 employees  •  Profitable
PySpark
Data engineering
Big Data
Hadoop
Spark
Cloud Computing
NOSQL Databases
Apache Hive
Apache Spark
Jaipur, Noida, Gurugram, Delhi, Ghaziabad, Faridabad, Pune, Mumbai
5 - 15 yrs
₹7L - ₹25L / yr
Job Responsibilities:

• Project Planning and Management
o Take end-to-end ownership of multiple projects / project tracks
o Create and maintain project plans and other related documentation for project objectives, scope, schedule and delivery milestones
o Lead and participate across all the phases of software engineering, right from requirements gathering to go-live
o Lead internal team meetings on solution architecture, effort estimation, manpower planning and resource (software/hardware/licensing) planning
o Manage RIDA (Risks, Impediments, Dependencies, Assumptions) for projects by developing effective mitigation plans
• Team Management
o Act as the Scrum Master
o Conduct Scrum ceremonies like Sprint Planning, Daily Standup and Sprint Retrospective
o Set clear objectives for the project and roles/responsibilities for each team member
o Train and mentor the team on their job responsibilities and Scrum principles
o Make the team accountable for their tasks and help the team in achieving them
o Identify the requirements and come up with a plan for skill development for all team members
• Communication
o Be the single point of contact for the client in terms of day-to-day communication
o Periodically communicate project status to all the stakeholders (internal/external)
• Process Management and Improvement
o Create and document processes across all disciplines of software engineering
o Identify gaps and continuously improve processes within the team
o Encourage team members to contribute towards process improvement
o Develop a culture of quality and efficiency within the team

Must have:
• Minimum 8 years of experience (hands-on as well as leadership) in software / data engineering across multiple job functions like Business Analysis, Development, Solutioning, QA, DevOps and Project Management
• Hands-on as well as leadership experience in Big Data Engineering projects
• Experience developing or managing cloud solutions using Azure or another cloud provider
• Demonstrable knowledge of Hadoop, Hive, Spark, NoSQL DBs, SQL, Data Warehousing, ETL/ELT and DevOps tools
• Strong project management and communication skills
• Strong analytical and problem-solving skills
• Strong systems-level critical thinking skills
• Strong collaboration and influencing skills

Good to have:
• Knowledge of PySpark, Azure Data Factory, Azure Data Lake Storage, Synapse Dedicated SQL Pool, Databricks, Power BI, Machine Learning and Cloud Infrastructure
• Background in BFSI with a focus on core banking
• Willingness to travel

Work Environment
• Customer Office (Mumbai) / Remote Work

Education
• UG: B. Tech - Computers / B. E. – Computers / BCA / B.Sc. Computer Science
Job posted by
Payal Hasnani

Senior Software Engineer Data

at Deepintent

Founded 2015  •  Product  •  20-100 employees  •  Profitable
SQL
Python
JVM
Google Cloud Platform (GCP)
Spark
Pune
3 - 6 yrs
Best in industry
About DeepIntent:
DeepIntent is a marketing technology company that helps healthcare brands strengthen communication with patients and healthcare professionals by enabling highly effective and performant digital advertising campaigns. Our healthcare technology platform, MarketMatch™, connects advertisers, data providers, and publishers to operate the first unified, programmatic marketplace for healthcare marketers. The platform’s built-in identity solution matches digital IDs with clinical, behavioral, and contextual data in real-time so marketers can qualify 1.6M+ verified HCPs and 225M+ patients to find their most clinically-relevant audiences, and message them on a one-to-one basis in a privacy compliant way. Healthcare marketers use MarketMatch to plan, activate, and measure digital campaigns in ways that best suit their business, from managed service engagements to technical integration or self-service solutions. DeepIntent was founded by Memorial Sloan Kettering alumni in 2016 and acquired by Propel Media, Inc. in 2017. We proudly serve major pharmaceutical and Fortune 500 companies out of our offices in New York, Bosnia and India.

Roles and Responsibilities
  • Establish a formal data practice for the organisation.
  • Build and operate scalable and robust data architectures.
  • Create pipelines for the self-service introduction and usage of new data.
  • Implement DataOps practices.
  • Design, develop and operate data pipelines which support data scientists and machine learning engineers.
  • Build simple, highly reliable data storage, ingestion and transformation solutions which are easy to deploy and manage.
  • Collaborate with various business stakeholders, software engineers, machine learning engineers and analysts.
Desired Skills
  • Experience in designing, developing and operating configurable Data pipelines serving high volume and velocity data.
  • Experience working with public clouds like GCP/AWS.
  • Good understanding of software engineering, DataOps, and data architecture, Agile and DevOps methodologies.
  • Experience building Data architectures that optimize performance and cost, whether the components are prepackaged or homegrown
  • Proficient with SQL, Python or a JVM-based language, and Bash.
  • Experience with any of the Apache open-source projects such as Spark, Druid, Beam, Airflow, etc., and big data databases like BigQuery, ClickHouse, etc.
  • Good communication skills with ability to collaborate with both technical and non technical people.
  • Ability to Think Big, take bets and innovate, Dive Deep, Bias for Action, Hire and Develop the Best, Learn and be Curious.
Job posted by
Indrajeet Deshmukh

Data Science Engineer

at E-commerce & Retail

Agency job
via Myna Solutions
Machine Learning (ML)
Data Science
Python
Tableau
SQL
XGBoost
AWS SageMaker
Chennai
5 - 10 yrs
₹8L - ₹18L / yr
Job Title: Data Science Engineer
Work Location: Chennai
Experience Level: 5+ yrs
Package: Up to 18 LPA
Notice Period: Immediate joiners
It's a full-time opportunity with our client.

Mandatory Skills: Machine Learning, Python, Tableau & SQL

Job Requirements:

--2+ years of industry experience in predictive modeling, data science and analysis.

--Experience with ML models including, but not limited to, Regression, Random Forests and XGBoost.

--Experience in an ML engineer or data scientist role building and deploying ML models, or hands-on experience developing deep learning models.

--Experience writing code in Python and SQL, with documentation for reproducibility.

--Strong proficiency in Tableau.

--Experience handling big datasets, diving into data to discover hidden patterns, using data visualization tools, and writing SQL.

--Experience writing and speaking about technical concepts to business, technical and lay audiences, and giving data-driven presentations.

--AWS SageMaker experience is a plus, not required.
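To make the regression-modeling requirement above concrete, here is a deliberately minimal sketch. It is not the client's stack — in practice one would reach for scikit-learn or XGBoost — just a closed-form ordinary least squares fit for a single feature, using only the Python standard library; the data points are invented for illustration:

```python
# Minimal ordinary least squares for y ≈ slope*x + intercept.
# Illustrative only; real predictive-modeling work would use
# scikit-learn, statsmodels or XGBoost.

def fit_ols(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data lying exactly on y = 2x + 1
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]
slope, intercept = fit_ols(xs, ys)
```

The same covariance-over-variance estimate is what library implementations compute under the hood for simple linear regression; tree ensembles like Random Forests and XGBoost replace the linear form with learned piecewise-constant functions.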
Job posted by
Venkat B

Data Engineer- SQL+PySpark

at Fragma Data Systems

Founded 2015  •  Products & Services  •  employees  •  Profitable
Spark
PySpark
Big Data
Python
SQL
Windows Azure
Remote, Bengaluru (Bangalore)
1 - 5 yrs
₹5L - ₹15L / yr
Must-Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformations and aggregations
• Good at ELT architecture: business rules processing and data extraction from a data lake into data streams for business consumption
• Good customer communication skills
• Good analytical skills
 
 
Technology Skills (Good to Have):
  • Building and operationalizing large-scale enterprise data solutions and applications using one or more of the Azure data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, CosmosDB, Event Hub/IoT Hub.
  • Experience migrating on-premise data warehouses to data platforms on the Azure cloud.
  • Designing and implementing data engineering, ingestion and transformation functions.
  • Azure Synapse or Azure SQL Data Warehouse.
  • Spark on Azure, as available in HDInsight and Databricks.
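As one possible illustration of a "query of fair complexity" as expected above — a join plus a grouped, filtered aggregation — here is a small sketch. It uses Python's built-in sqlite3 as a stand-in engine (the role itself targets Spark SQL and other SQL DBs, where the same SQL applies), and the table and column names are invented:

```python
import sqlite3

# In-memory database with two hypothetical tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'north'), (2, 'south');
    INSERT INTO orders VALUES (1, 100.0), (1, 50.0), (2, 200.0);
""")

# Join + grouped aggregation, filtered on the aggregate with HAVING.
rows = conn.execute("""
    SELECT c.region, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.region
    HAVING SUM(o.amount) > 100
    ORDER BY total DESC
""").fetchall()
```

In PySpark the identical statement can be run via `spark.sql(...)` against registered temp views, or expressed with DataFrame core functions (`join`, `groupBy`, `agg`).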
Job posted by
Evelyn Charles

Data Engineer

at Global SaaS product built to help revenue teams. (TP1)

Agency job
via Multi Recruit
Spark
Data Engineer
Airflow
SQL
No SQL
Kafka
Bengaluru (Bangalore)
1 - 5 yrs
₹30L - ₹40L / yr
  • 3-6 years of relevant work experience in a Data Engineering role.
  • Advanced working SQL knowledge, experience with relational databases and query authoring, and working familiarity with a variety of databases.
  • Experience building and optimizing data pipelines, architectures, and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • A good understanding of Airflow, Spark, NoSQL databases, Kafka is nice to have.
  • Premium Institute Candidates only
Job posted by
Kavitha S

Data Scientist - IV

at Glance

Founded 2018  •  Product  •  500-1000 employees  •  Raised funding
Data Science
Machine Learning (ML)
Artificial Intelligence (AI)
Python
Deep Learning
R Programming
Statistical Analysis
Natural Language Processing (NLP)
Databases
Mathematical modeling
Mathematics
Bengaluru (Bangalore)
5 - 10 yrs
₹50L - ₹80L / yr

Glance – An InMobi Group Company:

Glance is an AI-first Screen Zero content discovery platform that has scaled massively in the last few months into one of the largest platforms in India. Glance is a lock-screen-first mobile content platform set up within InMobi. The average mobile phone user unlocks their phone >150 times a day. Glance aims to be there, providing visually rich, easy-to-consume content to entertain and inform mobile users - one unlock at a time. Glance is already live on more than 80 million mobile phones in India, and we are only getting started on this journey! We are now in phase 2 of the Glance story - we are going global!

Roposo is part of the Glance family. It is a short video entertainment platform. All the videos created here are user generated (via upload or Roposo creation tools in camera) and there are many communities creating these videos on various themes we call channels. Around 4 million videos are created every month on Roposo and power Roposo channels, some of the channels are - HaHa TV (for comedy videos), News, Beats (for singing/ dance performances) along with a For You (personalized for a user) and Your Feed (for videos of people a user follows).

 

What’s the Glance family like?

Consistently featured among the “Great Places to Work” in India since 2017, our culture is our true north, enabling us to think big, solve complex challenges and grow with new opportunities. Glanciers are passionate and driven, creative and fun-loving, take ownership and are results-focused. We invite you to free yourself, dream big and chase your passion.

 

What can we promise? 

We offer an opportunity to have an immediate impact on the company and our products. The work that you shall do will be mission critical for Glance and will be critical for optimizing tech operations, working with highly capable and ambitious peer groups. At Glance, you get food for your body, soul, and mind with daily meals, gym, and yoga classes, cutting-edge training and tools, cocktails at drink cart Thursdays and fun at work on Funky Fridays. We even promise to let you bring your kids and pets to work. 

 

What will you be doing?

Glance is looking for a Data Scientist who will design and develop processes and systems to analyze high-volume, diverse "big data" sources using advanced mathematical, statistical, querying and reporting methods. You will use machine learning techniques and statistical analysis to predict outcomes and behaviors, interact with business partners to identify questions for data analysis and experiments, identify meaningful insights from large data and metadata sources, and interpret and communicate those insights - or prepare output from analysis and experiments - to business partners.

You will be working with Product leadership, taking high-level objectives and developing solutions that fulfil these requirements. Stakeholder management across Eng, Product and Business teams will be required.

 

Basic Qualifications:

  • Five+ years' experience working in a Data Science role
  • Extensive experience developing and deploying ML models in real-world environments
  • Bachelor's degree in Computer Science, Mathematics, Statistics or another analytical field
  • Exceptional familiarity with Python, Java, Spark or other open-source software with data science libraries
  • Experience in advanced math and statistics
  • Excellent familiarity with the command-line Linux environment
  • Able to understand various data structures and common methods in data transformation
  • Experience deploying machine learning models and measuring their impact
  • Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks

 

Preferred Qualifications

  • Experience developing recommendation systems
  • Experience developing and deploying deep learning models
  • Bachelor's or Master's degree or PhD that included coursework in statistics, machine learning or data analysis
  • Five+ years' experience working with Hadoop, a NoSQL database or other big data infrastructure
  • Experience being actively engaged in data science or another research-oriented position
  • Comfort collaborating with cross-functional teams
  • An active personal GitHub account
Job posted by
Sandeep Shadankar

Business Intelligence Developer

at Kaleidofin

Founded 2018  •  Products & Services  •  100-1000 employees  •  Profitable
PowerBI
Business Intelligence (BI)
Python
Tableau
SQL
Data modeling
Chennai, Bengaluru (Bangalore)
2 - 4 yrs
Best in industry
We are looking for a developer to design and deliver strategic data-centric insights leveraging next-generation analytics and BI technologies. We want someone who is data-centric and insight-centric, rather than report-centric. We are looking for someone wishing to make an impact by enabling innovation and growth; someone with passion for what they do and a vision for the future.

Responsibilities:
  • Be the analytical expert in Kaleidofin, managing ambiguous problems by using data to execute sophisticated quantitative modeling and deliver actionable insights.
  • Develop comprehensive skills including project management, business judgment, analytical problem solving and technical depth.
  • Become an expert on data and trends, both internal and external to Kaleidofin.
  • Communicate key state of the business metrics and develop dashboards to enable teams to understand business metrics independently.
  • Collaborate with stakeholders across teams to drive data analysis for key business questions, communicate insights and drive the planning process with company executives.
  • Automate scheduling and distribution of reports and support auditing and value realization.
  • Partner with enterprise architects to define and ensure proposed Business Intelligence solutions adhere to an enterprise reference architecture.
  • Design robust data-centric solutions and architecture that incorporates technology and strong BI solutions to scale up and eliminate repetitive tasks.
 Requirements:
  • Experience leading development efforts through all phases of SDLC.
  • 2+ years "hands-on" experience designing Analytics and Business Intelligence solutions.
  • Experience with Quicksight, PowerBI, Tableau and Qlik is a plus.
  • Hands on experience in SQL, data management, and scripting (preferably Python).
  • Strong data visualisation design skills, data modeling and inference skills.
  • Hands-on and experience in managing small teams.
  • Financial services experience preferred, but not mandatory.
  • Strong knowledge of architectural principles, tools, frameworks, and best practices.
  • Excellent communication and presentation skills to communicate and collaborate with all levels of the organisation.
  • Candidates with less than a 30-day notice period preferred.
Job posted by
Poornima B

Director | Applied AI

at Searce Inc

Founded 2004  •  Products & Services  •  100-1000 employees  •  Profitable
Artificial Intelligence (AI)
Natural Language Processing (NLP)
Machine Learning (ML)
Deep Learning
Research and development
TensorFlow
Spark
Hadoop
Data Analytics
Data Science
Engineering Management
Django
HTML/CSS
Flask
Google Cloud Platform (GCP)
Pune
10 - 14 yrs
₹30L - ₹35L / yr

Director - Applied AI


Who we are?

Searce is a niche cloud consulting business with a futuristic tech DNA. We do new-age tech to realise the "Next" in the "Now" for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.


What do we believe?

  • Best practices are overrated
      • Implementing best practices can only make one average.
  • Honesty and Transparency
      • We believe in the naked truth. We do what we tell and tell what we do.
  • Client Partnership
      • Client-vendor relationship: No. We partner with clients instead.
      • And our sales team comprises 100% of our clients.

How do we work ?

It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.

  • Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
  • Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
  • Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
  • Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
  • Innovative: Innovate or Die. We love to challenge the status quo.
  • Experimental: We encourage curiosity & making mistakes.
  • Responsible: Driven. Self motivated. Self governing teams. We own it.

So, what are we hunting for ?

  1. To devise strategy through the delivery of sustainable intelligent solutions, strategic customer engagements, and research and development
  2. To enable and lead our data and analytics team and develop machine learning and AI paths across strategic programs, solution implementation, and customer relationships
  3. To manage existing customers and realize new opportunities and capabilities of growth
  4. To collaborate with different stakeholders for delivering automated, high availability and secure solutions
  5. To develop talent and skills to create a high performance team that delivers superior products
  6. To communicate effectively across the organization to ensure that the team is completely aligned to business objectives
  7. To build strong interpersonal relationships with peers and other key stakeholders that will contribute to your team's success

Your bucket of Undertakings :

  1. Develop an AI roadmap aligned to client needs and vision
  2. Develop a Go-To-Market strategy of AI solutions for customers
  3. Build a diverse cross-functional team to identify and prioritize key areas of the business across AI, NLP and other cognitive solutions that will drive significant business benefit
  4. Lead AI R&D initiatives to include prototypes and minimum viable products
  5. Work closely with multiple teams on projects like Visual quality inspection, ML Ops, Conversational banking, Demand forecasting, Anomaly detection etc. 
  6. Build reusable and scalable solutions for use across the customer base
  7. Create AI white papers and enable strategic partnerships with industry leaders
  8. Align, mentor, and manage team(s) around strategic initiatives
  9. Prototype and demonstrate AI related products and solutions for customers
  10. Establish processes, operations, measurement, and controls for end-to-end life-cycle management of the digital workforce (intelligent systems)
  11. Lead AI tech challenges and proposals with team members
  12. Assist business development teams in the expansion and enhancement of a pipeline to support short- and long-range growth plans
  13. Identify new business opportunities and prioritize pursuits for AI 

Education & Experience : 

  1. Advanced or basic degree (PhD with a few years' experience, or MS/BS with many years' experience) in a quantitative field such as CS, EE, Information Sciences, Statistics, Mathematics, Economics, Operations Research or related, with a focus on applied and foundational Machine Learning, AI, NLP and/or data-driven statistical analysis and modelling
  2. 10+ years of experience, primarily in applying AI/ML/NLP/deep learning/data-driven statistical analysis and modelling solutions to multiple domains; experience with financial engineering and financial processes a plus
  3. Strong, proven programming skills with machine learning, deep learning and big data frameworks including TensorFlow, Caffe, Spark and Hadoop; experience writing complex programs and implementing custom algorithms in these and other environments
  4. Experience beyond using open-source tools as-is: writing custom code on top of, or in addition to, existing open-source frameworks
  5. Proven capability in demonstrating successful advanced technology solutions (prototypes, POCs, well-cited research publications, and/or products) using ML/AI/NLP/data science in one or more domains
  6. Experience in data management, data analytics middleware, platforms and infrastructure, and cloud and fog computing is a plus
  7. Excellent communication skills (oral and written) to explain complex algorithms and solutions to stakeholders across multiple disciplines, and the ability to work in a diverse team
Job posted by
Mishita Juneja

Big Data Developer

at GeakMinds Technologies Pvt Ltd

Founded 2011  •  Services  •  100-1000 employees  •  Profitable
Hadoop
Big Data
HDFS
Apache Sqoop
Apache Flume
Apache HBase
Apache Kafka
Chennai
1 - 5 yrs
₹1L - ₹6L / yr
• Looking for a Big Data Engineer with 3+ years of experience.
• Hands-on experience with MapReduce-based platforms, like Pig, Spark and Shark.
• Hands-on experience with data pipeline tools like Kafka, Storm and Spark Streaming.
• Store and query data with Sqoop, Hive, MySQL, HBase, Cassandra, MongoDB, Drill, Phoenix and Presto.
• Hands-on experience managing Big Data on a cluster with HDFS and MapReduce.
• Handle streaming data in real time with Kafka, Flume, Spark Streaming, Flink and Storm.
• Experience with the Azure cloud, Cognitive Services and Databricks is preferred.
Job posted by
John Richardson