Hadoop Developer
Posted by Neha Mayekar
5 - 14 yrs
₹8L - ₹18L / yr
Mumbai
Skills
HDFS
Hbase
Spark
Flume
Hive
Sqoop
Scala
US-based multinational company; hands-on Hadoop experience.

About Accion Labs

Founded: 2009
Type:
Size: 100-1000
Stage: Profitable
About

Accion Labs, Inc. is a global technology firm headquartered in Pittsburgh, ranked the number-one IT company in the city.

Accion Labs Inc.:

  • Winner of the Fastest Growing Company award in Pittsburgh; ranked the #1 IT services company two years in a row (2014, 2015) by the Pittsburgh Business Times
  • Venture-funded, profitable and fast-growing, giving you an opportunity to grow with us
  • 11 global offices, 1300+ employees, 80+ tech company clients
  • 90% of our clients are direct clients, and our work is project-based
  • Offers a full range of product life-cycle services in emerging technology segments, including Web 2.0, open source, SaaS/cloud, mobility, IT operations management/ITSM, big data and traditional BI/DW, automation engineering (the Rackspace team), and DevOps engineering

 

Employee strength: 1300+ employees

http://accionlabs.com/

Why Accion Labs:

  • Emerging technology projects, e.g. Web 2.0, SaaS, cloud, mobility, BI/DW and big data
  • Great learning environment
  • Onsite opportunities, depending on project requirements
  • We invest in training our resources in the latest frameworks, tools, processes and best practices, and in cross-training them across a range of emerging technologies, enabling you to develop more marketable skills
  • Employee-friendly environment with 100% focus on work-life balance, life-long learning and open communication
  • Our employees interact directly with clients
Connect with the team

Sujata P, Anjali Mohandas, Uma Maheshwari, Zenia Cardoso, Sunny Deol, Nilesh Pathak, Sreenath Prabhu, Kripa Shankar Oza, Ajay Jaiswal, Neha Mayekar, Swathi Raj, Ajitha Menon C P, Aravind J P, Divya L

Similar jobs

Matellio India Private Limited
Posted by Harshit Sharma
Remote only
8 - 15 yrs
₹10L - ₹27L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
Deep Learning
+7 more

Responsibilities include:

  • Convert machine learning models into application programming interfaces (APIs) so that other applications can use them
  • Build AI models from scratch and help different parts of the organization (such as product managers and stakeholders) understand the results they gain from the model
  • Build data ingestion and data transformation infrastructure
  • Automate infrastructure that the data science team uses
  • Perform statistical analysis and tune the results so that the organization can make better-informed decisions
  • Set up and manage AI development and product infrastructure
  • Be a good team player, as coordinating with others is a must
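To make the first responsibility concrete, here is a minimal sketch of wrapping a model behind an HTTP API using only the Python standard library. The model, weights and endpoint are invented for illustration; a real service would typically load a serialized trained model and use a framework such as Flask or FastAPI.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder "model": a linear scorer standing in for a trained estimator.
WEIGHTS = [0.4, 0.6]

def predict(features):
    """Score one feature vector with the placeholder model."""
    return sum(w * x for w, x in zip(WEIGHTS, features))

class PredictHandler(BaseHTTPRequestHandler):
    """POST a JSON body like {"features": [1.0, 2.0]} to get a prediction back."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        features = json.loads(body)["features"]
        payload = json.dumps({"prediction": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Serve the model on localhost:8000 (blocking loop).
    HTTPServer(("localhost", 8000), PredictHandler).serve_forever()
```

Any downstream application can then call the model with a plain HTTP POST, without importing the data science team's code.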
Hyderabad
4 - 7 yrs
₹12L - ₹28L / yr
Python
Spark
Big Data
Hadoop
Apache Hive
Must have:

  • At least 4 to 7 years of relevant experience as a Big Data Engineer
  • Hands-on experience in Scala or Python
  • Hands-on experience with major components of the Hadoop ecosystem, such as HDFS, MapReduce, Hive and Impala
  • Strong programming experience building applications/platforms using Scala or Python
  • Experience implementing Spark RDD transformations and actions to implement business analysis
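The last requirement refers to Spark's split between lazy transformations (map, filter) and actions (reduce, collect) that actually trigger execution. A rough plain-Python analogy, using generators for laziness so no Spark cluster is assumed:

```python
from functools import reduce

records = range(1, 11)                         # stand-in for an RDD of numbers
squared = (x * x for x in records)             # "transformation": lazy, nothing runs yet
evens = (x for x in squared if x % 2 == 0)     # another lazy transformation

# "Action": reduce forces the whole pipeline to evaluate,
# much like rdd.reduce() or rdd.collect() in Spark.
total = reduce(lambda a, b: a + b, evens)
print(total)  # 220: sum of the even squares of 1..10
```

In real Spark code the same shape appears as `rdd.map(...).filter(...).reduce(...)`, with the scheduler deferring all work until the action.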


We specialize in productizing solutions built on new technology.
Our vision is to build engineers with entrepreneurial and leadership mindsets who can create highly impactful products and solutions, using technology to deliver immense value to our clients.
We strive to bring innovation and passion to everything we do, whether services, products or solutions.
Information Solution Provider Company
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 7 yrs
₹10L - ₹15L / yr
SQL
Hadoop
Spark
Machine Learning (ML)
Data Science
+3 more

Job Description:

The data science team is responsible for solving business problems with complex data. Data complexity can be characterized in terms of volume, dimensionality and multiple touchpoints/sources. We understand the data, ask fundamental first-principles questions, and apply our analytical and machine learning skills to solve the problem in the best way possible.

 

Our ideal candidate

The role is client-facing, so good communication skills are a must.

The candidate should have the ability to communicate complex models and analysis in a clear and precise manner. 

 

The candidate would be responsible for:

  • Comprehending business problems properly: what to predict, how to build the DV, what value addition he/she is bringing to the client, etc.
  • Understanding and analyzing large, complex, multi-dimensional datasets and building features relevant to the business
  • Understanding the math behind algorithms and choosing one over another
  • Understanding approaches like stacking and ensembling, and applying them correctly to increase accuracy
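To illustrate the last point, here is a toy sketch of ensemble averaging; the lambdas stand in for trained base models, and stacking would instead feed their predictions into a second-level model rather than averaging them.

```python
def ensemble_predict(models, x):
    """Average the predictions of several base models for input x."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

# Toy "models" standing in for trained estimators.
models = [
    lambda x: x + 1.0,   # over-predicting model
    lambda x: x - 1.0,   # under-predicting model
    lambda x: 3.0,       # constant baseline
]
print(ensemble_predict(models, 3.0))  # (4 + 2 + 3) / 3 = 3.0
```

Averaging works because the individual models' errors partially cancel; applying it "correctly" means checking that the base models are reasonably accurate and make uncorrelated errors.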

Desired technical requirements

  • Proficiency with Python and the ability to write production-ready code
  • Experience in PySpark, machine learning and deep learning
  • Big data experience, e.g. familiarity with Spark and Hadoop, is highly preferred
  • Familiarity with SQL or other databases
Hyderabad
4 - 7 yrs
₹14L - ₹25L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more

Roles and Responsibilities

Big Data Engineer + Spark:

  • At least 3 to 4 years of relevant experience as a Big Data Engineer
  • Minimum 1 year of relevant hands-on experience with the Spark framework
  • Minimum 4 years of application development experience using a programming language such as Scala, Java or Python
  • Hands-on experience with major components of the Hadoop ecosystem, such as HDFS, MapReduce, Hive or Impala
  • Strong programming experience building applications/platforms using Scala, Java or Python
  • Experience implementing Spark RDD transformations and actions to implement business analysis
  • An efficient interpersonal communicator with sound analytical, problem-solving and management capabilities
  • Strives to keep the slope of the learning curve high, and able to quickly adapt to new environments and technologies
  • Good knowledge of the agile methodology of software development
iLink Systems
Posted by Ganesh Sooriyamoorthu
Chennai, Pune, Noida, Bengaluru (Bangalore)
5 - 15 yrs
₹10L - ₹15L / yr
Apache Kafka
Big Data
Java
Spark
Hadoop
+1 more
  • KSQL
  • Data Engineering spectrum (Java/Spark)
  • Spark Scala / Kafka Streaming
  • Confluent Kafka components
  • Basic understanding of Hadoop


MNC
Agency job
via Fragma Data Systems by Priyanka U
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹16L / yr
PySpark
Python
Spark
Roles and Responsibilities:

• Responsible for developing and maintaining applications with PySpark
• Contribute to the overall design and architecture of the application developed and deployed
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues
• Implement projects based on functional specifications
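The executor-sizing and partition knobs mentioned above are typically set at submit time. The flags below are real spark-submit options, but the values and the job name are placeholders for illustration, not a sizing recommendation:

```shell
# Illustrative spark-submit sizing flags:
#   --num-executors     number of executor JVMs allocated to the job
#   --executor-cores    concurrent tasks per executor
#   --executor-memory   heap per executor
#   spark.sql.shuffle.partitions  partition count after shuffles/joins
spark-submit \
  --num-executors 10 \
  --executor-cores 4 \
  --executor-memory 8g \
  --conf spark.sql.shuffle.partitions=200 \
  etl_job.py
```

Tuning usually means balancing these against cluster capacity and data volume, then iterating with the Spark UI.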

Must-Have Skills:

• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience in SQL databases; able to write queries of fair complexity
• Excellent experience in big data programming for data transformation and aggregation
• Good at ETL architecture: business-rules processing and data extraction from a data lake into data streams for business consumption
• Good customer communication
• Good analytical skills
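"Queries of fair complexity" usually means grouping, aggregate filters and ordering rather than plain SELECTs. A small, self-contained illustration using Python's built-in sqlite3 module (the schema and data are invented for the example):

```python
import sqlite3

# In-memory toy table standing in for a real orders database.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'acme', 10.0), (2, 'acme', 20.0), (3, 'birch', 5.0);
""")

# Aggregate per customer, then keep only customers above a spend threshold.
rows = con.execute("""
    SELECT customer, COUNT(*) AS n_orders, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    HAVING SUM(amount) > 6
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('acme', 2, 30.0)]
```

The same GROUP BY / HAVING pattern carries over directly to Spark SQL on a data-lake table.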
Service based company
Pune
6 - 12 yrs
₹6L - ₹28L / yr
Big Data
Apache Kafka
Data engineering
Cassandra
Java
+1 more

Primary responsibilities:

  • Architect, design and build high-performance search systems for personalization, optimization and targeting
  • Design systems with Solr, Akka, Cassandra and Kafka
  • Algorithmic development with a primary focus on machine learning
  • Work with rapid and innovative development methodologies like Kanban, continuous integration and daily deployments
  • Participate in design and code reviews and recommend improvements
  • Unit testing with JUnit, performance testing and tuning
  • Coordinate with internal and external teams
  • Mentor junior engineers
  • Participate in product roadmap and prioritization discussions and decisions
  • Evangelize the solution with professional services and customer success teams

 

MNC
Agency job
via Fragma Data Systems by geeti gaurav mohanty
Bengaluru (Bangalore)
2 - 5 yrs
₹7L - ₹12L / yr
Spark
Python
SQL
Primary Responsibilities:

• Responsible for developing and maintaining applications with PySpark
• Contribute to the overall design and architecture of the application developed and deployed
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues
• Implement projects based on functional specifications


Must-Have Skills:

• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good customer communication
• Good analytical skills
Hyderabad
5 - 9 yrs
₹12L - ₹14L / yr
ETL
Snowflake
Data Warehouse (DWH)
Datawarehousing
Apache Spark
+4 more
Overall experience of 4-8 years in DW/BI technologies.
Minimum 2 years of work experience with Snowflake and Azure storage.
Minimum 3 years of development experience with an ETL tool.
Strong SQL skills in other databases such as Oracle, SQL Server, DB2 and Teradata.
Good to have Hadoop and Spark experience.
Good conceptual knowledge of data warehousing and its various methodologies.
Working knowledge of scripting, e.g. UNIX shell.
Good presentation and communication skills.
Should be flexible with overlapping working hours.
Should be able to work independently and be proactive.
Good understanding of the agile development cycle.
Paisabazaar.com
Posted by Amit Gupta
NCR (Delhi | Gurgaon | Noida)
1 - 5 yrs
₹6L - ₹18L / yr
Spark
MapReduce
Hadoop
ETL
We are looking for a Big Data Engineer with at least 3-5 years of experience as a Big Data Developer/Engineer:

  • Experience with Big Data technologies and tools like Hadoop, Hive, MapR, Kafka, Spark, etc.
  • Experience architecting data ingestion, storage and consumption models
  • Experience with NoSQL databases like MongoDB, HBase, Cassandra, etc.
  • Knowledge of various ETL tools and techniques