
Locations

Bengaluru (Bangalore)

Experience

5 - 10 years

Salary

INR 10L - 20L

Skills

ETL
Data Warehouse (DWH)
DWH Cloud
Hadoop
Apache Hive
Spark
MongoDB
PostgreSQL

Job description

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include developing Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key responsibilities:
- Create a GRAND Data Lake and Warehouse that pools the data from the different regions and stores of GRAND in the GCC
- Ensure source data quality measurement, plus enrichment and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the cloud DWH (AWS/Azure/Google) and its infrastructure

Skills needed:
- Very strong SQL; demonstrated experience with relational and non-relational databases (e.g. PostgreSQL, MongoDB); Unix shell scripting preferred
- Experience with UNIX and comfortable working with the shell (bash or Korn shell preferred)
- Good understanding of data warehousing concepts and big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Align with the systems engineering team to propose and deploy the new hardware and software environments required for Hadoop, and to expand existing environments
- Work with data delivery teams to set up new Hadoop users; this includes setting up Linux users and setting up and testing HDFS, Hive, Pig and MapReduce access for the new users
- Cluster maintenance, as well as creation and removal of nodes, using tools such as Ganglia, Nagios and Cloudera Manager Enterprise
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screen Hadoop cluster job performance and do capacity planning
- Monitor Hadoop cluster connectivity and security
- File system management and monitoring; HDFS support and maintenance
- Collaborate with application teams to install operating system and Hadoop updates, patches and version upgrades when required
- Define, develop, document and maintain Hive-based ETL mappings and scripts
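The "source data quality measurement" duty above usually starts with column profiling: counting nulls and distinct values before data enters the warehouse. A minimal sketch in plain Python (the record layout and field names are hypothetical, chosen only for illustration):

```python
from collections import Counter

def profile_column(rows, column):
    """Compute simple data-quality metrics for one column of a record set."""
    values = [row.get(column) for row in rows]
    non_null = [v for v in values if v is not None]
    return {
        "total": len(values),
        "nulls": len(values) - len(non_null),
        "distinct": len(set(non_null)),
        "most_common": Counter(non_null).most_common(1)[0][0] if non_null else None,
    }

# Hypothetical source rows, as they might arrive from a store feed
rows = [
    {"store": "GRAND-01", "region": "GCC"},
    {"store": "GRAND-02", "region": "GCC"},
    {"store": "GRAND-03", "region": None},
]
print(profile_column(rows, "region"))  # → {'total': 3, 'nulls': 1, 'distinct': 1, 'most_common': 'GCC'}
```

In a real pipeline these metrics would be written to a quality-reporting table and compared against thresholds before the ETL routine loads the batch.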

About the company

The group currently operates Grand Shopping Malls, Grand Hypermarkets and Grand Xpress in the Middle East and India.

Founded

2017

Type

Products & Services

Size

6-50 employees

Stage

Raised funding

Similar jobs

ETL Developer

Founded 2012
Products and services
51-250 employees
Raised funding
ETL
Python
Data modeling
Python
R
Location
Bangalore
Experience
7 - 14 years
Salary
8 - 14 lacs/annum

- 7+ years of relevant experience
- Ensure delivery of high-quality software-generated, data-driven reports
- Pursue ad-hoc analysis of new data sources and visualizations to support current customer needs
- Own data- and analytics-based product development projects, i.e. calibration, data-based insights, etc.
- Should have worked on ETL processes
- Data modelling and data imports on databases
- Strong programming experience in Python and R
- Experience with PostgreSQL
- Data scraping, web data extraction, crawler and scraping skills
- Familiarity with GitLab (good to have)
- Experience using SQL to extract data from a well-structured data store
- Experience in the Energy, Oil and Gas industry is an added plus
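The web-data-extraction skills listed above can be sketched with nothing but the standard library. A minimal scraper that pulls table rows out of an HTML fragment (the markup and cell values are invented for the example):

```python
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collect the text of every <td> cell, grouped by table row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_cell = False
        elif tag == "tr" and self._row:
            self.rows.append(self._row)

    def handle_data(self, data):
        if self._in_cell:
            self._row.append(data.strip())

# Hypothetical page fragment, e.g. sensor readings scraped from a report
html = "<table><tr><td>well-1</td><td>42.0</td></tr><tr><td>well-2</td><td>17.5</td></tr></table>"
parser = TableExtractor()
parser.feed(html)
print(parser.rows)  # → [['well-1', '42.0'], ['well-2', '17.5']]
```

From here the rows would typically be typed, validated and loaded into PostgreSQL as part of an ETL job; production crawlers would add fetching, retries and politeness controls that this sketch omits.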

Job posted by
Thouseef Ahmed

Enthusiastic Cloud-ML Engineers with a keen sense of curiosity

Founded 2012
Products and services
51-250 employees
Raised funding
Java
Python
Spark
Hadoop
MongoDB
Scala
Natural Language Processing (NLP)
Machine Learning
Location
Bengaluru (Bangalore)
Experience
3 - 12 years
Salary
3 - 25 lacs/annum

We are a start-up in India seeking excellence in everything we do, with an unwavering curiosity and enthusiasm. We build a simplified, new-age, AI-driven Big Data analytics platform for global enterprises and solve their biggest business challenges. Our engineers develop fresh, intuitive solutions, keeping the user at the center of everything. As a Cloud-ML Engineer, you will design and implement ML solutions for customer use cases and problem-solve complex technical customer challenges.

Expectations and tasks:
- A total of 7+ years of experience, with a minimum of 2 years in Hadoop technologies such as HDFS, Hive and MapReduce
- Experience working with recommendation engines, data pipelines, or distributed machine learning, and experience with data analytics and data visualization techniques and software
- Experience with core data science techniques such as regression, classification or clustering, and experience with deep learning frameworks
- Experience in NLP, R and Python
- Experience in performance tuning and optimization techniques to process big data from heterogeneous sources
- Ability to communicate clearly and concisely across technology and business teams
- Excellent problem-solving and technical troubleshooting skills
- Ability to handle multiple projects and prioritize tasks in a rapidly changing environment

Technical skills: Core Java, multithreading, collections, OOPS, Python, R, Apache Spark, MapReduce, Hive, HDFS, Hadoop, MongoDB, Scala

We are a retained search firm employed by our client, a technology start-up in Bangalore. Interested candidates can share their resumes with me at Jia@TalentSculpt.com; I will respond within 24 hours. Online assessments and pre-employment screening are part of the selection process.
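Of the core data science techniques named above, regression is the simplest to show concretely. A least-squares fit of a line in plain Python (the sample data is invented; real work would use NumPy, scikit-learn or Spark MLlib):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b on paired samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope is covariance(x, y) / variance(x); intercept follows from the means
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]  # noise-free samples of y = 2x + 1
a, b = fit_line(xs, ys)
print(a, b)  # → 2.0 1.0
```

Classification and clustering follow the same pattern at a larger scale: define a loss or distance, then optimize it over the training data.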

Job posted by
Blitzkrieg HR Consulting

Java Developer

Founded 2006
Product
6-50 employees
Profitable
Java
J2EE
Spring
Hibernate (Java)
PostgreSQL
Location
Anywhere
Experience
2 - 5 years
Salary
9 - 13 lacs/annum

Who: BlueOptima is the only company providing objective software development productivity metrics. The technology has been implemented by some of the world's largest organisations, including insurance companies, asset managers, telecoms and seven of the world's top ten universal banks. This successful product uptake has led to rapid expansion of the company.

What the role involves:
- Contributing to the overall technical architecture
- Understanding, prioritizing and paying off technical debt
- Measuring, diagnosing and improving product performance
- Conceiving, developing, releasing and maintaining features. For us, a feature isn't delivered until it's in production, and each team member is responsible for the features that they release

Requirements:
- Familiarity with Windows and Linux development environments
- Core Java and advanced Java; PostgreSQL, Hibernate, Spring, Angular.JS, REST APIs
- 2-5 years' experience

Why: We have 12 full-time Software Developer positions to fill, and many reasons to work for us:
- A 10-year-old company, now growing rapidly, offering both stability and rapid career progression
- A distributed developer team means you can work from home and save travel time and cost
- International business travel
- Work alongside other leading engineers, using a cutting-edge technology stack
- Above-market-rate salary
- Potential stock options for outstanding performers

The most eligible candidates may progress to further stages: in addition to testing and proving your Java skills, a technical interview, a work sample and work sample discussion, and a final interview with our COO.

Job posted by
Rashmi Anand

Backend Developer - Intern

Founded 2017
Product
1-5 employees
Bootstrapped
PHP
Python
Ruby on Rails (ROR)
NodeJS (Node.js)
NOSQL Databases
MEAN stack
PostgreSQL
Yii
Location
Pune
Experience
0 - 2 years
Salary
0 - 0 lacs/annum

1. Understand the current system infrastructure and code design.
2. Design and develop new features as the product evolves to meet upcoming market requirements.
3. Study technological improvements and incorporate them into the product.

Job posted by
Rohan Raheja

Enterprise Architect

Founded 2012
Products and services
51-250 employees
Raised funding
Big Data
Hadoop
HDFS
HIVE
data streaming
IAAS
Azure
net
Location
Bengaluru (Bangalore)
Experience
10 - 15 years
Salary
35 - 50 lacs/annum

- At least 10 years of hands-on experience in the migration of complex software packages and products to Azure (Cloud Service Provider, CSP) IaaS and PaaS
- At least 7 years of hands-on experience with programming and scripting languages (.NET, C#, WCF, MVC Web API, SQL Server, SQL Azure, PowerShell)
- Good to have: experience with IT systems, operations, automation and configuration tools that enable continuous integration and deployment (Jenkins)
- Solid understanding of database management systems, including traditional RDBMSs (MS SQL)
- Ability to wear multiple hats spanning the software development life cycle across requirements, design, code development, QA, testing and deployment; experience working in an Agile/Scrum methodology
- Analytical and communication skills

Job posted by
Thouseef Ahmed

Backend Developer

Founded 2013
Products and services
6-50 employees
Profitable
Python
Django
Amazon Web Services (AWS)
PostgreSQL
RESTful APIs
Redis
Git
Elastic Search
Location
NCR (Delhi | Gurgaon | Noida)
Experience
3 - 7 years
Salary
4 - 7 lacs/annum

We are working on a fintech product and are actively looking to increase our workforce. The candidate should have very good logical and analytical thinking, and should be a problem solver and a quick learner.

Job posted by
Ankit Sinha

Backend Engineer

Java
Big Data
influxdb
Apache Mesos
Machine Learning
Spark
Location
Bengaluru (Bangalore), NCR (Delhi | Gurgaon | Noida)
Experience
3 - 20 years
Salary
15 - 80 lacs/annum

Key responsibilities:
- Architect systems capable of serving as the brains of complex distributed products
- Build reusable code and libraries for future use
- Integrate user-facing elements with server-side logic
- Thrive in a complex and ambiguous environment, continuously adapting for the business and its users
- Maintain, contribute to and adhere to programming best practices and guidelines

Job posted by
Jibran Khan

Fullstack Developer

Founded 2016
Product
1-5 employees
Raised funding
NodeJS (Node.js)
PostgreSQL
Team Management
Delivery Management
MERN Stack
Flask (Python Framework)
Location
Bengaluru (Bangalore)
Experience
2 - 4 years
Salary
4 - 7 lacs/annum

Nudg Labs is an early-stage start-up. Our mission is to automate B2B commerce. A great deal of human effort goes into B2B transactions. Besides the effort, there are many missed opportunities, errors in judgment and poor allocations of capital. These inefficiencies come at a significant cost to society. By automating, we eliminate these inefficiencies while extending access to a wider population.

Job posted by
Balaji Venkatesan

Data Engineering Manager

Founded 1991
Product
250+ employees
Raised funding
SQL
Datawarehousing
ETL
Location
Hyderabad
Experience
9 - 14 years
Salary
25 - 40 lacs/annum

The Last Mile Analytics & Quality Team in Hyderabad is looking for a Transportation Quality Specialist who will act as first-level support for address, geocode and static route management in Last Mile, across multiple Transportation services, along with other operational issues and activities related to the Transportation process and its optimization. Your solutions will impact our customers directly!

This job requires you to constantly hit the ground running, and your ability to learn quickly and work on disparate and overlapping tasks will define your success. High-impact production issues often require coordination between multiple Development, Operations and IT Support groups, so you get to experience a breadth of impact with various groups.

Primary responsibilities include troubleshooting, diagnosing and fixing static route issues, developing monitoring solutions, performing software maintenance and configuration, implementing fixes for internally developed code, performing minor SQL queries, and updating, tracking and resolving technical challenges. Responsibilities also include working alongside development on Amazon Corporate and Divisional Software projects, updating and enhancing our current tools, automating support processes and documenting our systems.

The ideal candidate must be detail-oriented; have superior verbal and written communication skills and strong organizational skills; be able to juggle multiple tasks at once; be able to work independently; and maintain professionalism under pressure. You must be able to identify problems before they happen and implement solutions that detect and prevent outages. You must be able to accurately prioritize projects, make sound judgments, work to improve the customer experience, and get the right things done.

Basic qualifications:
- Bachelor's degree in Computer Science or Engineering
- Good communication skills, both verbal and written
- Demonstrated ability to work in a team
- Proficiency in MS Office, SQL and Excel

Preferred qualifications:
- Experience working with relational databases
- Experience with Linux
- Debugging and troubleshooting skills, with an enthusiastic attitude toward supporting and resolving customer problems

Job posted by
Rakesh Kumar

Senior Backend Developer

Founded 2013
Product
51-250 employees
Raised funding
Java
NodeJS (Node.js)
PostgreSQL
Postman
Git
Apache
Apache Kafka
Apache Mesos
Location
NCR (Delhi | Gurgaon | Noida)
Experience
5 - 8 years
Salary
20 - 35 lacs/annum

Responsibilities:
- Lead the backend team and mentor junior engineers
- Design and implement REST-based APIs and microservices in Node.js
- Write and maintain scalable, performant code that can be shared across platforms
- Work and communicate with our mobile client developers and project managers, managing priorities and giving input on future features

About you:
- You attended a top university, studying computer science or similar
- You have experience or interest in writing applications in Node.js
- You have strong server-side development skills and experience with databases
- You understand the ins and outs of RESTful web services
- You know your way around the UNIX command line
- You have great communication skills and the ability to work with others
- You are a strong team player, with a do-whatever-it-takes attitude

Job posted by
Gaurav Gunjan
Want to apply for this role at Grand Hyper?