
Locations

Bengaluru (Bangalore)

Experience

5 - 10 years

Salary

10 - 20 lpa

Skills

ETL
Data Warehouse (DWH)
DWH Cloud
Hadoop
Apache Hive
Spark
MongoDB
PostgreSQL

Job description

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include developing Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key responsibilities:
- Create a GRAND Data Lake and Warehouse that pools the data from the different regions and stores of GRAND in the GCC
- Ensure source data quality measurement, enrichment, and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure

Skills needed:
- Very strong in SQL; demonstrated experience with RDBMSs (e.g., PostgreSQL) and NoSQL stores (e.g., MongoDB); Unix shell scripting preferred
- Experience with UNIX and comfort working in the shell (bash or Korn shell preferred)
- Good understanding of data warehousing concepts and big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Align with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments
- Work with data delivery teams to set up new Hadoop users, including setting up Linux users and setting up and testing HDFS, Hive, Pig, and MapReduce access for the new users
- Cluster maintenance, including creation and removal of nodes, using tools such as Ganglia, Nagios, and Cloudera Manager Enterprise
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screen Hadoop cluster job performance and do capacity planning
- Monitor Hadoop cluster connectivity and security
- File system management and monitoring; HDFS support and maintenance
- Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required
- Define, develop, document, and maintain Hive-based ETL mappings and scripts
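The responsibilities above centre on extract-transform-load routines with data-quality rules feeding a warehouse. As a minimal illustration only (not the employer's actual stack, which is Hive/Spark against HDFS), here is a toy ETL pass in Python using sqlite3 as a stand-in warehouse; the table and field names are hypothetical:

```python
import sqlite3

def extract():
    # Stand-in for pulling raw rows from a regional store feed.
    return [("blr", "2024-01-01", "1200.50"),
            ("hyd", "2024-01-01", "980.00"),
            ("dxb", "2024-01-01", "bad-value")]

def transform(rows):
    # Enrichment/data-quality step: cast amounts to float, drop malformed rows.
    out = []
    for store, day, amount in rows:
        try:
            out.append((store, day, float(amount)))
        except ValueError:
            continue  # data-quality rule: skip unparseable amounts
    return out

def load(rows, conn):
    # Load the cleaned rows into the warehouse table.
    conn.execute("CREATE TABLE IF NOT EXISTS sales (store TEXT, day TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)  # 2180.5 (the malformed dxb row was dropped)
```

The same extract/transform/load split scales up to the Hive-based mappings the listing describes; only the engines change.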

About Grand Hyper

The group currently operates Grand Shopping Malls, Grand Hypermarkets, and Grand Xpress in the Middle East and India.

Founded

2017

Type

Products & Services

Size

6-50 employees

Stage

Raised funding

Similar jobs

Senior Software Engineer - Backend

Founded 2015
Products and services
Location
Pune
Experience
5 - 10 years
Salary
17 - 25 lacs/annum

Responsibilities:
- Ensure timely and top-quality product delivery
- Ensure that the end product is fully and correctly defined and documented
- Ensure implementation and continuous improvement of formal processes to support product development activities
- Drive the architecture/design decisions needed to achieve cost-effective and high-performance results
- Conduct feasibility analysis and produce functional and design specifications for proposed new features
- Provide helpful and productive code reviews for peers and junior members of the team
- Troubleshoot complex issues discovered in-house as well as in customer environments

Qualifications:
- Strong computer science fundamentals in algorithms, data structures, databases, operating systems, etc.
- Expertise in Java, object-oriented programming, and design patterns
- Experience coding and implementing scalable solutions in a large-scale distributed environment
- Working experience in a Linux/UNIX environment is good to have
- Experience with relational databases and database concepts, preferably MySQL
- Experience with SQL and Java optimization for real-time systems
- Familiarity with version control systems such as Git and build tools such as Maven
- Excellent interpersonal, written, and verbal communication skills
- BE/B.Tech./M.Sc./MCS/MCA in Computers or equivalent

Job posted by
Sourabh Gandhe

Data Engineer

Founded 2018
Products and services
Location
Bengaluru (Bangalore)
Experience
5 - 14 years
Salary
10 - 28 lacs/annum

ITTStar Global Services is a subsidiary unit in Bengaluru with its head office in Atlanta, Georgia. We are primarily into data management and data life cycle solutions, including machine learning and artificial intelligence. For further info, visit ITTstar.com. We are looking for enthusiastic and experienced data engineers to be part of our bustling team of professionals at our Bengaluru location.

Job description:
1. Experience in Spark and Big Data is mandatory.
2. Strong programming skills in Python, Java, Scala, or Node.js.
3. Hands-on experience handling multiple data types: JSON, XML, delimited, and unstructured.
4. Hands-on experience working with at least one relational and/or NoSQL database.
5. Knowledge of SQL queries and data modeling.
6. Hands-on experience with ETL use cases, either on-premise or in the cloud.
7. Experience with any cloud platform (AWS, Azure, GCP, Alibaba).
8. Knowledge of one or more AWS services such as Kinesis, EC2, EMR, Hive integration, Athena, Firehose, Lambda, S3, Glue Crawler, Redshift, or RDS is a plus.
9. Good communication skills; self-driven and able to deliver projects with minimal instructions from the client.
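Point 3 of the listing above asks for hands-on handling of multiple data types (JSON, XML, delimited). As a hedged, stdlib-only sketch, and nothing more than an illustration, here is one way to normalise JSON and delimited inputs into a single record shape; the field names are hypothetical:

```python
import csv
import io
import json

# Two heterogeneous sources carrying the same logical records.
json_src = '{"id": 1, "city": "Bengaluru"}'
csv_src = "id,city\n2,Atlanta\n"

# Normalise both into one list of dicts with consistent types.
records = [json.loads(json_src)]
records += [{"id": int(row["id"]), "city": row["city"]}
            for row in csv.DictReader(io.StringIO(csv_src))]
print(records)  # [{'id': 1, 'city': 'Bengaluru'}, {'id': 2, 'city': 'Atlanta'}]
```

In a real pipeline the same normalisation step would sit behind whatever ingestion service (Kinesis, Firehose, Glue) feeds the warehouse.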

Job posted by
Thatchinamoorthy Arumugam

SQL Developer

Founded 2014
Products and services
Location
Bengaluru (Bangalore)
Experience
5 - 9 years
Salary
6 - 15 lacs/annum

Roles and responsibilities:
- Design stored procedures, views, functions, triggers, indexes, constraints, etc.
- Develop complex SQL code using joins and subqueries, and troubleshoot SQL code.
- Integrate multiple data sources and databases into one system.
- Build clean and reusable SQL queries for future use.
- Improve performance and optimise SQL queries to get the best results.
- Work with team members on the design of software systems and support the development team.
- Establish processes and best practices with defined database standards.

Skills / competencies:
- Extensive experience working with enterprise applications using relational databases (PostgreSQL preferred).
- Strong knowledge and experience in query optimisation and performance tuning, using SQL Profiler, Performance Monitor, and related monitoring tools.
- Experience developing complex queries, stored procedures, views, functions, and triggers, and troubleshooting the same.
- Familiarity with RDBMS principles, normalisation, and database design is an added advantage.
- Understanding of the fundamental design principles behind a scalable application.

Special requirements:
- 5+ years of experience in SQL development, performance tuning, and query optimisation.
- Strong analytical, problem-solving, and English communication skills, both written and spoken.
- Knowledge of PHP/Laravel is an added advantage.
- Experience with enterprise-scale systems is a major plus.
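The role above emphasises joins, aggregates, and clean, reusable views. As a small illustration only (sqlite3 standing in for the PostgreSQL the listing prefers; the schema is invented for the example), a view that wraps a join so downstream queries stay simple:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 100.0), (12, 2, 80.0);
-- A reusable view: per-customer spend via a join plus an aggregate.
CREATE VIEW customer_spend AS
  SELECT c.name AS name, SUM(o.total) AS spend
  FROM customers c JOIN orders o ON o.customer_id = c.id
  GROUP BY c.id;
""")
rows = conn.execute(
    "SELECT name, spend FROM customer_spend ORDER BY spend DESC").fetchall()
print(rows)  # [('Asha', 350.0), ('Ravi', 80.0)]
```

Views like this are one way the "clean and reusable SQL" requirement tends to be met in practice: callers query `customer_spend` without repeating the join.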

Job posted by
Ebin Zacharia Varghese

Data Engineer

Founded 2017
Products and services
Location
Hyderabad
Experience
0 - 3 years
Salary
2 - 13 lacs/annum

Job description

Mandatory:
- Deep knowledge of a programming language, ideally Python
- Deep knowledge of selenium, numpy, openpyxl, beautifulsoup, pandas, and sklearn
- Knowledge of at least one web framework, ideally a Python-friendly one, e.g. Django, Flask, or Pyramid
- Ability to extract from multiple sources and apply heavy treatment to a wide array of tables
- Familiarity with event-driven and object-oriented programming
- Ability to integrate multiple data sources and databases into one system
- Understanding of fundamental design principles for scalable applications

Preferred:
- High proficiency with at least one code repository
- Experience leading and managing deadlines and responsibilities for own work

You will be expected to:
- Understand the existing overall architecture and improve it according to the latest standards and best practices, e.g. the existing Extract-Treat-Load scripts
- Use best practices such as code reviews, push/pull workflows, and agile development methods
- Quickly improve overall code quality through the development of automated testing

We are also working on some exciting machine learning applications that you can join if interested.
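The role above leans on Python data extraction (the listing names selenium and beautifulsoup). As a dependency-free sketch of the same idea, here is a minimal table-cell extractor using only the stdlib's `html.parser`; in practice the listed libraries would do this far more robustly:

```python
from html.parser import HTMLParser

class CellCollector(HTMLParser):
    """Collect the text of every <td> cell in an HTML fragment."""
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell and data.strip():
            self.cells.append(data.strip())

parser = CellCollector()
parser.feed("<table><tr><td>Hyderabad</td><td>0 - 3 years</td></tr></table>")
print(parser.cells)  # ['Hyderabad', '0 - 3 years']
```

This is the event-driven style the listing also asks for: the parser fires `handle_*` callbacks as tags stream past, rather than building the whole tree first.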

Job posted by
Usha V

Postgres Database Administrator

Founded 1998
Products and services
Location
Bengaluru (Bangalore)
Experience
4 - 10 years
Salary
5 - 30 lacs/annum

We are looking to hire an outstanding individual to join our engineering team to manage and support the database infrastructure.

What is the job like?
- Manage the Postgres server database product lifecycle in production and pre-production environments.
- Monitor system health and performance, and ensure high performance, availability, and security.
- Apply data modelling techniques to ensure development and implementation support efforts meet integration and performance expectations.
- Independently analyse, solve, and correct issues at high priority.
- Assist developers with complex query tuning and schema design.
- Provide 24x7 support for the production database and help in outage resolution.

What do we look for?
- 4+ years of strong experience with performance tuning and optimization.
- Experience with high availability, maintenance, and disaster recovery plans.
- Good understanding of and experience with backup, restore, and recovery models.

Job posted by
Richa Pancholy
Want to apply for this role at Grand Hyper?
Hiring team responds within a day