Database Architect
Grand Hyper
Posted by Rahul Malani
5 - 10 yrs
₹10L - ₹20L / yr
Full time
Bengaluru (Bangalore)
Skills
Data Warehouse (DWH)
Apache Hive
ETL
DWH Cloud
Hadoop
Spark
MongoDB
PostgreSQL
The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include developing Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key Responsibilities:
- Create a GRAND Data Lake and Warehouse that pools all the data from GRAND's different regions and stores in the GCC
- Measure source data quality, and enrich and report on data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the cloud DWH (AWS/Azure/Google) and its infrastructure

Skills Needed:
- Very strong SQL; demonstrated experience with RDBMS and NoSQL stores (e.g., PostgreSQL, MongoDB); Unix shell scripting preferred
- Experience with UNIX and comfortable working with the shell (bash or Korn shell preferred)
- Good understanding of data warehousing concepts and big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Align with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments
- Work with data delivery teams to set up new Hadoop users, including creating Linux users and setting up and testing HDFS, Hive, Pig, and MapReduce access
- Cluster maintenance, including creation and removal of nodes, using tools like Ganglia, Nagios, and Cloudera Manager Enterprise
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screen Hadoop cluster job performance and plan capacity
- Monitor Hadoop cluster connectivity and security
- File system management and monitoring; HDFS support and maintenance
- Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required
- Define, develop, document, and maintain Hive-based ETL mappings and scripts (see the sketch below)
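A hedged illustration of the last item above: a minimal PySpark job that reads a raw Hive table, applies a data-quality filter, and writes an aggregated warehouse table. The table names and columns (raw_sales, dwh.sales_daily, store_id, amount, sold_at) are hypothetical placeholders, not from the posting.

```python
# Minimal sketch of a Hive-based ETL job in PySpark (hypothetical schema).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("grand-dwh-etl-sketch")
    .enableHiveSupport()  # read/write Hive tables via the metastore
    .getOrCreate()
)

# Extract: raw point-of-sale events landed in the data lake.
raw = spark.table("raw_sales")

# Transform: basic data-quality filter, then daily revenue per store.
daily = (
    raw.filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
       .groupBy("store_id", F.to_date("sold_at").alias("sale_date"))
       .agg(F.sum("amount").alias("revenue"), F.count("*").alias("txn_count"))
)

# Load: publish the conformed table to the warehouse schema.
daily.write.mode("overwrite").saveAsTable("dwh.sales_daily")
```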

About Grand Hyper

Founded: 2017
Size: 20-100
Stage: Raised funding

The group currently operates Grand Shopping Malls, Grand Hypermarkets, and Grand Xpress in the Middle East and India.
Connect with the team: Rahul Malani

Similar jobs

Career Forge
Posted by Mohammad Faiz
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 7 yrs
₹12L - ₹15L / yr
Python
Apache Spark
PySpark
Data engineering
ETL
+10 more

🚀 Exciting Opportunity: Data Engineer Position in Gurugram 🌐


Hello,


We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!


Position: Data Engineer  

Location: Gurugram (Gurgaon)  

Experience: 5+ years 


Key Skills:

- Python

- Spark, PySpark

- Data Governance

- Cloud (AWS/Azure/GCP)


Main Responsibilities:

- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.

- Implement ETL processes for telemetry-based and stationary test data (see the sketch after this list).

- Support in defining data governance, including data lifecycle management.

- Develop large-scale data processing engines and real-time search and analytics based on time series data.

- Ensure technical, methodological, and quality standards are met.

- Support CI/CD processes.

- Foster know-how development and transfer, and the continuous improvement of leading technologies within Data Engineering.

- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.
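Purely as a hedged illustration of the telemetry ETL responsibility above, a small Pandas sketch that loads raw signal data, cleans it, and resamples it to one-second aggregates; the file paths and column names (timestamp, signal, value) are hypothetical.

```python
# Sketch of a telemetry ETL step (hypothetical paths and columns).
import pandas as pd

# Extract: raw telemetry rows (timestamp, signal name, value).
raw = pd.read_parquet("/data/raw/telemetry.parquet")
raw["timestamp"] = pd.to_datetime(raw["timestamp"])

# Transform: drop bad readings, pivot signals to columns, resample to 1s.
clean = raw.dropna(subset=["value"])
wide = clean.pivot_table(index="timestamp", columns="signal", values="value")
resampled = wide.resample("1s").mean()

# Load: write the conformed table for downstream analytics.
resampled.to_parquet("/data/curated/telemetry_1s.parquet")
```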


Qualification Requirements:

- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.

- Proficiency in Python and the PyData stack (Pandas/NumPy).

- Experience in high-level programming languages (C#/C++/Java).

- Familiarity with scalable processing environments like Dask (or Spark).

- Proficient in Linux and scripting languages (Bash scripts).

- Experience in containerization and orchestration of containerized services (Kubernetes).

- Education in database technologies (SQL/OLAP and NoSQL).

- Interest in Big Data storage technologies (Elastic, ClickHouse).

- Familiarity with Cloud technologies (Azure, AWS, GCP).

- Fluent English communication skills (speaking and writing).

- Ability to work constructively with a global team.

- Willingness to travel for business trips during development projects.


Preferable:

- Working knowledge of vehicle architectures, communication, and components.

- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).

- Experience in time-series processing.


How to Apply:

Interested candidates, please share your updated CV/resume with me.


Thank you for considering this exciting opportunity.

Emids Technologies
Posted by Rima Mishra
Bengaluru (Bangalore)
5 - 10 yrs
₹4L - ₹18L / yr
Jasper
JasperReports
ETL
JasperSoft
OLAP
+3 more

Job Description - Jasper 

  • Knowledge of Jasper report server administration, installation and configuration
  • Knowledge of report deployment and configuration
  • Knowledge of Jaspersoft Architecture and Deployment
  • Knowledge of User Management in Jaspersoft Server
  • Experience in developing Complex Reports using Jaspersoft Server and Jaspersoft Studio
  • Understanding of the overall architecture of Jaspersoft BI
  • Experience in creating Ad Hoc Reports, OLAP, Views, Domains
  • Experience in report server (Jaspersoft) integration with web application
  • Experience with the JasperReports Server web services API and the Jaspersoft Visualize.js API (see the sketch below)
  • Experience in creating dashboards with visualizations
  • Experience in security and auditing, metadata layer
  • Experience in Interacting with stakeholders for requirement gathering and Analysis
  • Good knowledge of ETL design and development, and of logical and physical data modeling (relational and dimensional)
  • Strong self-initiative to strive for both personal and technical excellence
  • Coordinate efforts across the product development and business analyst teams
  • Strong business and data analysis skills
  • Domain knowledge of healthcare is an advantage
  • Strong at coordinating with onshore resources on development
  • Data-oriented professional with good communication skills and a great eye for detail
  • Interpret data, analyze results and provide insightful inferences
  • Maintain relationship with Business Intelligence stakeholders
  • Strong analytical and problem-solving skills
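As a hedged illustration of the web-services integration mentioned above, a minimal Python call to JasperReports Server's REST v2 reports service; the server URL, credentials, report path, and input control are hypothetical placeholders.

```python
# Sketch: render a report to PDF via JasperReports Server's REST v2 API.
import requests

BASE = "http://localhost:8080/jasperserver"   # assumed server root
REPORT = "/reports/samples/AllAccounts"       # hypothetical report URI

resp = requests.get(
    f"{BASE}/rest_v2/reports{REPORT}.pdf",
    params={"Country": "USA"},                # report input controls, if any
    auth=("jasperadmin", "jasperadmin"),      # demo credentials; replace
    timeout=30,
)
resp.raise_for_status()

with open("AllAccounts.pdf", "wb") as f:
    f.write(resp.content)
```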


Ashnik
Agency job
via InvokHR by Sandeepa Kasala
Pune
5 - 15 yrs
₹10L - ₹17L / yr
PostgreSQL
DBA
Product support
Open Source Contribution
Relational Database (RDBMS)
Database Administrator 

ABOUT US

Established in 2009, Ashnik is a leading open-source solutions and consulting company in South East Asia and India, headquartered in Singapore. We enable digital transformation for large enterprises through our design, architecting, and solution skills. Over 100 large enterprises in the region have acknowledged our expertise in delivering solutions using key open-source technologies. Our offerings form a critical part of digital transformation, Big Data platforms, cloud and web acceleration, and IT modernization. We represent EDB, Pentaho, Docker, Couchbase, MongoDB, Elastic, NGINX, Sysdig, Redis Labs, Confluent, and HashiCorp as their key partners in the region. Our team members bring decades of experience in delivering confidence to enterprises in adopting open-source software and are known for their thought leadership.

As a team culture, Ashnik is a family for its team members. Each member brings a different perspective, new ideas, and a diverse background. Yet together we all strive for one goal: to deliver the best solutions to our customers using open-source software. We passionately believe in the power of collaboration. Through an open platform of idea exchange, we create a vibrant environment for growth and excellence.

THE POSITION

Ashnik is looking for talented and passionate people to be part of the team for an upcoming project at client location.

QUALIFICATION AND EXPERIENCE

  • Preferably 4 or more years of working experience on production PostgreSQL databases
  • Experience of working in a production support environment
  • Engineering or Equivalent degree
  • Passion for open-source technologies is desired

 

ADDITIONAL SKILLS

  • Install and configure PostgreSQL and EnterpriseDB
  • Technical capabilities across PostgreSQL 9.x, 10.x, and 11.x
  • Server tuning
  • Troubleshooting of database issues
  • Linux shell scripting
  • Install, configure, and maintain failover mechanisms
  • Backup and restoration, including point-in-time database recovery
  • A demonstrable ability to articulate and sell the benefits of modern platforms, software and technologies.
  • A real passion for being curious and a continuous learner. You are someone that invests in yourself as much as you invest in your professional relationships.

RESPONSIBILITIES

  • Monitoring database performance (see the sketch below)
  • Optimizing queries and handling escalations
  • Analysing and assessing the impact and risk of low-to-medium-risk changes on high-profile production databases
  • Implementing security features
  • DR implementation and switchover
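A hedged sketch of the monitoring responsibility above: listing the longest-running active queries from pg_stat_activity with psycopg2. The connection details are hypothetical placeholders.

```python
# Sketch: surface the longest-running active queries (hypothetical DSN).
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="appdb",
                        user="monitor", password="secret")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT pid, now() - query_start AS runtime, state, query
        FROM pg_stat_activity
        WHERE state <> 'idle'
        ORDER BY runtime DESC NULLS LAST
        LIMIT 10;
    """)
    for pid, runtime, state, query in cur.fetchall():
        print(pid, runtime, state, query[:80])
conn.close()
```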

LOCATION: Pune

Experience: 8 yrs plus

Package: up to 17 LPA

Zepto
Agency job
via SUVI BUSINESS VENTURE by VINOTH KUMAR
Bengaluru (Bangalore)
4 - 9 yrs
₹10L - ₹15L / yr
Java
Python
Data Structures
Algorithms
PostgreSQL
+2 more
❖ Closely collaborate with product, design, and business teams to understand product ideas and business needs, and help deliver these as a series of ultra-fast experiments.
❖ Architect and implement backend services with high reliability and scalability.
❖ Contribute to system architecture and database design.
❖ Set up best practices for development and champion their adoption.
❖ Write quality documentation and handle conflicts well to build consensus.
❖ Learn about new technologies and incorporate them.
❖ Mentor young minds and foster team spirit.
We can provide up to ₹35 LPA, as per the candidate's experience and knowledge.
Indium Software
Posted by Karunya P
Bengaluru (Bangalore), Hyderabad
1 - 9 yrs
₹1L - ₹15L / yr
SQL
Python
Hadoop
HiveQL
Spark
+1 more

Responsibilities:

* 3+ years of data engineering experience: design, develop, deliver, and maintain data infrastructures.
* SQL specialist: strong knowledge of and seasoned experience with SQL queries.
* Languages: Python.
* Good communicator, shows initiative, works well with stakeholders.
* Experience working closely with data analysts, providing the data they need and guiding them on issues.
* Solid ETL experience with Hadoop/Hive/PySpark/Presto/Spark SQL (see the sketch below).
* Solid communication and articulation skills.
* Able to handle stakeholders independently, with minimal intervention from the reporting manager.
* Develop strategies to solve problems in logical yet creative ways.
* Create custom reports and presentations accompanied by strong data visualization and storytelling.
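As a hedged illustration of the Hive/PySpark/Spark SQL item above, a minimal Spark SQL aggregation over a CSV extract; the file path and columns (customer_id, total) are hypothetical.

```python
# Sketch: a small Spark SQL aggregation (hypothetical path and columns).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("indium-sql-sketch").getOrCreate()

# Register the raw extract as a temporary view for SQL access.
orders = spark.read.csv("/data/orders.csv", header=True, inferSchema=True)
orders.createOrReplaceTempView("orders")

top_customers = spark.sql("""
    SELECT customer_id, SUM(total) AS spend
    FROM orders
    GROUP BY customer_id
    ORDER BY spend DESC
    LIMIT 10
""")
top_customers.show()
```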

 

We would be excited if you have:

 

* Excellent communication and interpersonal skills

* Ability to meet deadlines and manage project delivery

* Excellent report-writing and presentation skills

* Critical thinking and problem-solving capabilities

Remote, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Bengaluru (Bangalore), Mumbai
2 - 8 yrs
₹9L - ₹15L / yr
DevOps
Kubernetes
Windows Azure
Google Cloud Platform (GCP)
Jenkins
+11 more
Tools you’ll need to succeed in this role:
● Good experience with continuous integration and deployment tools like Jenkins, Spinnaker, etc.
● Ability to understand problems and craft maintainable solutions.
● Working cross-functionally with a broad set of business partners to understand and integrate their API or data flow systems with Xeno, so a minimal understanding of data and API integration is a must.
● Experience with Docker and microservice-based architecture using orchestration platforms like Kubernetes.
● Understanding of public cloud; we use Azure and Google Cloud.
● Familiarity with web servers like Apache, nginx, etc.
● Knowledge of monitoring tools such as Prometheus, Grafana, New Relic, etc.
● Scripting in languages like Python, Golang, etc. is required.
● Some knowledge of database technologies like MySQL and Postgres is required.
● Understanding of Linux, specifically Ubuntu.
● Bonus points for knowledge and best practices related to security.
● Knowledge of Java or NodeJS would be a significant advantage.


Initially, some of the projects you’d get to own are:
● Audit and improve the overall security of the infrastructure.
● Setting up different environments for different teams, like QA, Development, and Business (see the sketch below).
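A hedged sketch of the environments project above: creating per-team namespaces with the official Kubernetes Python client. The team names and labels are hypothetical, and a working kubeconfig is assumed.

```python
# Sketch: per-team namespaces via the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # assumes kubectl access is configured

v1 = client.CoreV1Api()
for team in ("qa", "development", "business"):
    ns = client.V1Namespace(
        metadata=client.V1ObjectMeta(name=f"{team}-env",
                                     labels={"team": team})
    )
    v1.create_namespace(ns)
    print(f"created namespace {team}-env")
```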
MNC
Bengaluru (Bangalore)
3 - 9 yrs
₹8L - ₹16L / yr
Windows Azure
Hadoop
Spark
Data Structures
ADF
+1 more
  • Working knowledge of setting up and running HDInsight applications
  • Hands-on experience in Spark, Scala, and Hive
  • Hands-on experience in ADF (Azure Data Factory); see the sketch below
  • Hands-on experience in Big Data and the Hadoop ecosystem
  • Exposure to Azure service categories like PaaS components and IaaS subscriptions
  • Ability to design and develop ingestion and processing frameworks for ETL applications
  • Hands-on experience in PowerShell scripting and deployment on Azure
  • Experience in performance tuning and memory configuration
  • Adaptable to learning and working on new technologies
  • Good written and spoken communication
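A hedged sketch of driving ADF programmatically: triggering a pipeline run through the Azure management REST API with azure-identity. The subscription, resource group, factory, and pipeline names are hypothetical placeholders.

```python
# Sketch: trigger an Azure Data Factory pipeline run (hypothetical names).
import requests
from azure.identity import DefaultAzureCredential

SUB, RG = "<subscription-id>", "<resource-group>"
FACTORY, PIPELINE = "my-data-factory", "ingest_daily"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
url = (
    f"https://management.azure.com/subscriptions/{SUB}"
    f"/resourceGroups/{RG}/providers/Microsoft.DataFactory"
    f"/factories/{FACTORY}/pipelines/{PIPELINE}/createRun"
)
resp = requests.post(
    url,
    params={"api-version": "2018-06-01"},
    headers={"Authorization": f"Bearer {token.token}"},
    json={},  # pipeline parameters would go here
    timeout=30,
)
resp.raise_for_status()
print("runId:", resp.json()["runId"])
```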
European MNC
Agency job
via Kavayah People Consulting by Kavita Singh
Pune
3 - 8 yrs
₹8L - ₹15L / yr
ETL
Data Warehouse (DWH)
SQL
Technical support
The Support Engineer (L2) will serve as a technical support champion for both internal and external customers. 
Key Responsibilities
Support mission critical applications and technologies
Adhere to agreed SLAs.
 
Required Experience, Skills and Qualifications
3-8 years of relevant experience
Proven track record of supporting ETL/Data Warehouse/Business Intelligence solutions
Strong SQL / Unix skills
Excellent written and verbal communication
High degree of analytical and problem-solving skills
Exposure to handling customers from various geographies
Strong debugging and troubleshooting skills
Ability to work with minimum supervision
Team player who shares ideas and resources
Tools and Technologies
ETL Tools: Talend or Informatica experience
BI Tools: Experience supporting Tableau or Jaspersoft or Pentaho or Qlikview
Database: Experience in Oracle or any RDBMS
NaviSite
Posted by Vijay Ganesh
Remote, NCR (Delhi | Gurgaon | Noida)
8 - 13 yrs
₹15L - ₹25L / yr
Python
Angular (2+)
Javascript
Fullstack Developer
Django
+8 more
Functional:
  • Should have experience in application architecture and design
  • Proficiency with developing APIs in Python (see the sketch after this list)
  • Proficiency with frontend web development (JavaScript, CSS, HTML)
  • Should have 2-4 years’ experience in leading a team
  • Experience with object-oriented programming (OOP) concepts in Python
  • Experienced with full software development life-cycle, programming, database design and agile methodologies
  • Experience in using Design Patterns such as MVC and frameworks such as Django, Flask
  • Ability to successfully multi-task and prioritize work.
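The sketch referenced above: a minimal Python API endpoint with Flask. The route and payload are illustrative only, not from the posting.

```python
# Sketch: a minimal Python API endpoint with Flask (illustrative route).
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Trivial liveness endpoint a service might expose.
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(port=8000, debug=True)
```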
Good to have experience:
  • Experience with developing applications with Angular (2+)
  • Experience with Docker and Kubernetes
  • Experience with SQL databases, especially PostgreSQL
Generic:
  • Good communication skills
Behavioural:
  • Good leader
  • Adaptable / flexible
Areas of Responsibility:
  • Candidate will be part of the Product Engineering global delivery team, working as a Technical Lead and leading a team of highly talented engineers.
  • Candidate would be required (but not limited) to: 
  • Team management, mentoring, prioritization of work
  • Work with Architecture group for defining Product Roadmap
  • Requirement gathering with stakeholders for product enhancement
  • Designing, Developing, and documenting the solution
  • Providing support for any application issue like performance, availability
  • Contributing to actual development of user stories and features etc.
Security Related:
  • Ensuring conformity to corporate security and compliance objectives.
  • Identifying and implementing service improvement opportunities.
  • Responsible for informing the ‘business impact’ of security within the team
  • Promptly report security weaknesses or incidents to the Practice Managers/Leads
Securonix
Posted by Ramakrishna Murthy
Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹15L / yr
Hadoop
Cloudera
Hortonworks
Securonix is a security analytics product company. Our product provides real-time behavior analytics capabilities and uses the following Hadoop components: Kafka, Spark, Impala, and HBase. We support all our customers globally, including very large deployments, with full access to the cluster. Cloudera certification is a big plus.
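As a hedged nod to the Kafka component mentioned above, a minimal consumer using the kafka-python package; the broker address and topic name are hypothetical.

```python
# Sketch: consume security events from Kafka (hypothetical broker/topic).
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "security-events",
    bootstrap_servers="localhost:9092",
    group_id="analytics-sketch",
    auto_offset_reset="earliest",
)
for msg in consumer:
    # Each record would feed the behavior-analytics pipeline downstream.
    print(msg.topic, msg.partition, msg.offset, msg.value[:80])
```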