Database Architect
5-6 years of experience
Good knowledge of relational and non-relational databases
Ability to write complex queries, identify problematic queries, and provide solutions (see the sketch below)
Good hands-on experience with database tools
Experience with both SQL and NoSQL databases such as SQL Server, PostgreSQL, MongoDB and MariaDB
Experience with data model preparation and database structuring
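As a rough illustration of the "identify problematic queries and provide a solution" line above, here is a minimal Python sketch that runs PostgreSQL's EXPLAIN ANALYZE against a candidate query and flags it when the measured runtime crosses a threshold. The connection details, table names and threshold are hypothetical placeholders, not part of the posting.

```python
import json
import psycopg2  # PostgreSQL driver; any DB-API driver would work similarly

# Hypothetical connection parameters -- replace with real credentials.
conn = psycopg2.connect(host="localhost", dbname="appdb",
                        user="analyst", password="secret")

SLOW_MS = 500.0  # arbitrary threshold for "problematic"

def profile_query(sql: str) -> float:
    """Run EXPLAIN ANALYZE (which executes the query) and return total time in ms."""
    with conn.cursor() as cur:
        cur.execute("EXPLAIN (ANALYZE, FORMAT JSON) " + sql)
        raw = cur.fetchone()[0]
        plan = raw if isinstance(raw, list) else json.loads(raw)
        return plan[0]["Execution Time"]  # reported by PostgreSQL

# Hypothetical query to check.
sql = "SELECT * FROM orders o JOIN customers c ON c.id = o.customer_id"
elapsed = profile_query(sql)
if elapsed > SLOW_MS:
    print(f"Problematic query ({elapsed:.1f} ms): consider indexes or a rewrite")
```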
About: This role is for a leading manufacturing company.
Requirements:
- 2+ years of experience (4+ for Senior Data Engineer) with system/data integration, development, or implementation of enterprise and/or cloud software.
- Engineering degree in Computer Science, Engineering or a related field.
- Extensive hands-on experience with data integration/EAI technologies (files, APIs, queues, streams), ETL tools, and building custom data pipelines (see the sketch after this list).
- Demonstrated proficiency with Python, JavaScript and/or Java.
- Familiarity with version control/SCM is a must (experience with Git is a plus).
- Experience with relational and NoSQL databases (any vendor).
- Solid understanding of cloud computing concepts.
- Strong organisational and troubleshooting skills with attention to detail.
- Strong analytical ability, judgment and problem-solving techniques.
- Interpersonal and communication skills with the ability to work effectively in a cross-functional team.
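The posting asks for experience building custom data pipelines. As a minimal, hedged sketch of what that means in practice, the following Python fragment extracts a CSV file, applies a trivial transform, and loads the rows into a database; the file layout, column names and SQLite target are assumptions for illustration only.

```python
import csv
import sqlite3  # stand-in target; a real pipeline would load a warehouse

def run_pipeline(csv_path: str, db_path: str) -> int:
    """Extract a CSV, transform its rows, load them, and return the row count."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, amount REAL)"
    )
    loaded = 0
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            # Transform: skip malformed rows, normalise the amount field.
            try:
                rec = (int(row["id"]), round(float(row["amount"]), 2))
            except (KeyError, ValueError):
                continue  # a real pipeline would route these to a reject file
            conn.execute("INSERT OR REPLACE INTO orders VALUES (?, ?)", rec)
            loaded += 1
    conn.commit()
    conn.close()
    return loaded
```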
1. Core Responsibilities
· Build, maintain and manage a team capable of delivering the data operations needs of the bank's data teams and other stakeholders, ensuring the team is right-sized, motivated and focused on key goals and SLAs
· Maintain, develop and enhance the tools and environments used by the data teams to ensure availability of development, test and production environments
· Manage all data operations database changes, observing industry-standard software development life cycle approaches, with development, testing and deployment supported by comprehensive documentation
· Manage releases of code from the engineering team to ensure separation of duties
· As a subject matter expert, take a key contributing role in data initiatives such as infrastructure development, software tool evaluation and intra-company data integration
· Assess system performance and process efficiency, making recommendations for change
· Identify gaps and technical issues affecting data flows and lead the development of solutions to them
· Work with internal and external stakeholders to ensure that data solutions are reliably populated on time
· Ensure that you fully understand and comply with the organisation's Risk Management Policies as they relate to your area of responsibility, and demonstrate in your day-to-day work that you put customers at the heart of everything you do
· Ensure that you fully understand and comply with the organisation's Data Governance Policies as they relate to your area of responsibility, and demonstrate in your day-to-day work that you treat data as an important corporate asset which must be protected and managed
· Maintain the company's compliance standards and ensure timely completion of all mandatory online training modules and attestations
2. Experience Requirements
· 5 years' previous experience supporting data warehousing and data lake solutions is essential
· 5 years' previous experience with Microsoft SQL Server SSIS is desirable
· Experience with monitoring and incident management is essential
· Experience of managing a technical team is desirable
· Experience working in an IT environment
· Experience working within banking or other financial institutions is desirable
3. Knowledge Requirements
· Strong background working in data teams
· Robust knowledge of RDBMS principles and how best to manage environments
· Strong knowledge of a standardised SDLC is desirable
Technical & Business Expertise:
- Hands-on integration experience with SSIS/MuleSoft
- Hands-on experience with Azure Synapse
- Proven advanced experience writing SQL for SQL Server (see the sketch after this list)
- Proven advanced understanding of data lakes
- Proven intermediate proficiency in Python or a similar programming language
- Intermediate understanding of cloud platforms (GCP)
- Intermediate understanding of data warehousing
- Advanced understanding of source control (GitHub)
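As a hedged sketch of the "advanced SQL for SQL Server" expectation, the fragment below uses pyodbc to run a window-function query. The connection string, schema and table are placeholders, not details from the posting.

```python
import pyodbc  # common Python driver for SQL Server

# Hypothetical connection string -- server, database and credentials are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;DATABASE=sales;"
    "UID=reader;PWD=secret;TrustServerCertificate=yes"
)

# "Advanced SQL" here means window functions: rank customers by spend per region.
# The sales.orders schema is assumed for illustration.
SQL = """
SELECT region,
       customer_id,
       SUM(amount) AS total_spend,
       RANK() OVER (PARTITION BY region ORDER BY SUM(amount) DESC) AS spend_rank
FROM   sales.orders
GROUP  BY region, customer_id
"""

for row in conn.cursor().execute(SQL):
    print(row.region, row.customer_id, row.total_spend, row.spend_rank)
```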
• Working knowledge of XML, JSON, shell scripting and other DBMS scripts
• Hands-on experience with Oracle 11g and 12c; working knowledge of Oracle 18c and 19c
• Analysis, design, coding, testing, debugging and documentation; complete knowledge of the Software Development Life Cycle (SDLC)
• Writing complex queries, stored procedures, functions and packages (see the sketch after this list)
• Knowledge of REST services, UTL functions, DBMS functions and data integration is required
• Good knowledge of table-level partitioning and row locks, and experience with OLTP
• Awareness of ETL tools, data migration and data mapping functionality
• Understand business requirements and transform/design them into business solutions; perform data modelling and implement the business rules using Oracle database objects
• Define source-to-target data mapping and data transformation logic as per the business need
• Should have worked on materialised view creation and maintenance; experience in performance tuning and impact analysis is required
• Monitoring and optimizing database performance; planning for backup and recovery of database information; maintaining archived data; backing up and restoring databases
• Hands-on experience with Oracle SQL Developer
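To make the "stored procedures, functions and packages" bullet concrete, here is a minimal sketch using python-oracledb (the successor to cx_Oracle) that creates and calls a toy PL/SQL procedure. The DSN, credentials, procedure and orders table are hypothetical.

```python
import oracledb  # Oracle's Python driver

# Hypothetical DSN and credentials.
conn = oracledb.connect(user="app", password="secret", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# A toy PL/SQL stored procedure of the kind the posting mentions:
# it counts rows in a (hypothetical) orders table for one customer.
cur.execute("""
CREATE OR REPLACE PROCEDURE count_orders (
    p_customer_id IN  NUMBER,
    p_total       OUT NUMBER
) AS
BEGIN
    SELECT COUNT(*) INTO p_total FROM orders WHERE customer_id = p_customer_id;
END;
""")

# Call it from Python: OUT parameters come back through bind variables.
total = cur.var(int)
cur.callproc("count_orders", [42, total])
print("orders for customer 42:", total.getvalue())
```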
1. Ability to work independently and to set priorities while managing several projects simultaneously; strong attention to detail is essential.
2. Collaborates with Business Systems Analysts and/or directly with key business users to ensure business requirements and report specifications are documented accurately and completely.
3. Develop data field mapping documentation.
4. Document data sources and processing flow.
5. Ability to design, refine and enhance existing reports from source systems or the data warehouse.
6. Ability to analyze and optimize data, including the data deduplication required for reports.
7. Analysis and rationalization of reports.
8. Support QA and UAT teams in defining test scenarios and clarifying requirements.
9. Effectively communicate the results of data analysis to internal and external customers to support decision making.
10. Follows established SDLC, change control, release management and incident management processes.
11. Perform source data analysis and assessment.
12. Perform data profiling to capture business and technical rules (see the sketch after this list).
13. Track and help to remediate issues and defects due to data quality exceptions.
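For item 12, a minimal data-profiling sketch in Python with pandas is shown below: it summarises each column and surfaces candidate business/technical rules. The input file and its columns are assumptions for illustration.

```python
import pandas as pd

# Hypothetical extract to profile -- in practice this would come from a source system.
df = pd.read_csv("customer_extract.csv")

# Basic profile per column: inferred type, null rate, distinct count, sample value.
profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "null_pct": (df.isna().mean() * 100).round(1),
    "distinct": df.nunique(),
    "sample": df.iloc[0] if len(df) else None,
})
print(profile)

# Candidate rules discovered from the data itself:
for col in df.columns:
    if df[col].notna().all():
        print(f"{col}: candidate NOT NULL rule")
    if df[col].is_unique:
        print(f"{col}: candidate unique-key rule")
```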
● Good communication and collaboration skills with 4-7 years of experience.
● Ability to code and script, with a strong grasp of CS fundamentals and excellent problem-solving abilities.
● Comfort with frequent, incremental code testing and deployment; data management skills.
● Good understanding of RDBMS.
● Experience in building data pipelines and processing large datasets.
● Knowledge of building web scraping and data mining tools is a plus (see the sketch after this list).
● Working knowledge of open-source data stores such as MySQL, Solr, Elasticsearch and Cassandra would be a plus.
● Expertise in Python programming.
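For the web-scraping bullet, here is a minimal, hedged sketch using requests and BeautifulSoup. The URL and the assumed page markup are placeholders; a real scraper must also respect the target site's terms and robots.txt.

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical target page -- replace with a site you are permitted to scrape.
URL = "https://example.com/products"

resp = requests.get(URL, timeout=10, headers={"User-Agent": "demo-scraper/0.1"})
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")

# Assumed markup: each product sits in <div class="product"> with a name and price.
for item in soup.select("div.product"):
    name = item.select_one(".name")
    price = item.select_one(".price")
    if name and price:
        print(name.get_text(strip=True), price.get_text(strip=True))
```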
Role and responsibilities
● Inclined towards working in a start-up environment.
● Design and build robust and scalable data engineering solutions for structured and unstructured data to deliver business insights, reporting and analytics.
● Expertise in troubleshooting and debugging, resolving data completeness and quality issues, and scaling overall system performance.
● Build robust APIs that power our delivery points (dashboards, visualizations and other integrations), as sketched below.
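As a minimal sketch of "APIs that power dashboards", the fragment below exposes one metrics endpoint with FastAPI. The framework choice, route and in-memory data are assumptions; a real service would query the warehouse.

```python
from fastapi import FastAPI  # one common choice; the posting does not name a framework

app = FastAPI()

# Hypothetical in-memory metrics; a real service would query the warehouse.
DAILY_SIGNUPS = {"2024-01-01": 120, "2024-01-02": 135}

@app.get("/metrics/signups")
def signups() -> dict:
    """Feed a dashboard widget with the daily signup series."""
    return {"series": [{"date": d, "count": c} for d, c in DAILY_SIGNUPS.items()]}

# Run locally with: uvicorn app:app --reload   (assumes this file is app.py)
```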
• Drive the data engineering implementation
• Strong experience in building data pipelines
• AWS stack experience is a must
• Deliver conceptual, logical and physical data models for the implementation teams
• Strong SQL skills are a must: advanced working knowledge of SQL, experience working with a variety of relational databases, and SQL query authoring
• AWS cloud data pipeline experience is a must: data pipelines and data-centric applications using distributed storage platforms like S3 and distributed processing platforms like Spark, Airflow and Kafka (see the sketch after this list)
• Working knowledge of AWS technologies such as S3, EC2, EMR, RDS, Lambda and Elasticsearch
• Ability to use a major programming language (e.g. Python or Java) to process data for modelling
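To ground the S3/Spark pipeline bullet, here is a minimal PySpark sketch that reads raw events from S3, aggregates them into a daily summary, and writes the result back to the lake. The bucket names, paths and schema are hypothetical; on EMR the S3 connector and credentials are typically already configured.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-pipeline-sketch").getOrCreate()

# Hypothetical bucket and layout -- placeholders, not from the posting.
events = spark.read.parquet("s3a://my-data-lake/raw/events/")

# Typical pipeline step: aggregate raw events into a daily summary and
# write it back partitioned by date for downstream consumers.
daily = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("day", "event_type")
    .agg(F.count("*").alias("event_count"))
)
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3a://my-data-lake/curated/daily_events/"
)
```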