Meta-data management Job Openings in Bangalore (Bengaluru)
Apply to 4+ Meta-data management jobs in Bangalore (Bengaluru) on CutShort.io, across top companies like Google, Amazon & Adobe.
Review Criteria
- Strong Dremio / Lakehouse Data Architect profile
- 5+ years of experience in Data Architecture / Data Engineering, including at least 3 years of hands-on Dremio experience
- Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
- Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, and Iceberg, along with distributed query planning concepts
- Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
- Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
- Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
- Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
- Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies
Preferred
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments
Job Specific Criteria
- CV Attachment is mandatory
- How many years of experience do you have with Dremio?
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 days WFO?
- The virtual interview requires your video to be on; are you okay with that?
Role & Responsibilities
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns (see the sketch after this list).
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
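To make the reflections bullet concrete, here is a minimal sketch of what reflection tuning can look like, submitted through Dremio's REST SQL endpoint from Python. The host, credentials, dataset, and column names are hypothetical, and both the REST paths and the reflection DDL should be verified against the documentation for the Dremio version in use.

```python
# Hypothetical sketch: submitting reflection DDL to Dremio's REST SQL API.
# Host, credentials, dataset, and column names are placeholders; check the
# REST paths and reflection syntax against your Dremio version's docs.
import requests

DREMIO = "https://dremio.example.com:9047"  # placeholder coordinator URL

# Authenticate (Dremio's login endpoint returns a session token).
login = requests.post(
    f"{DREMIO}/apiv2/login",
    json={"userName": "architect", "password": "********"},
)
login.raise_for_status()
headers = {"Authorization": "_dremio" + login.json()["token"]}

# An aggregate reflection pre-computes GROUP BY results so dashboard
# queries hit the materialization instead of raw Parquet on S3/ADLS.
ddl = """
ALTER TABLE lake.sales.orders_curated
CREATE AGGREGATE REFLECTION agg_orders_daily
USING DIMENSIONS (order_date, region)
      MEASURES (order_total (SUM, COUNT))
"""

resp = requests.post(f"{DREMIO}/api/v3/sql", json={"sql": ddl}, headers=headers)
resp.raise_for_status()
print("job id:", resp.json()["id"])  # poll /api/v3/job/{id} for status
```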
Ideal Candidate
- Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning (see the EXPLAIN sketch after this list).
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
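As an illustration of the distributed query planning point above, the sketch below uses the Trino Python client to pull a distributed plan with EXPLAIN. The coordinator host, catalog, and table names are placeholders, and Dremio and Athena expose comparable plan output.

```python
# Hypothetical sketch: inspecting a distributed query plan with the Trino
# Python client (pip install trino). Host, catalog, and table names are
# placeholders.
import trino

conn = trino.dbapi.connect(
    host="trino.example.com",  # placeholder coordinator
    port=8080,
    user="analyst",
    catalog="hive",
    schema="sales",
)
cur = conn.cursor()

# EXPLAIN (TYPE DISTRIBUTED) shows plan fragments and the exchanges
# between workers -- the usual starting point for join-order and
# partition-pruning tuning.
cur.execute("""
    EXPLAIN (TYPE DISTRIBUTED)
    SELECT region, SUM(order_total)
    FROM orders
    WHERE order_date >= DATE '2024-01-01'
    GROUP BY region
""")
for (plan_text,) in cur.fetchall():
    print(plan_text)
```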
About Us: We are not just an ad agency or a creative agency; we are a Communication Company. Founded in 2014, Moshi Moshi is a young, creative, gutsy, and committed communication company that wants its clients to always Expect the EXTRA from it. Our primary clientele consists of startups and corporations like Ola, Zoomcar, Mercedes-Benz, ITC, Aditya Birla Group, TATA Group, MTV, IHCL, Jaquar, Sobha, Simple Energy, and Godrej, amongst others. We have a huge team of creative folks, marketers, learners, developers, and coders who believe Moshi Moshi is an experience rather than a company.
Job Role: Paid Media Executive
Experience Level: 1.5+ years
Location: Bangalore, Karnataka (On-site)
The ideal candidate will:
● Demonstrate an ability and willingness to learn new skills independently
● Possess the ability to communicate directly with clients, both verbally and in writing
● Have a strong analytical background
● Be detail oriented, highly organized, with a keen eye for consistency
● Be able to work effectively in a collaborative team environment, and independently as required
● Have a strong desire to learn and add value to the team
● Be solutions oriented
● Have worked within platforms including Google Ads, Google Analytics (GA4), Google Tag Manager, Bing Ads, Facebook Business Manager, Instagram Ads, LinkedIn Ads, and Twitter Ads
Responsibilities:
● Develop and execute strategic marketing campaigns for clients across multiple media – paid search, display, video, and social platforms
● Manage all aspects of campaign configuration, launch, and ongoing optimization – including strategy, ad copywriting, data-based optimization, budget/billing management, and ad trafficking
● Troubleshoot, problem-solve, and find creative solutions to client-specific needs
● Assist creative team with ad creation through copywriting and strategic direction
● Identify optimization opportunities through continuous testing of ad copy and landing pages, including A/B testing (see the sketch after this list)
● Ensure campaigns are meeting clearly defined conversion objectives
● Create and deliver meaningful analytics and reporting to monitor and show progress
● Maintain knowledge of industry best practices and new technologies
● Maintain team strength at all times
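For the A/B testing bullet above, a minimal two-proportion z-test is often all that is needed to decide whether an ad-copy variant's lift is statistically meaningful. The sketch below uses only the Python standard library; the click and conversion counts are made-up numbers.

```python
# Hypothetical sketch: two-proportion z-test for an ad-copy A/B test,
# standard library only. Click/conversion counts are made-up numbers.
from math import sqrt
from statistics import NormalDist

def ab_test(conv_a, n_a, conv_b, n_b):
    """Return both conversion rates and the two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_a, p_b, 2 * (1 - NormalDist().cdf(abs(z)))

# Variant A: 120 conversions / 4,800 clicks; variant B: 156 / 4,750.
rate_a, rate_b, p_value = ab_test(120, 4800, 156, 4750)
print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  p-value: {p_value:.4f}")
```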
Desired Competencies:
- Expertise in Azure Data Factory V2
- Expertise in other Azure components like Data Lake Store, SQL Database, and Databricks
- Working knowledge of Spark programming
- Good exposure to data projects dealing with data design and source-to-target documentation, including defining transformation rules
- Strong knowledge of the CI/CD process
- Experience in building Power BI reports
- Understanding of the different components: pipelines, activities, datasets, and linked services
- Exposure to dynamic configuration of pipelines using datasets and linked services (see the sketch after this list)
- Experience in designing, developing, and deploying pipelines to higher environments
- Good knowledge of file formats for flexible usage and file location objects (SFTP, FTP, local, HDFS, ADLS, Blob, Amazon S3, etc.)
- Strong knowledge of SQL queries
- Must have worked in full life-cycle development, from functional design to deployment
- Working knowledge of Git and SVN
- Good experience establishing connections with heterogeneous sources like Hadoop, Hive, Amazon, Azure, Salesforce, SAP, HANA, APIs, and various databases
- Working knowledge of different Azure resources like Storage Account, Synapse, Azure SQL Server, Azure Databricks, and Azure Purview
- Any experience with metadata management, data modelling, and related tools (Erwin, ER Studio, or others) is preferred
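As a sketch of the dynamic pipeline configuration point above: a single parameterized ADF V2 pipeline, whose datasets and linked services read paths from runtime parameters, can be triggered for any source with the azure-mgmt-datafactory SDK. All names and IDs below are placeholders.

```python
# Hypothetical sketch: triggering a parameterized ADF V2 pipeline run with
# the azure-mgmt-datafactory SDK. Subscription, resource group, factory,
# pipeline, and parameter names are placeholders for a pipeline whose
# datasets and linked services resolve file paths from runtime parameters.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="00000000-0000-0000-0000-000000000000",  # placeholder
)

# The same generic copy pipeline can land any source file by passing the
# container/path/table into parameterized datasets and linked services.
run = client.pipelines.create_run(
    resource_group_name="rg-dataplatform",
    factory_name="adf-enterprise",
    pipeline_name="pl_generic_copy",
    parameters={
        "sourceContainer": "landing",
        "sourcePath": "sap/2024/01/orders.parquet",
        "sinkTable": "stg.orders",
    },
)
print("run id:", run.run_id)
```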
Preferred Qualifications:
- Bachelor's degree in Computer Science or Technology
- Proven success in contributing to a team-oriented environment
- Proven ability to work creatively and analytically in a problem-solving environment
- Excellent communication (written and oral) and interpersonal skills
Qualifications
BE/B.Tech
KEY RESPONSIBILITIES:
You will join a team designing and building a data warehouse covering both relational and dimensional models, developing reports, data marts, and other extracts, and delivering these via SSIS, SSRS, SSAS, and Power BI. The role plays a vital part in delivering a single version of the truth for the client's data, and in delivering the MI and BI that enable both operational and strategic decision making. You will take responsibility for projects over the entire software lifecycle and work with minimum supervision, including technical analysis, design, development, and test support, as well as managing delivery to production. The initial project being resourced is the development and implementation of a data warehouse and the associated MI/BI functions.
Principal Activities:
1. Interpret written business requirements documents.
2. Specify (High-Level Design and Tech Spec), code, and write automated unit tests for new aspects of the MI/BI service.
3. Write clear and concise supporting documentation for deliverable items.
4. Become a member of the skilled development team, willing to contribute, share experiences, and learn as appropriate.
5. Review and contribute to requirements documentation.
6. Provide third-line support for internally developed software.
7. Create and maintain continuous deployment pipelines.
8. Help maintain Development Team standards and principles.
9. Contribute and share learning and experiences with the greater Development team.
10. Work within the company's approved processes, including design and service transition.
11. Collaborate with other teams and departments across the firm.
12. Be willing to travel to other offices when required.
Location – Bangalore
- Setting KPIs, monitoring key trends, and helping stakeholders by generating insights from the data delivered.
- Understanding user behaviour and performing root-cause analysis of changes in data trends across different verticals (see the sketch after this list).
- Answering business questions, identifying areas of improvement, and spotting opportunities for growth.
- Working on ad-hoc requests for data and analysis.
- Working with cross-functional teams as and when required to automate reports and create informative dashboards based on problem statements.
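As a sketch of the root-cause analysis bullet above: when a top-line KPI moves, breaking the metric down by segment usually isolates the one or two verticals driving the change. The example below is hypothetical pandas code over a placeholder orders extract.

```python
# Hypothetical sketch: a root-cause drill-down on a KPI dip with pandas.
# The file, table, and column names are placeholders for an orders extract.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])  # placeholder

# Week-over-week conversion by vertical: a metric moving at the top line
# is usually explained by one or two segments, not all of them.
orders["week"] = orders["order_date"].dt.to_period("W")
by_segment = (
    orders.groupby(["week", "vertical"])
    .agg(sessions=("session_id", "nunique"), orders=("order_id", "nunique"))
    .assign(conversion=lambda d: d["orders"] / d["sessions"])
    .reset_index()
)

# Rank segments by the size of their week-over-week conversion change.
by_segment["wow_change"] = by_segment.groupby("vertical")["conversion"].diff()
latest = by_segment[by_segment["week"] == by_segment["week"].max()]
print(latest.sort_values("wow_change").head())  # biggest drops first
```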
WHO COULD BE A GREAT FIT:
Functional Experience
- 1-2 years of experience working in Analytics as a Business or Data Analyst.
- Analytical mind with a problem-solving aptitude.
- Familiarity with Microsoft Azure and AWS, PySpark, Python, Databricks, and Metabase; understanding of APIs, data warehouses, and ETL.
- Proficient in writing complex SQL queries.
- Experience performing hands-on analysis on data across multiple datasets and databases, primarily using Excel, Google Sheets, and R.
- Ability to work proactively with cross-functional teams.