Data Security Job Openings in Hyderabad
Roles & Responsibilities:
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data (a brief connectivity sketch follows this list).
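To make the consumption side of these responsibilities concrete, here is a minimal, hypothetical sketch of querying a Dremio semantic layer from Python over Arrow Flight (one of the client APIs Dremio exposes alongside JDBC/ODBC). The host, port, credentials, and dataset path are placeholders rather than details from this role; it assumes the pyarrow package and a reachable Dremio coordinator with Flight enabled.

```python
# Minimal sketch: querying a Dremio semantic layer over Arrow Flight from Python.
# Host, port, credentials, and the dataset path are placeholders, not values from this posting.
from pyarrow import flight

client = flight.FlightClient("grpc+tcp://dremio-coordinator.example.com:32010")

# Dremio's Flight endpoint accepts basic auth and returns a bearer token
# that is attached to subsequent calls as a header.
token_pair = client.authenticate_basic_token("analyst_user", "analyst_password")
options = flight.FlightCallOptions(headers=[token_pair])

sql = """
    SELECT region, SUM(order_amount) AS total_sales
    FROM curated.sales.orders        -- hypothetical curated virtual dataset
    GROUP BY region
"""

info = client.get_flight_info(flight.FlightDescriptor.for_command(sql), options)
reader = client.do_get(info.endpoints[0].ticket, options)
table = reader.read_all()            # Arrow table, convertible to pandas
print(table.to_pandas().head())
```

Reflection and caching tuning itself happens on the Dremio side (for example, raw or aggregation reflections defined on curated views); clients like this one simply benefit from the faster query plans.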
Ideal Candidate:
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
Preferred:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.
Review Criteria
- Strong Dremio / Lakehouse Data Architect profile
- 5+ years of experience in Data Architecture / Data Engineering, with a minimum of 3 years of hands-on experience in Dremio
- Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
- Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
- Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
- Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
- Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
- Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
- Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies
Preferred
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments
Job Specific Criteria
- CV attachment is mandatory.
- How many years of experience do you have with Dremio?
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you comfortable with working from the office (WFO) three days a week?
- The virtual interview requires video to be on; are you okay with that?
What You Will Do:
As a Data Governance Lead at Kanerika, you will be responsible for defining, leading, and operationalizing the data governance framework, ensuring enterprise-wide alignment and regulatory compliance.
Required Qualifications:
- 7+ years of experience in data governance and data management.
- Proficient in Microsoft Purview and Informatica data governance tools.
- Strong in metadata management, lineage mapping, classification, and security.
- Experience with ADF, REST APIs, Talend, dbt, and automation via Azure tools.
- Knowledge of GDPR, CCPA, HIPAA, SOX and related compliance needs.
- Skilled in bridging technical governance with business and compliance goals.
Tools & Technologies:
- Microsoft Purview, Collibra, Atlan, Informatica Axon, IBM Information Governance Catalog
- Microsoft Purview capabilities:
1. Label creation & policy setup
2. Auto-labeling & DLP
3. Compliance Manager, Insider Risk, Records & Lifecycle Management
4. Unified Catalog, eDiscovery, Data Map, Audit, Compliance alerts, DSPM.
Key Responsibilities:
1. Governance Strategy & Stakeholder Alignment:
- Develop and maintain enterprise data governance strategies, policies, and standards.
- Align governance with business goals: compliance, analytics, and decision-making.
- Collaborate across business, IT, legal, and compliance teams for role alignment.
- Drive governance training, awareness, and change management programs.
2. Microsoft Purview Administration & Implementation:
- Manage Microsoft Purview accounts, collections, and RBAC aligned to org structure.
- Optimize Purview setup for large-scale environments (50TB+).
- Integrate with Azure Data Lake, Synapse, SQL DB, Power BI, Snowflake.
- Schedule scans, set classification jobs, and maintain collection hierarchies.
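As one hedged illustration of the scan-scheduling item above, the sketch below triggers an on-demand scan run through Purview's scanning data-plane REST API using DefaultAzureCredential. The account name, data source, scan name, and API version are assumptions for illustration only; verify the exact paths against the current Purview scanning API reference.

```python
# Minimal sketch: triggering an on-demand scan run against a registered Purview
# data source. Account name, data source, scan name, and the API version are
# placeholders/assumptions -- check the Purview scanning REST API docs for your tenant.
import uuid
import requests
from azure.identity import DefaultAzureCredential

account = "contoso-purview"                      # hypothetical Purview account
endpoint = f"https://{account}.purview.azure.com/scan"

credential = DefaultAzureCredential()
token = credential.get_token("https://purview.azure.net/.default").token

run_id = str(uuid.uuid4())
url = (f"{endpoint}/datasources/AzureDataLakeGen2-Raw/scans/WeeklyFullScan"
       f"/runs/{run_id}?api-version=2022-02-01-preview")

resp = requests.put(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json())    # scan run metadata, including status
```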
3. Metadata & Lineage Management:
- Design metadata repositories and maintain business glossaries and data dictionaries.
- Implement ingestion workflows via ADF, REST APIs, PowerShell, Azure Functions.
- Ensure lineage mapping (ADF → Synapse → Power BI) and impact analysis.
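For the lineage-mapping item above, a common pattern is to push custom lineage into Purview through its Atlas-compatible API, for example with the community pyapacheatlas library. The sketch below is illustrative only: the account name, service principal, type names, and qualified names are placeholders, and native connectors normally capture ADF → Synapse → Power BI lineage automatically, with custom uploads reserved for gaps.

```python
# Minimal sketch: registering a custom lineage hop (ADLS path -> Synapse table)
# in Purview via its Atlas-compatible API, using the community pyapacheatlas library.
# Account name, credentials, type names, and qualified names are illustrative only.
from pyapacheatlas.auth import ServicePrincipalAuthentication
from pyapacheatlas.core import PurviewClient, AtlasEntity, AtlasProcess

auth = ServicePrincipalAuthentication(
    tenant_id="<tenant-id>", client_id="<app-id>", client_secret="<secret>"
)
client = PurviewClient(account_name="contoso-purview", authentication=auth)

source = AtlasEntity(
    name="raw_orders", typeName="azure_datalake_gen2_path",
    qualified_name="https://contosodl.dfs.core.windows.net/raw/orders/", guid="-100",
)
target = AtlasEntity(
    name="dim_orders", typeName="azure_sql_dw_table",
    qualified_name="mssql://contoso-synapse.sql.azuresynapse.net/dw/dbo/dim_orders", guid="-101",
)
process = AtlasProcess(
    name="adf_copy_orders", typeName="Process",
    qualified_name="adf://contoso-adf/pipelines/copy_orders",
    inputs=[source], outputs=[target], guid="-102",
)

# Upload the two assets and the connecting process in one batch; Purview renders
# this as a lineage edge between them.
results = client.upload_entities(batch=[source, target, process])
print(results)
```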
4. Data Classification & Security Governance:
- Define classification rules and sensitivity labels (PII, PCI, PHI).
- Integrate with MIP, DLP, Insider Risk Management, and Compliance Manager.
- Enforce records management, lifecycle policies, and information barriers.
5. Data Quality & Policy Management:
- Define KPIs and dashboards to monitor data quality across domains.
- Collaborate on rule design, remediation workflows, and exception handling.
- Ensure policy compliance (GDPR, HIPAA, CCPA, etc.) and risk management.
6. Business Glossary & Stewardship:
- Maintain business glossary with domain owners and stewards in Purview.
- Enforce approval workflows, standard naming, and steward responsibilities.
- Conduct metadata audits for glossary and asset documentation quality.
7. Automation & Integration:
- Automate governance processes using PowerShell, Azure Functions, Logic Apps.
- Create pipelines for ingestion, lineage, glossary updates, and tagging (a glossary-update sketch follows this list).
- Integrate with Power BI, Azure Monitor, Synapse Link, Collibra, BigID, etc.
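As a small, hypothetical example of the glossary-update automation referenced above, the sketch below creates a term through Purview's Atlas-compatible glossary endpoint. In practice this logic would sit inside an Azure Function or pipeline step; the endpoint path, payload fields, and account name are assumptions to check against the current catalog API.

```python
# Minimal sketch: automating a business-glossary update through Purview's
# Atlas-compatible catalog API. Account name, glossary handling, and term fields
# are illustrative; real automation would add error handling and idempotency checks.
import requests
from azure.identity import DefaultAzureCredential

account = "contoso-purview"                      # hypothetical Purview account
catalog = f"https://{account}.purview.azure.com/catalog/api/atlas/v2"

token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token
headers = {"Authorization": f"Bearer {token}"}

# Look up an existing glossary so the new term can be anchored to it.
glossaries = requests.get(f"{catalog}/glossary", headers=headers).json()
glossary_guid = glossaries[0]["guid"]            # assumes at least one glossary exists

term = {
    "name": "Customer Lifetime Value",
    "shortDescription": "Projected net revenue attributed to a customer relationship.",
    "anchor": {"glossaryGuid": glossary_guid},
}
resp = requests.post(f"{catalog}/glossary/term", headers=headers, json=term)
resp.raise_for_status()
print(resp.json()["guid"])
```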
8. Monitoring, Auditing & Compliance:
- Set up dashboards for audit logs, compliance reporting, metadata coverage.
- Oversee data lifecycle management across its phases.
- Support internal and external audit readiness with proper documentation.


