Data architecture Jobs in Mumbai

Apply to 4+ Data architecture Jobs in Mumbai on CutShort.io. Explore the latest Data architecture Job opportunities across top companies like Google, Amazon & Adobe.

AI-First Company

Agency job via Peak Hire Solutions, posted by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
Experience: 5 - 17 yrs
Salary: ₹30L - ₹45L / yr
Skills: Data engineering, Data architecture, SQL, Data modeling, GCS (+47 more)

ROLES AND RESPONSIBILITIES:

You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS); a minimal client sketch follows this list.
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
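
To make the integration bullet concrete, here is a minimal, hypothetical sketch of querying a Dremio-curated view from Python over Arrow Flight; the endpoint, credentials, and table path are illustrative assumptions, not details from this posting.

    # Hypothetical example: query a Dremio semantic-layer view over Arrow Flight.
    # The endpoint, credentials, and table path below are placeholders.
    from pyarrow import flight

    client = flight.FlightClient("grpc+tcp://dremio.example.com:32010")
    bearer = client.authenticate_basic_token("analyst", "secret")
    options = flight.FlightCallOptions(headers=[bearer])

    sql = "SELECT region, SUM(amount) AS revenue FROM curated.orders GROUP BY region"
    info = client.get_flight_info(flight.FlightDescriptor.for_command(sql), options)
    table = client.do_get(info.endpoints[0].ticket, options).read_all()
    print(table.to_pandas())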


IDEAL CANDIDATE:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or a related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.


PREFERRED:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.
Oneture Technologies

Posted by Eman Khan
Mumbai
Experience: 4 - 7 yrs
Salary: Up to ₹21L / yr (varies)
Skills: Data architecture, Data modeling, ETL, ELT, Spark (+3 more)

About The Role

  • As a Data Platform Lead, you will utilize your strong technical background and hands-on development skills to design, develop, and maintain data platforms.
  • Leading a team of skilled data engineers, you will create scalable and robust data solutions that enhance business intelligence and decision-making.
  • You will ensure the reliability, efficiency, and scalability of data systems while mentoring your team to achieve excellence.
  • Collaborating closely with our client’s CXO-level stakeholders, you will oversee pre-sales activities, solution architecture, and project execution.
  • Your ability to stay ahead of industry trends and integrate the latest technologies will be crucial in maintaining our competitive edge.

Key Responsibilities

  • Client-Centric Approach: Understand client requirements deeply and translate them into robust technical specifications, ensuring solutions meet their business needs.
  • Architect for Success: Design scalable, reliable, and high-performance systems that exceed client expectations and drive business success.
  • Lead with Innovation: Provide technical guidance, support, and mentorship to the development team, driving the adoption of cutting-edge technologies and best practices.
  • Champion Best Practices: Ensure excellence in software development and IT service delivery, constantly assessing and evaluating new technologies, tools, and platforms for project suitability.
  • Be the Go-To Expert: Serve as the primary point of contact for clients throughout the project lifecycle, ensuring clear communication and high levels of satisfaction.
  • Build Strong Relationships: Cultivate and manage relationships with CxO/VP level stakeholders, positioning yourself as a trusted advisor.
  • Deliver Excellence: Manage end-to-end delivery of multiple projects, ensuring timely and high-quality outcomes that align with business goals.
  • Report with Clarity: Prepare and present regular project status reports to stakeholders, ensuring transparency and alignment.
  • Collaborate Seamlessly: Coordinate with cross-functional teams to ensure smooth and efficient project execution, breaking down silos and fostering collaboration.
  • Grow the Team: Provide timely and constructive feedback to support the professional growth of team members, creating a high-performance culture.

Qualifications

  • Master’s (M.Tech., M.S.) in Computer Science or equivalent from reputed institutes such as IIT or NIT preferred
  • Overall 6–8 years of experience, with a minimum of 2 years of relevant experience and a strong technical background
  • Experience working in a mid-size IT services company is preferred

Preferred Certifications

  • AWS Certified Data Analytics – Specialty
  • AWS Certified Solutions Architect – Professional
  • Azure Data Engineer Associate and Azure Solutions Architect Expert
  • Databricks Certified Data Engineer / ML Professional

Technical Expertise

  • Advanced knowledge of distributed architectures and data modeling practices.
  • Extensive experience with Data Lakehouse systems like Databricks and data warehousing solutions such as Redshift and Snowflake.
  • Hands-on experience with data technologies such as Apache Spark, SQL, Airflow, Kafka, Jenkins, Hadoop, Flink, Hive, Pig, HBase, Presto, and Cassandra.
  • Knowledge of BI tools such as Power BI, Tableau, and QuickSight, plus open-source equivalents like Superset and Metabase, is good to have.
  • Strong knowledge of data storage formats including Iceberg, Hudi, and Delta.
  • Proficient programming skills in Python, Scala, Go, or Java.
  • Ability to architect end-to-end solutions from data ingestion to insights, including designing data integrations using ETL and other data integration patterns (a minimal sketch follows this list).
  • Experience working with multi-cloud environments, particularly AWS and Azure.
  • Excellent teamwork and communication skills, with the ability to thrive in a fast-paced, agile environment.
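
As one concrete illustration of the end-to-end bullet above, here is a minimal, hypothetical PySpark sketch of a single ingestion-to-curation step; the bucket paths, columns, and Delta output are illustrative assumptions and presume a Spark session configured with the Delta Lake package.

    # Hypothetical ETL step: raw Parquet in, curated Delta table out.
    # Paths and column names are placeholders, not from the posting.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    raw = spark.read.parquet("s3a://raw-zone/orders/")
    curated = (
        raw.filter(F.col("status") == "COMPLETE")
           .withColumn("order_date", F.to_date("created_at"))
           .groupBy("order_date", "region")
           .agg(F.sum("amount").alias("daily_revenue"))
    )

    # Land the curated table in an open format for downstream query engines.
    curated.write.format("delta").mode("overwrite").save("s3a://curated-zone/daily_revenue/")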


AI company

Agency job via Peak Hire Solutions, posted by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
Experience: 5 - 17 yrs
Salary: ₹30L - ₹45L / yr
Skills: Data architecture, Data engineering, SQL, Data modeling, GCS (+21 more)

Review Criteria

  • Strong Dremio / Lakehouse Data Architect profile
  • 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
  • Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
  • Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, and Iceberg, along with distributed query planning concepts
  • Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, and object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.); see the DAG sketch after this list
  • Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
  • Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
  • Excellent communication skills, with the ability to work closely with BI, data science, and engineering teams; strong documentation discipline
  • Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies
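
For the pipeline-coordination criterion above, the following is a minimal, hypothetical Airflow DAG sketch; the DAG id, schedule, and shell commands are illustrative assumptions (Airflow 2.x-style imports and the schedule argument are presumed).

    # Hypothetical two-step pipeline: ingest raw data, then run dbt transformations.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="curate_orders",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        ingest = BashOperator(task_id="ingest", bash_command="python ingest.py")
        transform = BashOperator(task_id="transform", bash_command="dbt run --select curated")
        ingest >> transform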


Preferred

  • Nice-to-have: Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments


Job Specific Criteria

  • CV attachment is mandatory
  • How many years of experience do you have with Dremio?
  • Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
  • Are you okay with 3 days WFO?
  • The virtual interview requires video to be on; are you okay with that?


Role & Responsibilities

You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.

  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.


Ideal Candidate

  • Bachelor’s or Master’s in Computer Science, Information Systems, or a related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.
Pluginlive

Posted by Harsha Saggi
Chennai, Mumbai
Experience: 4 - 6 yrs
Salary: ₹10L - ₹20L / yr
Skills: Python, SQL, NoSQL databases, Data architecture, Data modeling (+7 more)

Role Overview:

We are seeking a talented and experienced Data Architect with strong data visualization capabilities to join our dynamic team in Mumbai. As a Data Architect, you will be responsible for designing, building, and managing our data infrastructure, ensuring its reliability, scalability, and performance. You will also play a crucial role in transforming complex data into insightful visualizations that drive business decisions. This role requires a deep understanding of data modeling, database technologies (particularly Oracle Cloud), and data warehousing principles, as well as proficiency in data manipulation and visualization tools, including Python and SQL.


Responsibilities:

  • Design and implement robust and scalable data architectures, including data warehouses, data lakes, and operational data stores, primarily leveraging Oracle Cloud services.
  • Develop and maintain data models (conceptual, logical, and physical) that align with business requirements and ensure data integrity and consistency.
  • Define data governance policies and procedures to ensure data quality, security, and compliance.
  • Collaborate with data engineers to build and optimize ETL/ELT pipelines for efficient data ingestion, transformation, and loading.
  • Develop and execute data migration strategies to Oracle Cloud.
  • Utilize strong SQL skills to query, manipulate, and analyze large datasets from various sources.
  • Leverage Python and relevant libraries (e.g., Pandas, NumPy) for data cleaning, transformation, and analysis.
  • Design and develop interactive and insightful data visualizations using tools such as Tableau, Power BI, Matplotlib, Seaborn, or Plotly to communicate data-driven insights to both technical and non-technical stakeholders (a short sketch follows this list).
  • Work closely with business analysts and stakeholders to understand their data needs and translate them into effective data models and visualizations.
  • Ensure the performance and reliability of data visualization dashboards and reports.
  • Stay up-to-date with the latest trends and technologies in data architecture, cloud computing (especially Oracle Cloud), and data visualization.
  • Troubleshoot data-related issues and provide timely resolutions.
  • Document data architectures, data flows, and data visualization solutions.
  • Participate in the evaluation and selection of new data technologies and tools.
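
As a small illustration of the Python and visualization responsibilities above, here is a minimal, hypothetical cleaning-and-charting sketch; the file name and column names are illustrative assumptions, not details from this posting.

    # Hypothetical example: clean a sales extract and chart monthly revenue.
    # "sales.csv" and its columns are placeholders.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("sales.csv", parse_dates=["order_date"])
    df = df.dropna(subset=["amount"])  # drop rows with a missing amount
    monthly = df.groupby(df["order_date"].dt.to_period("M"))["amount"].sum()

    monthly.plot(kind="bar", title="Monthly revenue")
    plt.tight_layout()
    plt.savefig("monthly_revenue.png")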


Qualifications:

  • Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field.
  • Proven experience (typically 5+ years) as a Data Architect, Data Modeler, or similar role.
  • Deep understanding of data warehousing concepts, dimensional modeling (e.g., star schema, snowflake schema), and ETL/ELT processes.
  • Extensive experience working with relational databases, particularly Oracle, and proficiency in SQL.
  • Hands-on experience with Oracle Cloud data services (e.g., Autonomous Data Warehouse, Object Storage, Data Integration).
  • Strong programming skills in Python and experience with data manipulation and analysis libraries (e.g., Pandas, NumPy).
  • Demonstrated ability to create compelling and effective data visualizations using industry-standard tools (e.g., Tableau, Power BI, Matplotlib, Seaborn, Plotly).
  • Excellent analytical and problem-solving skills with the ability to interpret complex data and translate it into actionable insights. 
  • Strong communication and presentation skills, with the ability to effectively communicate technical concepts to non-technical audiences. 
  • Experience with data governance and data quality principles.
  • Familiarity with agile development methodologies.
  • Ability to work independently and collaboratively within a team environment.

Application link: https://forms.gle/km7n2WipJhC2Lj2r5
