Data modeling jobs

50+ Data modeling Jobs in India

Apply to 50+ Data modeling Jobs on CutShort.io. Find your next job, effortlessly. Browse Data modeling Jobs and apply today!

AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Remote only
5 - 15 yrs
₹18L - ₹25L / yr
PowerBI
SQL
Mobile App Development
Windows App Development
Scripting
+12 more

Description

As a Power Apps Developer, you will be at the forefront of crafting innovative, low‑code solutions that streamline business processes and empower end‑users across the organization. You will collaborate closely with functional analysts, business stakeholders, and fellow developers to translate complex requirements into intuitive, scalable applications on the Microsoft Power Platform. The role offers a dynamic environment where continuous learning is encouraged, providing access to the latest Power Apps features, Azure services, and integration techniques. You will contribute to a culture of knowledge sharing, participate in code reviews, and mentor junior team members, ensuring high‑quality deliverables that drive operational efficiency and measurable business impact.


Requirements:

  • 5–15 years of experience developing enterprise‑grade solutions using Microsoft Power Apps, Power Automate, and Power BI.
  • Strong proficiency in Canvas and Model‑Driven apps, Common Data Service (Dataverse), and integration with Azure services (e.g., Azure Functions, Logic Apps).
  • Solid understanding of relational databases, SQL, and data modeling concepts.
  • Experience with JavaScript, TypeScript, and RESTful APIs for extending Power Apps functionality.
  • Excellent problem‑solving abilities, strong communication skills, and a collaborative mindset.
  • Relevant certifications such as Microsoft Power Platform Developer Associate (PL‑400) are a plus.


Roles and Responsibilities:

  • Design, develop, and deploy custom Power Apps solutions that meet business requirements and adhere to best practices.
  • Create and maintain automated workflows using Power Automate to streamline repetitive tasks and improve efficiency.
  • Integrate Power Apps with external systems via connectors, APIs, and Azure services to ensure seamless data flow.
  • Perform performance tuning, debugging, and troubleshooting of applications to ensure optimal user experience.
  • Collaborate with business analysts and stakeholders to gather requirements, provide technical guidance, and deliver prototypes.
  • Conduct code reviews, enforce governance standards, and contribute to the development of a reusable component library.
  • Stay updated with the latest Power Platform releases, evaluate new features, and recommend adoption strategies.
  • Provide training and mentorship to junior developers and end‑users to foster platform adoption.


Must have skills

Power Apps - 5 years

Microsoft Power Automate - 1 year


Nice to have skills

Canvas App Development and Scripting - 4 years

Canvas Apps Development - 4 years

SQL - 2 years

SharePoint APIs - 1 year

Power Fx - 2 years

C Sharp - 3 years

RESTful API - 2 years


Read more
AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
6 - 10 yrs
₹32L - ₹42L / yr
ETL
SQL
Google Cloud Platform (GCP)
Data engineering
ELT
+17 more

Role & Responsibilities:

We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP / BigQuery expertise.


Responsibilities:

  • Design, build, and maintain scalable data pipelines and ETL/ELT workflows
  • Work with Dataform or DBT to implement transformation logic and data models
  • Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure (see the sketch after this list)
  • Support data migration initiatives and data mesh architecture patterns
  • Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
  • Apply data governance and quality best practices across the data lifecycle
  • Troubleshoot pipeline issues and drive proactive monitoring and resolution
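
To make the BigQuery-style transformation work above concrete, here is a minimal, hedged Python sketch using the google-cloud-bigquery client; the project, dataset, and table names are illustrative only and not taken from this posting.

from google.cloud import bigquery

# Hypothetical project/dataset names; credentials come from the environment.
client = bigquery.Client(project="example-project")

sql = """
CREATE OR REPLACE TABLE `example-project.reporting.daily_orders` AS
SELECT order_date,
       COUNT(*)         AS order_count,
       SUM(order_total) AS revenue
FROM `example-project.raw.orders`
GROUP BY order_date
"""

job = client.query(sql)  # submit the ELT-style transformation as a query job
job.result()             # block until the job finishes
print("Bytes processed:", job.total_bytes_processed)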


Ideal Candidate:

  • Strong Data Engineer Profile
  • Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
  • Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
  • Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
  • Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
  • Must have strong SQL skills with experience in writing complex queries and optimizing performance.
  • Must have programming experience in Python and/or SQL for data processing.
  • Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
  • Exposure to data migration projects and/or data mesh architecture concepts.
  • Experience with Spark / PySpark or large-scale data processing frameworks.
  • Experience working in product-based companies or data-driven environments.
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.


NOTE:

  • An interview drive is scheduled for 28th and 29th March 2026; shortlisted candidates will be expected to be available on these interview dates. Only immediate joiners will be considered.
Read more
Hospitality Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Gurugram
1 - 2 yrs
₹4L - ₹5L / yr
Budget
Forecasting
Analytical Skills
Numerical analysis
Financial planning
+19 more

Industry: Hospitality

Preferred Skills: Budgeting, Forecasting, Analytical and Numerical Ability, Financial Planning, Microsoft Excel

Functions: Accounting/Finance

Working Days: 5


Role Overview:

We are looking for a junior FP&A Analyst who can support the finance team in planning, budgeting, forecasting, and business analysis. This role is ideal for someone with 1+ year of relevant FP&A experience who understands business numbers and can translate data into insights.


Note: 

This is not an accounting-focused role. We are looking for candidates with a business finance / FP&A mindset, not core accounting or audit profiles.


Key Responsibilities:

1. Budgeting & Forecasting

  • Assist in preparation of annual budgets and periodic forecasts
  • Track actuals vs budget and highlight key variances
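
As a rough illustration of the actuals-vs-budget tracking described above, here is a small pandas sketch; the cost centres and figures are invented.

import pandas as pd

budget = pd.DataFrame({
    "cost_centre": ["Kitchen", "Front of House", "Marketing"],
    "budget": [120_000, 80_000, 30_000],
})
actuals = pd.DataFrame({
    "cost_centre": ["Kitchen", "Front of House", "Marketing"],
    "actual": [131_500, 76_200, 41_000],
})

report = budget.merge(actuals, on="cost_centre")
report["variance"] = report["actual"] - report["budget"]
report["variance_pct"] = report["variance"] / report["budget"] * 100

# Flag lines more than 5% over budget for commentary in the monthly MIS pack
flagged = report[report["variance_pct"] > 5]
print(report)
print(flagged)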


2. Business Analysis

  • Analyze financial and operational data to support decision-making
  • Work closely with restaurant operations to understand cost drivers and revenue trends


3. MIS & Reporting

  • Prepare monthly MIS reports, dashboards, and performance summaries
  • Ensure timely and accurate reporting for internal stakeholders


4. Variance Analysis

  • Identify deviations in cost, revenue, and profitability
  • Provide actionable insights to improve business performance


5. Data Handling & Modeling

  • Work on Excel-based financial models and reports
  • Maintain and update financial datasets


6. Stakeholder Coordination

  • Collaborate with cross-functional teams including operations and finance
  • Support ad-hoc analysis and business reviews


Eligibility Criteria:

Experience:

  • Minimum 1 year of experience in FP&A / Business Finance / Financial Analysis
  • Experience in hospitality, retail, QSR, FMCG, or similar fast-paced industries is preferred

 

Required Skills:

  • Strong analytical and numerical ability
  • Good understanding of financial planning, budgeting, and forecasting concepts
  • Proficiency in Microsoft Excel (mandatory)
  • Basic understanding of business finance and performance metrics
  • Ability to work with large datasets and generate insights
  • Good communication skills (no constraints)
Read more
Public Listed - Product Based company

Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore)
4 - 8 yrs
₹25L - ₹70L / yr
skill iconData Science
data platforms
Data-flow analysis
Data pipelines
AI Infrastructure
+28 more

🤖 Data Scientist – Frontier AI for Data Platforms & Distributed Systems (4–8 Years)

Experience: 4–8 Years

Location: Bengaluru (On-site / Hybrid)

Company: Publicly Listed, Global Product Platform


🧠 About the Mission

We are building a Top 1% AI-Native Engineering & Data Organization — from first principles.

This is not incremental improvement.

This is a full-stack transformation of a large-scale enterprise into an AI-native data platform company.

We are re-architecting:

  • Legacy systems → AI-native architectures
  • Static pipelines → autonomous, self-healing systems
  • Data platforms → intelligent, learning systems
  • Software workflows → agentic execution layers

This is the kind of shift you would expect from companies like Google or Microsoft, except here you will build it from day zero and scale it globally.


🧠 The Opportunity: This role sits at the intersection of three high-impact domains:

1. Frontier AI Systems: Large Language Models (LLMs), Small Language Models (SLMs), and Agentic AI

2. Data Platforms: Warehouses, Lakehouses, Streaming Systems, Query Engines

3. Distributed Systems: High-throughput, low-latency, multi-region infrastructure


We are building systems where:

  • Data platforms optimize themselves using ML/LLMs
  • Pipelines are autonomous, self-healing, and adaptive
  • Queries are generated, optimized, and executed intelligently
  • Infrastructure learns from usage and evolves continuously

This is: AI as the control plane for data infrastructure


🧩 What You’ll Work On

You will design and build AI-native systems deeply embedded inside data infrastructure.

1. AI-Native Data Platforms

  • Build LLM-powered interfaces: natural language → SQL / pipelines / transformations (see the sketch after this list)
  • Design semantic data layers: embeddings, vector search, knowledge graphs
  • Develop AI copilots for data engineers, analysts, and platform users
2. Autonomous Data Pipelines

  • Build self-healing ETL/ELT systems using AI agents
  • Create pipelines that detect anomalies in real time, automatically debug failures, and dynamically optimize transformations

3. Intelligent Query & Compute Optimization

  • Apply ML/LLMs to query planning and execution, cost-based optimization using learned models, and workload prediction and scheduling
  • Build systems that learn from query patterns and continuously improve performance and cost efficiency

4. Distributed Data + AI Infrastructure

  • Architect systems operating at billions of events per day and petabyte-scale data
  • Work with distributed compute engines (Spark / Flink / Ray class systems), streaming systems (Kafka-class infra), and vector databases and hybrid retrieval systems

5. Learning Systems & Feedback Loops

  • Build closed-loop AI systems: execution → feedback → model updates
  • Develop continual learning pipelines, online learning systems for infra optimization, and experimentation frameworks (A/B, bandits, eval pipelines)

6. LLM & Agentic Systems (Infra-Aware)

  • Build agents that understand data systems
  • Enable autonomous pipeline debugging, root cause analysis for infra failures, and intelligent orchestration of data workflows


🧠 What We’re Looking For

Core Foundations

  • Strong grounding in Machine Learning, Deep Learning, and NLP; statistics, optimization, and probabilistic systems; and distributed systems fundamentals
  • Deep understanding of Transformer architectures and modern LLM ecosystems

Hands-On Expertise

  • Experience building LLM / GenAI systems (RAG, fine-tuning, embeddings), data platforms (warehouse, lake, and lakehouse architectures), and distributed pipelines and compute systems
  • Strong programming skills: Python (ML/AI stack) and SQL (deep understanding of query planning and an optimization mindset)


Systems Thinking (Critical)

You think in systems, not components.

  • Built or worked on large-scale data pipelines, high-throughput distributed systems, and low-latency, high-concurrency architectures
  • Understand query optimization and execution; data partitioning, indexing, and caching; and trade-offs in distributed systems


🔥 What Sets You Apart (Top 1%)

  • Built AI-powered data platforms or infra systems in production
  • Designed or contributed to query engines / optimizers, data observability / lineage systems, or AI-driven infra / AIOps platforms
  • Experience with multi-modal AI (logs, metrics, traces, text), agentic AI systems, and autonomous infrastructure
  • Worked on systems at scale comparable to Google (BigQuery-like systems), Meta (real-time analytics infra), or Snowflake / Databricks (lakehouse architectures)


🧬 Ideal Background (Not Mandatory)

We often see strong candidates from:

  • Data infrastructure or platform engineering teams
  • AI-first startups or research-driven environments
  • High-scale product companies

Experience building:

  • Internal platforms used by 1000s of engineers
  • Systems serving millions of users / high throughput workloads
  • Multi-region, distributed cloud systems


🧠 The Kind of Problems You’ll Solve

  • Can LLMs replace traditional query optimizers?
  • How do we build self-healing data pipelines at scale?
  • Can data systems learn from every query and improve automatically?
  • How do we embed reasoning and planning into infrastructure layers?
  • What does a fully autonomous data platform look like?


Background: We Commonly See (But Not Limited To)

Our team often includes engineers from top-tier institutions and strong research or product backgrounds, including:

  • Leading engineering schools in India and globally
  • Engineers with experience in top product companies, AI startups, or research-driven environments
  • That said, we care far more about demonstrated ability, depth, and impact than pedigree alone.


Read more
Software and consulting company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 8 yrs
₹14L - ₹17L / yr
PowerBI
Business Intelligence (BI)
Business Analysis
skill iconData Analytics
Data Visualization
+15 more

Description

Power BI JD


Mandatory:

• 5+ years of Power BI Report development experience.

• Building Analysis Services reporting models.

• Developing visual reports, KPI scorecards, and dashboards using Power BI desktop.

• Connecting data sources, importing data, and transforming data for Business intelligence.

• Analytical thinking for translating data into informative reports and visuals.

• Capable of implementing row-level security on data along with an understanding of application security layer models in Power BI.

• Should be strong at writing DAX queries in Power BI Desktop.

• Expert in using advanced-level calculations on the data set.

• Responsible for design methodology and project documentation.

• Should be able to develop tabular and multidimensional models that are compatible with data warehouse standards.

• Very good communication skills; must be able to discuss requirements effectively with client teams and internal teams.

• Experience working with the Microsoft Business Intelligence stack, including Power BI, SSAS, SSRS, and SSIS.

• Must have experience with BI tools and systems such as Power BI, Tableau, and SAP.

• Must have 3-4 years of experience in data-specific roles.

• Have knowledge of database fundamentals such as multidimensional database design, relational database design, and more

• Knowledge of all the Power BI products (Power BI Premium, Power BI Report Server, Power BI Service, Power Query, etc.)

• Strong grasp of data analytics

• Interact with customers to understand their business problems and provide best-in-class analytics solutions

• Proficient in SQL and Query performance tuning skills

• Understand data governance, quality and security and integrate analytics with these corporate platforms

• Attention to detail and ability to deliver accurate client outputs

• Experience of working with large and multiple datasets / data warehouses

• Ability to derive insights from data and analysis and create presentations for client teams

• Experience with performance optimization of the dashboards

• Interact with UX/UI designers to create best in class visualization for business harnessing all product capabilities.

• Resilience under pressure and against deadlines.

• Proactive attitude and an open outlook.

• Strong analytical problem-solving skills

• Skill in identifying data issues and anomalies during the analysis

• Strong business acumen and a demonstrated aptitude for analytics that incites action

• Ability to execute on design requirements defined by business

• Ability to understand required Power BI functionality from wireframes/ requirement documents

• Ability to architect and design reporting solutions based on client needs.

• Being able to communicate with internal/external customers, desire to develop communication and client-facing skills.

• Ability to work seamlessly with MS Excel, including working knowledge of pivot tables and related functions


Good to have:

• Experience in working with Azure and connecting Synapse with Tableau

• Demonstrate strength in data modelling, ETL development, and data warehousing

• Knowledge of leading large-scale data warehousing and analytics projects using Azure, Synapse, MS SQL DB

• Good knowledge of building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets

• Good to have knowledge of Supply Chain Domain.

Read more
Talent Pro
Posted by Mayank choudhary
Bengaluru (Bangalore), Gurugram, Mumbai, Hyderabad
7 - 14 yrs
₹30L - ₹50L / yr
Data modeling

Strong Enterprise Data Modeller profile (Modern Data Platforms)

Mandatory (Experience 1) – Must have 7+ years of experience in Data Modeling or Enterprise Data Architecture, with strong hands-on expertise in designing conceptual, logical, and physical data models for enterprise data platforms

Mandatory (Experience 2) – Must have strong hands-on experience with enterprise data modeling tools such as Erwin, ER/Studio, PowerDesigner, SQLDBM, or similar

Mandatory (Experience 3) – Must have a deep understanding of dimensional modeling (Kimball / Inmon methodologies), normalization techniques, and schema design for modern data warehouse environments.

Mandatory (Experience 4) – Proven experience designing data models for modern data platforms such as Snowflake, Databricks, Redshift, Dremio, or similar cloud data warehouse / lakehouse systems.

Mandatory (Experience 5) – Must have strong SQL expertise and schema design skills, with the ability to validate data model implementations and collaborate closely with data engineering teams

Mandatory (Education) – Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related technical field.

Mandatory (Note) – Total experience should not be greater than 14 years

Read more
Neuvamacro Technology Pvt Ltd
Remote only
5 - 15 yrs
₹12L - ₹15L / yr
Tableau
Snow flake schema
SQL
ETL
Data modeling
+4 more

Job Description:

Position Type: Full-Time Contract (with potential to convert to Permanent)

Location: Remote (Australian Time Zone)

Availability: Immediate Joiners Preferred

About the Role

We are seeking an experienced Tableau and Snowflake Specialist with 5+ years of hands‑on expertise to join our team as a full‑time contractor for the next few months. Based on performance and business requirements, this role has a strong potential to transition into a permanent position.

The ideal candidate is highly proficient in designing scalable dashboards, managing Snowflake data warehousing environments, and collaborating with cross-functional teams to drive data‑driven insights.

Key Responsibilities

  • Develop, design, and optimize advanced Tableau dashboards, reports, and visual analytics.
  • Build, maintain, and optimize datasets and data models in Snowflake Cloud Data Warehouse.
  • Collaborate with business stakeholders to gather requirements and translate them into analytics solutions.
  • Write efficient SQL queries, stored procedures, and data pipelines to support reporting needs.
  • Perform data profiling, data validation, and ensure data quality across systems.
  • Work closely with data engineering teams to improve data structures for better reporting efficiency.
  • Troubleshoot performance issues and implement best practices for both Snowflake and Tableau.
  • Support deployment, version control, and documentation of BI solutions.
  • Ensure availability of dashboards during Australian business hours.

Required Skills & Experience

  • 5+ years of strong hands-on experience with Tableau development (Dashboards, Storyboards, Calculated Fields, LOD Expressions).
  • 5+ years of experience working with Snowflake including schema design, warehouse configuration, and query optimization.
  • Advanced knowledge of SQL and performance tuning.
  • Strong understanding of data modeling, ETL processes, and cloud data platforms.
  • Experience working in fast-paced environments with tight delivery timelines.
  • Excellent communication and stakeholder management skills.
  • Ability to work independently and deliver high‑quality outputs aligned with business objectives.

Nice-to-Have Skills

  • Knowledge of Python or any ETL tool.
  • Experience with Snowflake integrations (Fivetran, DBT, Azure/AWS/GCP).
  • Tableau Server/Prep experience.

Contract Details

  • Full-Time Contract for several months.
  • High possibility of conversion to permanent, based on performance.
  • Must be available to work on the Australian Time Zone.
  • Immediate joiners are highly encouraged.


Read more
Talent Pro
Posted by Mayank choudhary
Bengaluru (Bangalore)
8 - 14 yrs
₹40L - ₹47L / yr
Data modeling

Strong Enterprise Data Modeller profile (Modern Data Platforms)

Mandatory (Experience 1) – Must have 7+ years of experience in Data Modeling or Enterprise Data Architecture, with strong hands-on expertise in designing conceptual, logical, and physical data models for enterprise data platforms

Mandatory (Experience 2) – Must have strong hands-on experience with enterprise data modeling tools such as Erwin, ER/Studio, PowerDesigner, SQLDBM, or similar

Mandatory (Experience 3) – Must have a deep understanding of dimensional modeling (Kimball / Inmon methodologies), normalization techniques, and schema design for modern data warehouse environments.

Mandatory (Experience 4) – Proven experience designing data models for modern data platforms such as Snowflake, Databricks, Redshift, Dremio, or similar cloud data warehouse / lakehouse systems.

Mandatory (Experience 5) – Must have strong SQL expertise and schema design skills, with the ability to validate data model implementations and collaborate closely with data engineering teams

Mandatory (Education) – Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related technical field.

Read more
Kavi India
Posted by Jinsha Vijayan
ASV Chandilya Towers, 6 th floor, 5, 397, Rajiv Gandhi Salai, Nehru Nagar, Thoraipakkam, Chennai 600097
7 - 12 yrs
₹15L - ₹25L / yr
PowerBI
DAX
SQL
Data modeling

We are looking for an experienced Power BI developer with 7+ years of experience to join our Business Intelligence team. The ideal candidate will be responsible for transforming raw data into actionable insights using Microsoft Power BI. This role encompasses developing, maintaining, and optimizing interactive dashboards, reports, and data models to support strategic decision-making across the organization.


Key Responsibilities:

  • Understand business requirements in the BI context and design data models to convert raw data to meaningful insights.
  • Create complex DAX queries and functions in Power BI.
  • Create dashboards and visual interactive reports using Power BI. Deploy creative visual interaction tools for different metrics.
  • Proven expertise in the entire Microsoft Power Platform like Power Apps, Power Automate, Microsoft DataVerse, AI Builder etc.
  • Provide architecture recommendations to manage Power BI workspaces.
  • Design, develop, and deploy Power BI scripts and perform efficient detailed analysis.
  • Create charts and document data with algorithms, parameters, models, and relations explanations.
  • Make technical changes to existing BI systems in order to enhance their functioning.


Required Qualifications & Skills:

  • 7+ years of relevant experience with the above job duties (Intermediate Power BI Developer); Bachelor's or Master's degree in Computer Science or related fields.
  • Certification in MS Power BI and Power Apps Suite is needed.
  • Good creative and communication skills – Ability to influence and recommend visualizations to the senior leadership teams.
  • Ability to create complex SQL queries to join multiple tables / data is absolutely needed.
  • Understanding of JavaScript, CSS, and other JavaScript libraries is preferred.
Read more
Remote only
8 - 18 yrs
₹24L - ₹36L / yr
Hybris
B2B Marketing
Business Analysis
Data modeling

Job Description:

 

The Hybris eCommerce Business Analyst role is responsible for facilitating the successful delivery of consulting engagements. Business Analysts work closely with client stakeholders, business users, and cross-functional delivery teams over the lifecycle of a project to gather and document solution requirements; document solution design; model and configure eCommerce data structures and business processes; collect, manipulate, and load product data; define test scripts and acceptance criteria; manage and communicate requirement, design, and priority changes; and facilitate and manage defect tracking and prioritization.

 

Responsibilities:

 

Conduct one-on-one or small group interview sessions to gather required use cases, functional requirements, and technical requirements in order to develop requirement documentation.


Provide analytical expertise in identifying, evaluating, and documenting Hybris Commerce system requirements and procedures that are cost effective and meet business objectives and user needs.


Develop user requirements, functional specifications, technical specifications, data models, and process flow diagrams to communicate engagement scope to client and delivery team members.


Accountable to complete assigned work within time specified and communicate status if task delivery dates are at risk.


Work with other project delivery team members towards the resolution of business or systems issues.


Understand Hybris Commerce data structures, collect product catalogue data from client business users, manipulate data for import into eCommerce system and identify data gaps, and perform product data imports into eCommerce system. (Project dependent)


Provide system demonstrations and training to client business users.


Develop test plans, write manual/automated test scripts, and define acceptance criteria to ensure high-quality project deliverables. (Project dependent)


 Qualifications:


8+ years’ experience in general IT business analysis which includes 2-3 years’ experience in Hybris Commerce business analysis


Bachelor’s degree


Able to Multi-task and work under tight deadlines


Must be able to work independently with little to no supervision


Experience with global service delivery model 


Ability to work independently and collaboratively 


Excellent communication skills

Read more
The Blue Owls Solutions
Posted by Apoorvo Chakraborty
Pune
2 - 5 yrs
₹10L - ₹18L / yr
PowerBI
Data modeling
Star schema
skill iconData Analytics

Tech Stack: Power BI, DAX, SQL, Microsoft Fabric


The Role

You are the bridge between raw data and business decisions. You will be responsible for taking processed data and transforming it into high-performance, scalable semantic models. Your goal is to ensure that the data is structured so perfectly that building a report becomes the easiest part of the job.


Key Responsibilities

  • Semantic Architecture: Design and build Star Schemas that serve as the single source of truth (see the sketch after this list).
  • Gold Layer Ownership: Transform Silver-layer tables into business-ready dimensions and facts.
  • Advanced DAX: Write clean, optimized DAX for complex business logic (not just basic sums).
  • Visual Storytelling: Build intuitive Power BI reports that focus on clarity and actionable insights.
  • Performance Tuning: Optimize models for fast load times, query folding, and efficient refreshes.
  • Stakeholder Sync: Work with clients to define KPIs and ensure the model actually answers their questions.
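
A minimal sketch of the Star Schema idea in the first responsibility, using sqlite3 purely as a stand-in for the Fabric/Power BI semantic layer; the table and column names are illustrative only.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, calendar_date TEXT, month TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, product_name TEXT, category TEXT);
CREATE TABLE fact_sales  (
    date_key    INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    units       INTEGER,
    revenue     REAL
);
INSERT INTO dim_date    VALUES (20250101, '2025-01-01', 'Jan');
INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware');
INSERT INTO fact_sales  VALUES (20250101, 1, 10, 500.0);
""")

# A typical report query: join the fact to its conformed dimensions and aggregate
query = """
SELECT d.month, p.category, SUM(f.revenue) AS revenue
FROM fact_sales f
JOIN dim_date d    ON d.date_key    = f.date_key
JOIN dim_product p ON p.product_key = f.product_key
GROUP BY d.month, p.category
"""
for row in conn.execute(query):
    print(row)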


What We’re Looking For

  • Modeling DNA: You think in Star Schemas and understand why a flat table is a bad idea for Power BI.
  • DAX Proficiency: Deep understanding of Filter vs. Row context and time-intelligence functions.
  • SQL Fluency: You can write complex queries to validate your models against the source data.
  • Visual Logic: You have a sharp eye for UI/UX—no cluttered dashboards or "chart junk."
  • Fabric Familiarity: Experience with (or a strong desire to learn) the Microsoft Fabric ecosystem.


Why Work With Us?

  • Microsoft Partner Edge: Hands-on with the latest Fabric and Power BI features before they go mainstream.
  • Flat Hierarchy
  • Technical Autonomy: You own the Gold layer architecture; you set the modeling standards.
  • High-Stakes Projects: Solve real data problems for enterprise clients in Healthcare and Government.
  • Career Growth: Structured learning paths to master the wider Azure Data & AI stack.
  • Competitive Pay
  • Flexible Hours
Read more
AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
5 - 17 yrs
₹34L - ₹45L / yr
Dremio
Data engineering
Business Intelligence (BI)
Tableau
PowerBI
+51 more

Review Criteria:

  • Strong Dremio / Lakehouse Data Architect profile
  • 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
  • Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
  • Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
  • Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
  • Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
  • Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
  • Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
  • Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies


Preferred:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments


Role & Responsibilities:

You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.


Ideal Candidate:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.


Preferred:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.
Read more
Global digital transformation solutions provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Kochi (Cochin), Trivandrum
4 - 6 yrs
₹11L - ₹17L / yr
skill iconAmazon Web Services (AWS)
skill iconPython
Data engineering
SQL
ETL
+22 more

JOB DETAILS:

* Job Title: Associate III - Data Engineering

* Industry: Global digital transformation solutions provider

* Salary: Best in Industry

* Experience: 4-6 years

* Location: Trivandrum, Kochi

Job Description

Job Title:

Data Services Engineer – AWS & Snowflake

 

Job Summary:

As a Data Services Engineer, you will be responsible for designing, developing, and maintaining robust data solutions using AWS cloud services and Snowflake.

You will work closely with cross-functional teams to ensure data is accessible, secure, and optimized for performance.

Your role will involve implementing scalable data pipelines, managing data integration, and supporting analytics initiatives.

 

Responsibilities:

• Design and implement scalable and secure data pipelines on AWS and Snowflake (Star/Snowflake schema)

• Optimize query performance using clustering keys, materialized views, and caching

• Develop and maintain Snowflake data warehouses and data marts.

• Build and maintain ETL/ELT workflows using Snowflake-native features (Snowpipe, Streams, Tasks); see the sketch after this list.

• Integrate Snowflake with cloud platforms (AWS, Azure, GCP) and third-party tools (Airflow, dbt, Informatica)

• Utilize Snowpark and Python/Java for complex transformations

• Implement RBAC, data masking, and row-level security.

• Optimize data storage and retrieval for performance and cost-efficiency.

• Collaborate with stakeholders to gather data requirements and deliver solutions.

• Ensure data quality, governance, and compliance with industry standards.

• Monitor, troubleshoot, and resolve data pipeline and performance issues.

• Document data architecture, processes, and best practices.

• Support data migration and integration from various sources.
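
To illustrate the Streams and Tasks bullet above, here is a hedged Python sketch using snowflake-connector-python; the account, warehouse, and table names are assumptions, not details from this posting.

import snowflake.connector

# Hypothetical connection details; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
cur = conn.cursor()

# Stream captures inserts/updates on the raw table
cur.execute("CREATE OR REPLACE STREAM orders_stream ON TABLE raw_orders")

# Task periodically applies the captured changes to the curated table
cur.execute("""
CREATE OR REPLACE TASK load_orders
  WAREHOUSE = ETL_WH
  SCHEDULE = '15 MINUTE'
AS
  INSERT INTO curated_orders
  SELECT order_id, customer_id, order_total FROM orders_stream
""")
cur.execute("ALTER TASK load_orders RESUME")  # tasks are created suspended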

 

Qualifications:

• Bachelor’s degree in Computer Science, Information Technology, or a related field.

• 3 to 4 years of hands-on experience in data engineering or data services.

• Proven experience with AWS data services (e.g., S3, Glue, Redshift, Lambda).

• Strong expertise in Snowflake architecture, development, and optimization.

• Proficiency in SQL and Python for data manipulation and scripting.

• Solid understanding of ETL/ELT processes and data modeling.

• Experience with data integration tools and orchestration frameworks.

• Excellent analytical, problem-solving, and communication skills.

 

Preferred Skills:

• AWS Glue, AWS Lambda, Amazon Redshift

• Snowflake Data Warehouse

• SQL & Python

 

Skills: AWS Lambda, AWS Glue, Amazon Redshift, Snowflake Data Warehouse

 

Must-Haves

AWS data services (4-6 years), Snowflake architecture (4-6 years), SQL (proficient), Python (proficient), ETL/ELT processes (solid understanding)

Skills: AWS, AWS lambda, Snowflake, Data engineering, Snowpipe, Data integration tools, orchestration framework

Relevant 4 - 6 Years

Python is mandatory.

 

******

Notice period - 0 to 15 days only (Feb joiners’ profiles only)

Location: Kochi

F2F Interview 7th Feb

 

 

Read more
QAgile Services
Posted by Radhika Chotai
Remote only
2 - 4 yrs
₹3L - ₹5L / yr
PowerBI
Data modeling
ETL
Spark
SQL
+1 more

Microsoft Fabric, Power BI, Data modelling, ETL, Spark SQL

Remote work- 5-7 hours

450 Rs hourly charges

Read more
The Blue Owls Solutions
Posted by Apoorvo Chakraborty
Pune
6 - 10 yrs
₹25L - ₹40L / yr
Data governance
Data engineering
Team leadership
Data modeling
Synapse
+3 more

The Role


We are looking for an Azure Data Architect to join our team in Pune. You will be responsible for the end-to-end lifecycle of data solutions, from initial client requirement gathering and solution architecture design to leading the data engineering team through implementation. You will be the technical anchor for the project, ensuring that our data estates are scalable, governed, and high-performing.


Key Responsibilities

  • Architecture & Design: Design robust data architectures using Microsoft Fabric and Azure Synapse, focusing on Medallion architecture and metadata-driven frameworks (see the sketch after this list).
  • End-to-End Delivery: Translate complex client business requirements into technical roadmaps and lead the team to deliver them on time.
  • Data Governance: Implement and manage enterprise-grade governance, data discovery, and lineage using Microsoft Purview.
  • Team Leadership: Act as the technical lead for the team, performing code reviews, mentoring junior engineers, and ensuring best practices in PySpark and SQL.
  • Client Management: Interface directly with stakeholders to define project scope and provide technical consultancy.
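
As a rough sketch of the Medallion-style flow named in the first responsibility above, the PySpark snippet below promotes raw Bronze data to a cleaned Silver table; the paths, columns, and Delta output are assumptions about a Fabric/Synapse lakehouse rather than specifics of this role.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

bronze = spark.read.json("Files/bronze/bookings/")  # hypothetical raw landing zone

silver = (
    bronze.dropDuplicates(["booking_id"])                 # drop replayed events
          .withColumn("booking_date", F.to_date("booking_ts"))
          .filter(F.col("status").isNotNull())            # basic quality gate
          .select("booking_id", "customer_id", "booking_date", "status", "amount")
)

# Assumes a Delta-enabled lakehouse (the default in Microsoft Fabric)
silver.write.mode("overwrite").format("delta").save("Tables/silver/bookings")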


What We’re Looking For

  • 6+ Years in Data Engineering with at least 3+ years leading technical teams or designing architectures.
  • Expertise in Microsoft Fabric/Synapse: Deep experience with Lakehouses, Warehouses, and Spark-based processing.
  • Governance Specialist: Proven experience implementing Microsoft Purview for data cataloging, sensitivity labeling, and lineage.
  • Technical Breadth: Strong proficiency in PySpark, SQL, and Data Factory. Familiarity with Infrastructure as Code (Bicep/Terraform) is a major plus.

Why Work with Us?

  • Competitive Pay
  • Flexible Hours
  • Work on Microsoft’s latest (Fabric, Purview, Foundry) as a Designated Solutions Partner.
  • High-Stakes Impact: Solve complex, client-facing problems for enterprise leaders
  • Structured learning paths to help you master AI automation and Agentic AI.
Read more
venanalytics
Posted by Rincy jain
Mumbai
3 - 5 yrs
₹8L - ₹12L / yr
DAX
skill iconPython
SQL
Data modeling

About the Role:


We are looking for a highly skilled Data Engineer with a strong foundation in Power BI, SQL, Python, and Big Data ecosystems to help design, build, and optimize end-to-end data solutions. The ideal candidate is passionate about solving complex data problems, transforming raw data into actionable insights, and contributing to data-driven decision-making across the organization.


Key Responsibilities:


Data Modelling & Visualization

  • Build scalable and high-quality data models in Power BI using best practices.
  • Define relationships, hierarchies, and measures to support effective storytelling.
  • Ensure dashboards meet standards in accuracy, visualization principles, and timelines.


Data Transformation & ETL

  • Perform advanced data transformation using Power Query (M Language) beyond UI-based steps.
  • Design and optimize ETL pipelines using SQL, Python, and Big Data tools (see the sketch after this list).
  • Manage and process large-scale datasets from various sources and formats.
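
A small PySpark sketch of the kind of large-dataset transformation described above; the paths and columns are made up and the cluster configuration is omitted.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/orders/")  # hypothetical source

daily = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date", "channel")
       .agg(F.count("*").alias("orders"),
            F.sum("order_total").alias("revenue"))
)

daily.write.mode("overwrite").parquet("s3://example-bucket/gold/daily_orders/")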


Business Problem Translation

  • Collaborate with cross-functional teams to translate complex business problems into scalable, data-centric solutions.
  • Decompose business questions into testable hypotheses and identify relevant datasets for validation.


Performance & Troubleshooting

  • Continuously optimize performance of dashboards and pipelines for latency, reliability, and scalability.
  • Troubleshoot and resolve issues related to data access, quality, security, and latency, adhering to SLAs.


Analytical Storytelling

  • Apply analytical thinking to design insightful dashboards—prioritizing clarity and usability over aesthetics.
  • Develop data narratives that drive business impact.


Solution Design

  • Deliver wireframes, POCs, and final solutions aligned with business requirements and technical feasibility.


Required Skills & Experience:


  • Minimum 3+ years of experience as a Data Engineer or in a similar data-focused role.
  • Strong expertise in Power BI: data modeling, DAX, Power Query (M Language), and visualization best practices.
  • Hands-on with Python and SQL for data analysis, automation, and backend data transformation.
  • Deep understanding of data storytelling, visual best practices, and dashboard performance tuning.
  • Familiarity with DAX Studio and Tabular Editor.
  • Experience in handling high-volume data in production environments.


Preferred (Good to Have):


  • Exposure to Big Data technologies such as PySpark, Hadoop, Hive / HDFS, and Spark Streaming (optional but preferred)


Why Join Us?


  • Work with a team that's passionate about data innovation.
  • Exposure to modern data stack and tools.
  • Flat structure and collaborative culture.
  • Opportunity to influence data strategy and architecture decisions.

 

Read more
Lower Parel
2 - 4 yrs
₹6L - ₹7.2L / yr
skill iconReact.js
skill iconNextJs (Next.js)
skill iconNodeJS (Node.js)
GraphQL
RESTful APIs
+22 more

Senior Full Stack Developer – Analytics Dashboard

Job Summary

We are seeking an experienced Full Stack Developer to design and build a scalable, data-driven analytics dashboard platform. The role involves developing a modern web application that integrates with multiple external data sources, processes large datasets, and presents actionable insights through interactive dashboards.

The ideal candidate should be comfortable working across the full stack and have strong experience in building analytical or reporting systems.

Key Responsibilities

  • Design and develop a full-stack web application using modern technologies.
  • Build scalable backend APIs to handle data ingestion, processing, and storage.
  • Develop interactive dashboards and data visualisations for business reporting.
  • Implement secure user authentication and role-based access.
  • Integrate with third-party APIs using OAuth and REST protocols.
  • Design efficient database schemas for analytical workloads.
  • Implement background jobs and scheduled tasks for data syncing.
  • Ensure performance, scalability, and reliability of the system.
  • Write clean, maintainable, and well-documented code.
  • Collaborate with product and design teams to translate requirements into features.

Required Technical Skills

Frontend

  • Strong experience with React.js
  • Experience with Next.js
  • Knowledge of modern UI frameworks (Tailwind, MUI, Ant Design, etc.)
  • Experience building dashboards using chart libraries (Recharts, Chart.js, D3, etc.)

Backend

  • Strong experience with Node.js (Express or NestJS)
  • REST and/or GraphQL API development
  • Background job systems (cron, queues, schedulers)
  • Experience with OAuth-based integrations

Database

  • Strong experience with PostgreSQL
  • Data modelling and performance optimisation
  • Writing complex analytical SQL queries

DevOps / Infrastructure

  • Cloud platforms (AWS)
  • Docker and basic containerisation
  • CI/CD pipelines
  • Git-based workflows

Experience & Qualifications

  • 5+ years of professional full stack development experience.
  • Proven experience building production-grade web applications.
  • Prior experience with analytics, dashboards, or data platforms is highly preferred.
  • Strong problem-solving and system design skills.
  • Comfortable working in a fast-paced, product-oriented environment.

Nice to Have (Bonus Skills)

  • Experience with data pipelines or ETL systems.
  • Knowledge of Redis or caching systems.
  • Experience with SaaS products or B2B platforms.
  • Basic understanding of data science or machine learning concepts.
  • Familiarity with time-series data and reporting systems.
  • Familiarity with the Meta Ads / Google Ads APIs

Soft Skills

  • Strong communication skills.
  • Ability to work independently and take ownership.
  • Attention to detail and focus on code quality.
  • Comfortable working with ambiguous requirements.

Ideal Candidate Profile (Summary)

A senior-level full stack engineer who has built complex web applications, understands data-heavy systems, and enjoys creating analytical products with a strong focus on performance, scalability, and user experience.

Read more
Euphoric Thought Technologies
Noida
2 - 4 yrs
₹8L - ₹15L / yr
SQL
ETL
Data modeling
Business Intelligence (BI)

Position Overview:

As a BI (Business Intelligence) Developer, they will be responsible for designing, developing, and maintaining the business intelligence solutions that support data analysis and reporting. They will collaborate with business stakeholders, analysts, and data engineers to understand requirements and translate them into efficient and effective BI solutions. Their role will involve working with various data sources, designing data models, assisting ETL (Extract, Transform, Load) processes, and developing interactive dashboards and reports.

Key Responsibilities:

1. Requirement Gathering: Collaborate with business stakeholders to understand their data analysis and reporting needs. Translate these requirements into technical specifications and develop appropriate BI solutions.

2. Data Modelling: Design and develop data models that effectively represent the underlying business processes and facilitate data analysis and reporting. Ensure data integrity, accuracy, and consistency within the data models.

3. Dashboard and Report Development: Design, develop, and deploy interactive dashboards and reports using Sigma Computing.

4. Data Integration: Integrate data from various systems and sources to provide a comprehensive view of business performance. Ensure data consistency and accuracy across different data sets.

5. Performance Optimization: Identify performance bottlenecks in BI solutions and optimize query performance, data processing, and report rendering. Continuously monitor and fine-tune the performance of BI applications.

6. Data Governance: Ensure compliance with data governance policies and standards. Implement appropriate security measures to protect sensitive data.

7. Documentation and Training: Document the technical specifications, data models, ETL processes, and BI solution configurations.

8. Ensure that the proposed solutions meet business needs and requirements.

9. Create and own Business/Functional Requirement Documents.

10. Monitor or track project milestones and deliverables.

11. Submit project deliverables, ensuring adherence to quality standards.

Qualifications and Skills:

1. Master's or Bachelor's degree in IT or a relevant field, with a minimum of 2-4 years of experience in Business Analysis or a related field.

2. Proven experience as a BI Developer or in a similar role.

3. Fundamental analytical and conceptual thinking skills, with demonstrated skills in managing projects on implementation of platform solutions.

4. Excellent planning, organizational, and time management skills.

5. Strong understanding of data warehousing concepts, dimensional modelling, and ETL processes.

6. Proficiency in SQL and Snowflake for data extraction, manipulation, and analysis.

7. Experience with one or more BI tools such as Sigma Computing.

8. Knowledge of data visualization best practices and the ability to create compelling data visualizations.

9. Solid problem-solving and analytical skills with a detail-oriented mindset.

10. Strong communication and interpersonal skills to collaborate effectively with different stakeholders.

11. Ability to work independently and manage multiple priorities in a fast-paced environment.

12. Knowledge of data governance principles and security best practices.

13. Candidates with experience in managing implementation projects of platform solutions for U.S. clients would be preferred.

14. Exposure to the U.S. debt collection industry is a plus.

Read more
leading digital testing boutique firm

Agency job
via Peak Hire Solutions by Dhara Thakkar
Delhi
5 - 8 yrs
₹11L - ₹15L / yr
SQL
Software Testing (QA)
Data modeling
ETL
Data extraction
+14 more

Review Criteria

  • Strong Data / ETL Test Engineer
  • 5+ years of overall experience in Testing/QA
  • 3+ years of hands-on end-to-end data testing/ETL testing experience, covering data extraction, transformation, loading validation, reconciliation, working across BI / Analytics / Data Warehouse / e-Governance platforms
  • Must have strong understanding and hands-on exposure to Data Warehouse concepts and processes, including fact & dimension tables, data models, data flows, aggregations, and historical data handling.
  • Must have experience in Data Migration Testing, including validation of completeness, correctness, reconciliation, and post-migration verification from legacy platforms to upgraded/cloud-based data platforms.
  • Must have independently handled test strategy, test planning, test case design, execution, defect management, and regression cycles for ETL and BI testing
  • Hands-on experience with ETL tools and SQL-based data validation is mandatory (Working knowledge or hands-on exposure to Redshift and/or Qlik will be considered sufficient)
  • Must hold a Bachelor's degree (B.E./B.Tech); otherwise, must hold a Master's degree (M.Tech/MCA/M.Sc/MS)
  • Must demonstrate strong verbal and written communication skills, with the ability to work closely with business stakeholders, data teams, and QA leadership
  • Mandatory Location: Candidate must be based within Delhi NCR (100 km radius)


Preferred

  • Relevant certifications such as ISTQB or Data Analytics / BI certifications (Power BI, Snowflake, AWS, etc.)


Job Specific Criteria

  • CV Attachment is mandatory
  • Do you have experience working on Government projects/companies, mention brief about project?
  • Do you have experience working on enterprise projects/companies, mention brief about project?
  • Please mention the names of 2 key projects you have worked on related to Data Warehouse / ETL / BI testing?
  • Do you hold any ISTQB or Data / BI certifications (Power BI, Snowflake, AWS, etc.)?
  • Do you have exposure to BI tools such as Qlik?
  • Are you willing to relocate to Delhi and why (if not from Delhi)?
  • Are you available for a face-to-face round?


Role & Responsibilities

  • 5 years’ experience in Data Testing across BI / Analytics platforms, with at least 2 large-scale enterprise Data Warehouse / Analytics / e-Governance programs
  • Proficiency in ETL, Data Warehouse, and BI report/dashboard validation, including test planning, data reconciliation, acceptance criteria definition, defect triage, and regression cycle management for BI landscapes
  • Proficient in analyzing business requirements and data mapping specifications (BRDs, Data Models, Source-to-Target Mappings, User Stories, Reports, Dashboards) to define comprehensive test scenarios and test cases
  • Ability to review high-level and low-level data models, ETL workflows, API specifications, and business logic implementations to design test strategies ensuring accuracy, consistency, and performance of data pipelines
  • Ability to test and validate data migrated from an old platform to an upgraded platform and ensure the completeness and correctness of the migration (see the reconciliation sketch after this list)
  • Experience conducting tests of migrated data and defining test scenarios and test cases for the same
  • Experience with BI tools like Qlik, ETL platforms, Data Lake platforms, and Redshift to support end-to-end validation
  • Exposure to Data Quality, Metadata Management, and Data Governance frameworks, ensuring KPIs, metrics, and dashboards align with business expectations
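
The reconciliation checks referenced above can be sketched roughly as below; sqlite3 stands in for the real source and target connections, and the table and column names are hypothetical.

import sqlite3  # stand-in; swap for the real source/target drivers

def fetch_one(conn, sql):
    return conn.execute(sql).fetchone()[0]

def reconcile(source_conn, target_conn, table, amount_col):
    """Compare row counts and a column total between source and target."""
    src_count = fetch_one(source_conn, f"SELECT COUNT(*) FROM {table}")
    tgt_count = fetch_one(target_conn, f"SELECT COUNT(*) FROM {table}")
    src_sum = fetch_one(source_conn, f"SELECT SUM({amount_col}) FROM {table}")
    tgt_sum = fetch_one(target_conn, f"SELECT SUM({amount_col}) FROM {table}")

    assert src_count == tgt_count, f"{table}: row counts differ ({src_count} vs {tgt_count})"
    assert src_sum == tgt_sum, f"{table}: {amount_col} totals differ ({src_sum} vs {tgt_sum})"
    return {"table": table, "rows": src_count, "total": src_sum}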


Ideal Candidate

  • 5 years’ experience in Data Testing across BI / Analytics platforms, with at least 2 large-scale enterprise Data Warehouse / Analytics / e-Governance programs
  • Proficiency in ETL, Data Warehouse, and BI report/dashboard validation, including test planning, data reconciliation, acceptance criteria definition, defect triage, and regression cycle management for BI landscapes
  • Proficient in analyzing business requirements and data mapping specifications (BRDs, Data Models, Source-to-Target Mappings, User Stories, Reports, Dashboards) to define comprehensive test scenarios and test cases
  • Ability to review high-level and low-level data models, ETL workflows, API specifications, and business logic implementations to design test strategies ensuring accuracy, consistency, and performance of data pipelines
  • Ability to test and validate data migrated from an old platform to an upgraded platform and ensure the completeness and correctness of the migration
  • Experience conducting tests of migrated data and defining test scenarios and test cases for the same
  • Experience with BI tools like Qlik, ETL platforms, Data Lake platforms, and Redshift to support end-to-end validation
  • Exposure to Data Quality, Metadata Management, and Data Governance frameworks, ensuring KPIs, metrics, and dashboards align with business expectations



Read more
AI-First Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data engineering
Data architecture
SQL
Data modeling
GCS
+47 more

ROLES AND RESPONSIBILITIES:

You will be responsible for architecting, implementing, and optimizing Dremio-based data Lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.


IDEAL CANDIDATE:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.


PREFERRED:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.
Read more
Global digital transformation solutions provider.

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
7 - 9 yrs
₹15L - ₹28L / yr
databricks
skill iconPython
SQL
PySpark
skill iconAmazon Web Services (AWS)
+9 more

Role Proficiency:

This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.


Skill Examples:

  1. Proficiency in SQL, Python, or other programming languages used for data manipulation.
  2. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
  3. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
  4. Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
  5. Experience in performance tuning.
  6. Experience in data warehouse design and cost improvements.
  7. Apply and optimize data models for efficient storage retrieval and processing of large datasets.
  8. Communicate and explain design/development aspects to customers.
  9. Estimate time and resource requirements for developing/debugging features/components.
  10. Participate in RFP responses and solutioning.
  11. Mentor team members and guide them in relevant upskilling and certification.

 

Knowledge Examples:

  1. Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
  2. Proficient in SQL for analytics and windowing functions.
  3. Understanding of data schemas and models.
  4. Familiarity with domain-related data.
  5. Knowledge of data warehouse optimization techniques.
  6. Understanding of data security concepts.
  7. Awareness of patterns, frameworks, and automation practices.


 

Additional Comments:

# of Resources: 22 | Role(s): Technical Role | Location(s): India | Planned Start Date: 1/1/2026 | Planned End Date: 6/30/2026

Project Overview:

Role Scope / Deliverables: We are seeking a highly skilled Data Engineer with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team on or before the first week of December 2025.

The candidate will be responsible for designing, developing, and optimizing large-scale data pipelines and analytics solutions that drive business insights and operational efficiency.

Design, build, and maintain scalable data pipelines using Databricks and PySpark.

Develop and optimize complex SQL queries for data extraction, transformation, and analysis.

Implement data integration solutions across multiple AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).

Collaborate with analytics, data science, and business teams to deliver clean, reliable, and timely datasets.

Ensure data quality, performance, and reliability across data workflows.

Participate in code reviews, data architecture discussions, and performance optimization initiatives.

Support migration and modernization efforts for legacy data systems to modern cloud-based solutions.
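
For illustration, a minimal Databricks/PySpark pipeline of the kind described above; the S3 path, schema, and column names are placeholders rather than project specifics.

# Minimal sketch: ingest raw JSON from S3, deduplicate with a window function,
# and publish a curated Delta table. Paths and names are illustrative.
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/orders/")  # placeholder path

# Keep only the latest record per order_id.
w = Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
curated = (
    raw.withColumn("rn", F.row_number().over(w))
       .filter(F.col("rn") == 1)
       .drop("rn")
       .withColumn("order_date", F.to_date("order_ts"))
)

# Write as a partitioned Delta table for downstream analytics.
(curated.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("analytics.curated_orders"))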


Key Skills:

Hands-on experience with Databricks, PySpark & Python for building ETL/ELT pipelines.

Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).

Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).

Experience with data modeling, schema design, and performance optimization.

Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration (Airflow preferred).

Excellent problem-solving, communication, and collaboration skills.

 

Skills: Databricks, PySpark & Python, SQL, AWS Services

 

Must-Haves

Python/PySpark (5+ years), SQL (5+ years), Databricks (3+ years), AWS Services (3+ years), ETL tools (Informatica, Glue, DataProc) (3+ years)

Hands-on experience with Databricks, PySpark & Python for ETL/ELT pipelines.

Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).

Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).

Experience with data modeling, schema design, and performance optimization.

Familiarity with CI/CD pipelines, Git, and workflow orchestration (Airflow preferred).


******

Notice period - Immediate to 15 days

Location: Bangalore

Read more
bootcoding
Shruti Choubey
Posted by Shruti Choubey
Remote, Nagpur
3 - 8 yrs
₹6L - ₹15L / yr
OutSystems
RESTful APIs
skill iconGit
CI/CD
Data modeling
+2 more

Key Responsibilities

  • Design, develop, and maintain scalable applications using the OutSystems platform.
  • Build modern Reactive Web and Mobile applications aligned with business and technical requirements.
  • Implement integrations with REST APIs, databases, and external systems.
  • Collaborate with architects, tech leads, and cross-functional teams for smooth deployments.
  • Create reusable, maintainable components following OutSystems best practices.
  • Participate in code reviews, unit testing, debugging, and performance optimization.
  • Ensure adherence to scalability, security, and deployment automation guidelines.
  • Stay updated on new OutSystems capabilities and contribute to continuous improvement.


Read more
Remote only
0 - 1 yrs
₹0 / mo
Power Systems
Power tools
Microsoft Office
Office 365
Data modeling
+4 more

We are looking for motivated Power Platform Interns with an interest in Power Apps and Power Automate to join our team remotely for a 3-month internship. This role is ideal for students or graduates who want hands-on experience in building low-code applications, workflow automation, and process optimization. While this is an unpaid internship, interns who successfully complete the program will receive a Completion Certificate and a Letter of Recommendation

Read more
AI company

AI company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data architecture
Data engineering
SQL
Data modeling
GCS
+21 more

Review Criteria

  • Strong Dremio / Lakehouse Data Architect profile
  • 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
  • Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
  • Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
  • Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
  • Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
  • Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
  • Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
  • Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies


Preferred

  • Preferred (Nice-to-have) – Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments


Job Specific Criteria

  • CV Attachment is mandatory
  • How many years of experience you have with Dremio?
  • Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
  • Are you okay with 3 Days WFO?
  • Virtual Interview requires video to be on, are you okay with it?


Role & Responsibilities

You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.

  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.


Ideal Candidate

  • Bachelor’s or Master’s in Computer Science, Information Systems, or a related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.
Read more
Tecblic Private LImited
Ahmedabad
5 - 6 yrs
₹5L - ₹15L / yr
Windows Azure
skill iconPython
SQL
Data Warehouse (DWH)
Data modeling
+5 more

Job Description: Data Engineer

Location: Ahmedabad

Experience: 5 to 6 years

Employment Type: Full-Time



We are looking for a highly motivated and experienced Data Engineer to join our  team. As a Data Engineer, you will play a critical role in designing, building, and optimizing data pipelines that ensure the availability, reliability, and performance of our data infrastructure. You will collaborate closely with data scientists, analysts, and cross-functional teams to provide timely and efficient data solutions.



Responsibilities


● Design and optimize data pipelines for various data sources


● Design and implement efficient data storage and retrieval mechanisms


● Develop data modelling solutions and data validation mechanisms


● Troubleshoot data-related issues and recommend process improvements


● Collaborate with data scientists and stakeholders to provide data-driven insights and solutions


● Coach and mentor junior data engineers in the team
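
As a small illustration of the validation mechanisms mentioned above, the sketch below runs generic quality checks on a pandas DataFrame before it is loaded downstream; the column names and rules are assumptions for the example only.

# Minimal sketch of pipeline data-validation checks using pandas.
# Column names and thresholds are illustrative assumptions.
import pandas as pd

def validate_orders(df: pd.DataFrame) -> list:
    """Return a list of human-readable data-quality failures (empty = pass)."""
    failures = []
    if df["order_id"].isnull().any():
        failures.append("order_id contains nulls")
    if df["order_id"].duplicated().any():
        failures.append("order_id contains duplicates")
    if (df["amount"] < 0).any():
        failures.append("amount contains negative values")
    if df["order_date"].max() > pd.Timestamp.today():
        failures.append("order_date contains future dates")
    return failures

sample = pd.DataFrame({
    "order_id": [1, 2, 2],
    "amount": [100.0, -5.0, 30.0],
    "order_date": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-02"]),
})
for issue in validate_orders(sample):
    print("FAILED:", issue)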




Skills Required: 


● Minimum 4 years of experience in data engineering or related field


● Proficient in designing and optimizing data pipelines and data modeling


● Strong programming expertise in Python


● Hands-on experience with big data technologies such as Hadoop, Spark, and Hive


● Extensive experience with cloud data services such as AWS, Azure, and GCP


● Advanced knowledge of database technologies like SQL, NoSQL, and data warehousing


● Knowledge of distributed computing and storage systems


● Familiarity with DevOps practices, Power Automate, and Microsoft Fabric will be an added advantage


● Strong analytical and problem-solving skills with outstanding communication and collaboration abilities




Qualifications


  • Bachelor's degree in Computer Science, Data Science, or a related computer field


Read more
Financial Services Company

Financial Services Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
2 - 5 yrs
₹8L - ₹10.7L / yr
SQL Azure
databricks
ETL
SQL
Data modeling
+4 more

ROLES AND RESPONSIBILITIES:

We are seeking a skilled Data Engineer who can work independently on data pipeline development, troubleshooting, and optimisation tasks. The ideal candidate will have strong SQL skills, hands-on experience with Databricks, and familiarity with cloud platforms such as AWS and Azure. You will be responsible for building and maintaining reliable data workflows, supporting analytical teams, and ensuring high-quality, secure, and accessible data across the organisation.


KEY RESPONSIBILITIES:

  • Design, develop, and maintain scalable data pipelines and ETL/ELT workflows.
  • Build, optimise, and troubleshoot SQL queries, transformations, and Databricks data processes.
  • Work with large datasets to deliver efficient, reliable, and high-performing data solutions.
  • Collaborate closely with analysts, data scientists, and business teams to support data requirements.
  • Ensure data quality, availability, and security across systems and workflows.
  • Monitor pipeline performance, diagnose issues, and implement improvements.
  • Contribute to documentation, standards, and best practices for data engineering processes.


IDEAL CANDIDATE:

  • Proven experience as a Data Engineer or in a similar data-focused role (3+ years).
  • Strong SQL skills with experience writing and optimising complex queries.
  • Hands-on experience with Databricks for data engineering tasks.
  • Experience with cloud platforms such as AWS and Azure.
  • Understanding of ETL/ELT concepts, data modelling, and pipeline orchestration.
  • Familiarity with Power BI and data integration with BI tools.
  • Strong analytical and troubleshooting skills, with the ability to work independently.
  • Experience working end-to-end on data engineering workflows and solutions.


PERKS, BENEFITS AND WORK CULTURE:

Our people define our passion and our audacious, incredibly rewarding achievements. The company is one of India’s most diversified Non-banking financial companies, and among Asia’s top 10 Large workplaces. If you have the drive to get ahead, we can help find you an opportunity at any of the 500+ locations we’re present in India.

Read more
Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Pune
2 - 5 yrs
₹8L - ₹11L / yr
Data modeling
ETL

Strong Data engineer profile

Mandatory (Experience 1): Must have 2+ years of hands-on Data Engineering experience.

Mandatory (Experience 2): Must have end-to-end experience in building & maintaining ETL/ELT pipelines (not just BI/reporting).

Mandatory (Technical 1): Must have strong SQL capability (complex queries + optimization).

Mandatory (Technical 2): Must have hands-on Databricks experience.

Mandatory (Role Requirement): Must be able to work independently, troubleshoot data issues, and manage large datasets.

Read more
Neuvamacro Technology Pvt Ltd
Remote only
5 - 10 yrs
₹13L - ₹18L / yr
PowerBI
Office 365
Microsoft Dynamics
skill iconAmazon Web Services (AWS)
skill iconJavascript
+10 more

We are seeking a highly skilled Power Platform Developer with deep expertise in designing, developing, and deploying solutions using Microsoft Power Platform. The ideal candidate will have strong knowledge of Power Apps, Power Automate, Power BI, Power Pages, and Dataverse, along with integration capabilities across Microsoft 365, Azure, and third-party systems.


Key Responsibilities

  • Solution Development:
  • Design and build custom applications using Power Apps (Canvas & Model-Driven).
  • Develop automated workflows using Power Automate for business process optimization.
  • Create interactive dashboards and reports using Power BI for data visualization and analytics.
  • Configure and manage Dataverse for secure data storage and modelling.
  • Develop and maintain Power Pages for external-facing portals.
  • Integration & Customization:
  • Integrate Power Platform solutions with Microsoft 365, Dynamics 365, Azure services, and external APIs.
  • Implement custom connectors and leverage Power Platform SDK for advanced scenarios.
  • Utilize Azure Functions, Logic Apps, and REST APIs for extended functionality.
  • Governance & Security:
  • Apply best practices for environment management, ALM (Application Lifecycle Management), and solution deployment.
  • Ensure compliance with security, data governance, and licensing guidelines.
  • Implement role-based access control and manage user permissions.
  • Performance & Optimization:
  • Monitor and optimize app performance, workflow efficiency, and data refresh strategies.
  • Troubleshoot and resolve technical issues promptly.
  • Collaboration & Documentation:
  • Work closely with business stakeholders to gather requirements and translate them into technical solutions.
  • Document architecture, workflows, and processes for maintainability.


Required Skills & Qualifications

  • Technical Expertise:
  • Strong proficiency in Power Apps (Canvas & Model-Driven), Power Automate, Power BI, Power Pages, and Dataverse.
  • Experience with Microsoft 365, Dynamics 365, and Azure services.
  • Knowledge of JavaScript, TypeScript, C#, .NET, and Power Fx for custom development.
  • Familiarity with SQL, DAX, and data modeling.
  • Additional Skills:
  • Understanding of ALM practices, solution packaging, and deployment pipelines.
  • Experience with Git, Azure DevOps, or similar tools for version control and CI/CD.
  • Strong problem-solving and analytical skills.
  • Certifications (Preferred):
  • Microsoft Certified: Power Platform Developer Associate.
  • Microsoft Certified: Power Platform Solution Architect Expert.


Soft Skills

  • Excellent communication and collaboration skills.
  • Ability to work in agile environments and manage multiple priorities.
  • Strong documentation and presentation abilities.

 

Read more
venanalytics

at venanalytics

2 candid answers
Rincy jain
Posted by Rincy jain
Remote, Mumbai
3 - 4 yrs
₹7L - ₹10L / yr
skill iconPython
SQL
PowerBI
Client Servicing
Team Management
+6 more

About Ven Analytics


At Ven Analytics, we don’t just crunch numbers — we decode them to uncover insights that drive real business impact. We’re a data-driven analytics company that partners with high-growth startups and enterprises to build powerful data products, business intelligence systems, and scalable reporting solutions. With a focus on innovation, collaboration, and continuous learning, we empower our teams to solve real-world business problems using the power of data.


Role Overview


We’re looking for a Power BI Data Analyst who is not just proficient in tools but passionate about building insightful, scalable, and high-performing dashboards. The ideal candidate should have strong fundamentals in data modeling, a flair for storytelling through data, and the technical skills to implement robust data solutions using Power BI, Python, and SQL.


Key Responsibilities


  • Technical Expertise: Develop scalable, accurate, and maintainable data models using Power BI, with a clear understanding of Data Modeling, DAX, Power Query, and visualization principles.


  • Programming Proficiency: Use SQL and Python for complex data manipulation, automation, and analysis.


  • Business Problem Translation: Collaborate with stakeholders to convert business problems into structured data-centric solutions considering performance, scalability, and commercial goals.


  • Hypothesis Development: Break down complex use-cases into testable hypotheses and define relevant datasets required for evaluation.


  • Solution Design: Create wireframes, proof-of-concepts (POC), and final dashboards in line with business requirements.


  • Dashboard Quality: Ensure dashboards meet high standards of data accuracy, visual clarity, performance, and support SLAs.


  • Performance Optimization: Continuously enhance user experience by improving performance, maintainability, and scalability of Power BI solutions.


  • Troubleshooting & Support: Quick resolution of access, latency, and data issues as per defined SLAs.


  • Power BI Development: Use Power BI Desktop for report building and the Power BI Service for distribution.


  • Backend development: Develop optimized SQL queries that are easy to consume, maintain and debug.


  • Version Control: Strict control on versions by tracking CRs and Bugfixes. Ensuring the maintenance of Prod and Dev dashboards. 


  • Client Servicing : Engage with clients to understand their data needs, gather requirements, present insights, and ensure timely, clear communication throughout project cycles.


  • Team Management : Lead and mentor a small team by assigning tasks, reviewing work quality, guiding technical problem-solving, and ensuring timely delivery of dashboards and reports..


Must-Have Skills


  • Strong experience building robust data models in Power BI
  • Hands-on expertise with DAX (complex measures and calculated columns)
  • Proficiency in M Language (Power Query) beyond drag-and-drop UI
  • Clear understanding of data visualization best practices (less fluff, more insight)
  • Solid grasp of SQL and Python for data processing
  • Strong analytical thinking and ability to craft compelling data stories
  • Client Servicing Background.
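
For illustration of the modeling fundamentals above, a hedged sketch that shapes a flat extract into a star schema (one fact table plus a conformed dimension) before it is loaded into Power BI; file, table, and column names are assumptions.

# Minimal sketch: split a flat sales extract into a dimension and a fact table
# so the Power BI model stays small and relationships stay clean.
import pandas as pd

flat = pd.read_csv("sales_extract.csv")  # placeholder source file

# Dimension: one row per customer, with a surrogate key.
dim_customer = (
    flat[["customer_id", "customer_name", "region"]]
    .drop_duplicates()
    .reset_index(drop=True)
)
dim_customer["customer_key"] = dim_customer.index + 1

# Fact: measures plus foreign keys only.
fact_sales = (
    flat.merge(dim_customer[["customer_id", "customer_key"]], on="customer_id")
        [["order_id", "order_date", "customer_key", "quantity", "net_amount"]]
)

dim_customer.to_csv("dim_customer.csv", index=False)
fact_sales.to_csv("fact_sales.csv", index=False)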


Good-to-Have (Bonus Points)


  • Experience using DAX Studio and Tabular Editor
  • Prior work in a high-volume data processing production environment
  • Exposure to modern CI/CD practices or version control with BI tools

 

Why Join Ven Analytics?


  • Be part of a fast-growing startup that puts data at the heart of every decision.
  • Opportunity to work on high-impact, real-world business challenges.
  • Collaborative, transparent, and learning-oriented work environment.
  • Flexible work culture and focus on career development.


Read more
Kanerika Software

at Kanerika Software

3 candid answers
2 recruiters
Soyam Gupta
Posted by Soyam Gupta
Hyderabad, Indore, Ahmedabad
4 - 6 yrs
₹15L - ₹25L / yr
Data management
Meta-data management
Data-flow analysis
Windows Azure
Data modeling
+7 more

Who we are :


Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI. We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.


What You Will Do :


As a Data Governance Developer at Kanerika, you will be responsible for building and managing metadata, lineage, and compliance frameworks across the organization's data ecosystem.


Required Qualifications :


- 4 to 6 years of experience in data governance or data management.


- Strong experience in Microsoft Purview and Informatica governance tools.


- Proficient in tracking and visualizing data lineage across systems.


- Familiar with Azure Data Factory, Talend, dbt, and other integration tools.


- Understanding of data regulations : GDPR, CCPA, SOX, HIPAA.


- Ability to translate technical data governance concepts for business stakeholders.


Tools & Technologies :


- Microsoft Purview, Collibra, Atlan, Informatica Axon, IBM IG Catalog


- Experience in Microsoft Purview areas :


1. Label creation and policy management


2. Publish/Auto-labeling


3. Data Loss Prevention & Compliance handling


4. Compliance Manager, Communication Compliance, Insider Risk Management


5. Records Management, Unified Catalog, Information Barriers


6. eDiscovery, Data Map, Lifecycle Management, Compliance Alerts, Audit


7. DSPM, Data Policy


Key Responsibilities :


- Set up and manage Microsoft Purview accounts, collections, and access controls (RBAC).


- Integrate Purview with data sources : Azure Data Lake, Synapse, SQL DB, Power BI, Snowflake.


- Schedule and monitor metadata scanning and classification jobs.


- Implement and maintain collection hierarchies aligned with data ownership.


- Design metadata ingestion workflows for technical, business, and operational metadata.


- Enrich data assets with business context : descriptions, glossary terms, tags.


- Synchronize metadata across tools using REST APIs, PowerShell, or ADF.


- Validate end-to-end lineage for datasets and reports (ADF → Synapse → Power BI).


- Resolve lineage gaps or failures using mapping corrections or scripts.


- Perform impact analysis to support downstream data consumers.


- Create custom classification rules for sensitive data (PII, PCI, PHI).


- Apply and manage Microsoft Purview sensitivity labels and policies.


- Integrate with Microsoft Information Protection (MIP) for DLP.


- Manage business glossary in collaboration with domain owners and stewards.


- Implement approval workflows and term governance.


- Conduct audits for glossary and metadata quality and consistency.


- Automate Purview operations using :


- PowerShell, Azure Functions, Logic Apps, REST APIs


- Build pipelines for dynamic source registration and scanning.


- Automate tagging, lineage, and glossary term mapping.


- Enable operational insights using Power BI, Synapse Link, Azure Monitor, and governance APIs.
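
As a hedged illustration of the REST-driven automation listed above, the sketch below authenticates with Azure AD and queries the Purview catalog search endpoint from Python; the account name, endpoint path, API version, and response fields are assumptions to be verified against the Purview API in use.

# Minimal sketch: search the Microsoft Purview catalog via its REST API using an
# Azure AD token. Account name, endpoint path, and payload are assumptions.
import requests
from azure.identity import DefaultAzureCredential

account = "contoso-purview"  # placeholder Purview account name
endpoint = f"https://{account}.purview.azure.com"

credential = DefaultAzureCredential()
token = credential.get_token("https://purview.azure.net/.default").token

resp = requests.post(
    f"{endpoint}/catalog/api/search/query?api-version=2022-03-01-preview",
    headers={"Authorization": f"Bearer {token}"},
    json={"keywords": "customer", "limit": 10},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json().get("value", []):
    print(item.get("qualifiedName"), "|", item.get("entityType"))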


Read more
Data Havn

Data Havn

Agency job
via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2.5 - 4.5 yrs
₹10L - ₹20L / yr
skill iconPython
SQL
Google Cloud Platform (GCP)
SQL server
ETL
+9 more

About the Role:


We are seeking a talented Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.

Responsibilities:

  • Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
  • Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
  • Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
  • Team Management: Ability to lead and manage a team.
  • Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
  • Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
  • Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
  • Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.

 

 Skills:

  • Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
  • Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
  • Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
  • Understanding of data modeling and data architecture concepts.
  • Experience with ETL/ELT tools and frameworks.
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.

Preferred Qualifications:

  • Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
  • Knowledge of machine learning and artificial intelligence concepts.
  • Experience with data visualization tools (e.g., Tableau, Power BI).
  • Certification in cloud platforms or data engineering.
Read more
 Global Digital Transformation Solutions Provider

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 8 yrs
₹20L - ₹26L / yr
Large Language Models (LLM)
Prompt engineering
Knowledge base
Data modeling
databricks
+2 more

MUST-HAVES:

  • LLM, AI, and Prompt Engineering; LLM Integration & Prompt Engineering
  • Context & Knowledge Base Design
  • Experience running LLM evals


NOTICE PERIOD: Immediate – 30 Days


SKILLS: LLM, AI, PROMPT ENGINEERING


NICE TO HAVES:

Data Literacy & Modelling Awareness; Familiarity with Databricks, AWS, and ChatGPT Environments



ROLE PROFICIENCY:

Role Scope / Deliverables:

  • Scope of Role: Serve as the link between business intelligence, data engineering, and AI application teams, ensuring the Large Language Model (LLM) interacts effectively with the modeled dataset.
  • Define and curate the context and knowledge base that enables GPT to provide accurate, relevant, and compliant business insights.
  • Collaborate with Data Analysts and System SMEs to identify, structure, and tag data elements that feed the LLM environment.
  • Design, test, and refine prompt strategies and context frameworks that align GPT outputs with business objectives.
  • Conduct evaluation and performance testing (evals) to validate LLM responses for accuracy, completeness, and relevance.
  • Partner with IT and governance stakeholders to ensure secure, ethical, and controlled AI behavior within enterprise boundaries.



KEY DELIVERABLES:

  • LLM Interaction Design Framework: Documentation of how GPT connects to the modeled dataset, including context injection, prompt templates, and retrieval logic.
  • Knowledge Base Configuration: Curated and structured domain knowledge to enable precise and useful GPT responses (e.g., commercial definitions, data context, business rules).
  • Evaluation Scripts & Test Results: Defined eval sets, scoring criteria, and output analysis to measure GPT accuracy and quality over time.
  • Prompt Library & Usage Guidelines: Standardized prompts and design patterns to ensure consistent business interactions and outcomes.
  • AI Performance Dashboard / Reporting: Visualizations or reports summarizing GPT response quality, usage trends, and continuous improvement metrics.
  • Governance & Compliance Documentation: Inputs to data security, bias prevention, and responsible AI practices in collaboration with IT and compliance teams.
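
As a hedged illustration of the evaluation deliverable above, the sketch below scores model answers against an expected-keyword set; call_llm is a hypothetical stand-in for the team's actual GPT/Databricks endpoint, and the keyword rule is a deliberately simple placeholder for real scoring criteria.

# Minimal eval-harness sketch: run prompts against an LLM and score responses.
# call_llm is a hypothetical placeholder; wire it to the real model endpoint.
from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str
    required_keywords: list

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around the production GPT endpoint."""
    raise NotImplementedError("connect this to the real model endpoint")

def score(response: str, case: EvalCase) -> float:
    """Fraction of required keywords present in the response (0.0 to 1.0)."""
    hits = sum(1 for kw in case.required_keywords if kw.lower() in response.lower())
    return hits / len(case.required_keywords)

eval_set = [
    EvalCase("Summarise Q3 sales performance for the EMEA region.",
             ["EMEA", "Q3", "revenue"]),
    EvalCase("Which data sources feed the commercial dashboard?",
             ["CRM", "ERP"]),
]

def run_evals(cases) -> None:
    for case in cases:
        print(f"{score(call_llm(case.prompt), case):.2f}  {case.prompt}")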



KEY SKILLS:

Technical & Analytical Skills:

  • LLM Integration & Prompt Engineering – Understanding of how GPT models interact with structured and unstructured data to generate business-relevant insights.
  • Context & Knowledge Base Design – Skilled in curating, structuring, and managing contextual data to optimize GPT accuracy and reliability.
  • Evaluation & Testing Methods – Experience running LLM evals, defining scoring criteria, and assessing model quality across use cases.
  • Data Literacy & Modeling Awareness – Familiar with relational and analytical data models to ensure alignment between data structures and AI responses.
  • Familiarity with Databricks, AWS, and ChatGPT Environments – Capable of working in cloud-based analytics and AI environments for development, testing, and deployment.
  • Scripting & Query Skills (e.g., SQL, Python) – Ability to extract, transform, and validate data for model training and evaluation workflows.
  • Business & Collaboration Skills: Cross-Functional Collaboration – Works effectively with business, data, and IT teams to align GPT capabilities with business objectives.
  • Analytical Thinking & Problem Solving – Evaluates LLM outputs critically, identifies improvement opportunities, and translates findings into actionable refinements.
  • Commercial Context Awareness – Understands how sales and marketing intelligence data should be represented and leveraged by GPT.
  • Governance & Responsible AI Mindset – Applies enterprise AI standards for data security, privacy, and ethical use.
  • Communication & Documentation – Clearly articulates AI logic, context structures, and testing results for both technical and non-technical audiences.


Read more
Gyansys Infotech
Bengaluru (Bangalore)
5 - 8 yrs
₹10L - ₹15L / yr
Tableau
PowerBI
Data Visualization
Dashboard
DAX
+1 more

Location: Bangalore

Experience: 5–8 Years

CTC: Up to 22 LPA


Required Skills:

  • Strong expertise in Power BI (dashboarding, DAX, data modeling, report building).
  • Basic to working knowledge of Tableau.
  • Solid understanding of SQL and relational databases.
  • Experience in connecting to various data sources (Excel, SQL Server, APIs, etc.).
  • Strong analytical, problem-solving, and communication skills.


Nice to Have:

  • Experience with cloud data platforms (Azure, AWS, GCP).
  • Knowledge of data warehousing concepts and ETL processes.


Read more
Nyx Wolves
Remote only
5 - 7 yrs
₹11L - ₹13L / yr
SQL
Data modeling
Web performance optimization
Data engineering

Now Hiring: Tableau Developer (Banking Domain) 🚀

We’re looking for a Tableau pro with 6+ years of experience to design and optimize dashboards for Banking & Financial Services.


🔹 Design & optimize interactive Tableau dashboards for large banking datasets

🔹 Translate KPIs into scalable reporting solutions

🔹 Ensure compliance with regulations like KYC, AML, Basel III, PCI-DSS

🔹 Collaborate with business analysts, data engineers, and banking experts

🔹 Bring deep knowledge of SQL, data modeling, and performance optimization


🌍 Location: Remote

📊 Domain Expertise: Banking / Financial Services


✨ Preferred experience with cloud data platforms (AWS, Azure, GCP) & certifications in Tableau are a big plus!


Bring your data visualization skills to transform banking intelligence & compliance reporting.


Read more
MindCrew Technologies

at MindCrew Technologies

3 recruiters
Agency job
Pune
8 - 12 yrs
₹10L - ₹15L / yr
Data engineering
Data modeling
Snow flake schema
ETL
ETL architecture
+3 more

Job Title: Lead Data Engineer

📍 Location: Pune

🧾 Experience: 10+ Years

💰 Budget: Up to 1.7 LPM


Responsibilities

  • Collaborate with Data & ETL teams to review, optimize, and scale data architectures within Snowflake.
  • Design, develop, and maintain efficient ETL/ELT pipelines and robust data models.
  • Optimize SQL queries for performance and cost efficiency.
  • Ensure data quality, reliability, and security across pipelines and datasets.
  • Implement Snowflake best practices for performance, scaling, and governance.
  • Participate in code reviews, knowledge sharing, and mentoring within the data engineering team.
  • Support BI and analytics initiatives by enabling high-quality, well-modeled datasets.
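
For illustration of the optimization work above, a hedged sketch using the snowflake-connector-python package; connection parameters, table names, and the clustering key are assumptions.

# Minimal sketch: add a clustering key to a large fact table and fetch the
# latest order per customer in a single windowed pass. Names are illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",        # placeholder account locator
    user="svc_etl",
    password="***",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="CURATED",
)
cur = conn.cursor()

# Clustering large fact tables on common filter columns can reduce scan costs.
cur.execute("ALTER TABLE fct_orders CLUSTER BY (order_date)")

cur.execute("""
    SELECT customer_id, order_id, order_date, amount
    FROM   fct_orders
    QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) = 1
""")
for row in cur.fetchmany(10):
    print(row)

cur.close()
conn.close()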


Read more
MindCrew Technologies

at MindCrew Technologies

3 recruiters
Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Pune
10 - 14 yrs
₹10L - ₹15L / yr
Snowflake
ETL
SQL
Snow flake schema
Data modeling
+3 more

Exp: 10+ Years

CTC: 1.7 LPM

Location: Pune

Snowflake Expertise Profile


Should hold 10+ years of experience, with strong skills and a core understanding of cloud data warehouse principles, and extensive experience in designing, building, optimizing, and maintaining robust and scalable data solutions on the Snowflake platform.

Possesses a strong background in data modelling, ETL/ELT, SQL development, performance tuning, scaling, monitoring and security handling.


Responsibilities:

* Collaboration with Data and ETL team to review code, understand current architecture and help improve it based on Snowflake offerings and experience

* Review and implement best practices to design, develop, maintain, scale, and efficiently monitor data pipelines and data models on the Snowflake platform for ETL or BI.

* Optimize complex SQL queries for data extraction, transformation, and loading within Snowflake.

* Ensure data quality, integrity, and security within the Snowflake environment.

* Participate in code reviews and contribute to the team's development standards.

Education:

* Bachelor’s degree in Computer Science, Data Science, Information Technology, or an equivalent field.

* Relevant Snowflake certifications are a plus (e.g., Snowflake certified Pro / Architecture / Advanced).

Read more
QAgile Services

at QAgile Services

1 recruiter
Radhika Chotai
Posted by Radhika Chotai
Mumbai
2 - 4 yrs
₹7L - ₹12L / yr
Oracle SQL Developer
PowerBI
Data modeling
Performance tuning
skill iconXML
+2 more

Job Description: Oracle BI Publisher Developer

Position Type

• Work Type: Full-time

• Employment Type: Contract


Experience Required

• Minimum 1 year of hands-on experience in:

o Oracle Database: SQL development, performance tuning, data modelling

o Oracle BI Publisher: Report design, template customization, data source integration

Technical Skills

• Mandatory:

o Oracle SQL & PL/SQL

o BI Publisher report development and deployment

o XML and XSLT for template customization

• Preferred:

o Experience with Oracle E-Business Suite or Fusion Applications

o Familiarity with data visualization principles

o Basic understanding of performance metrics and report optimization

Responsibilities

• Design, develop, and maintain BI Publisher reports based on business requirements

• Write and optimize SQL queries for data extraction and transformation

• Collaborate with stakeholders to ensure report accuracy and usability

• Troubleshoot and resolve issues related to data and report performance

• Document technical specifications and maintain version control

Read more
Pluginlive

at Pluginlive

1 recruiter
Harsha Saggi
Posted by Harsha Saggi
Chennai, Mumbai
4 - 6 yrs
₹10L - ₹20L / yr
skill iconPython
SQL
NOSQL Databases
Data architecture
Data modeling
+7 more

Role Overview:

We are seeking a talented and experienced Data Architect with strong data visualization capabilities to join our dynamic team in Mumbai. As a Data Architect, you will be responsible for designing, building, and managing our data infrastructure, ensuring its reliability, scalability, and performance. You will also play a crucial role in transforming complex data into insightful visualizations that drive business decisions. This role requires a deep understanding of data modeling, database technologies (particularly Oracle Cloud), data warehousing principles, and proficiency in data manipulation and visualization tools, including Python and SQL.


Responsibilities:

  • Design and implement robust and scalable data architectures, including data warehouses, data lakes, and operational data stores, primarily leveraging Oracle Cloud services.
  • Develop and maintain data models (conceptual, logical, and physical) that align with business requirements and ensure data integrity and consistency.
  • Define data governance policies and procedures to ensure data quality, security, and compliance.
  • Collaborate with data engineers to build and optimize ETL/ELT pipelines for efficient data ingestion, transformation, and loading.
  • Develop and execute data migration strategies to Oracle Cloud.
  • Utilize strong SQL skills to query, manipulate, and analyze large datasets from various sources.
  • Leverage Python and relevant libraries (e.g., Pandas, NumPy) for data cleaning, transformation, and analysis.
  • Design and develop interactive and insightful data visualizations using tools such as Tableau, Power BI, Matplotlib, Seaborn, or Plotly to communicate data-driven insights to both technical and non-technical stakeholders.
  • Work closely with business analysts and stakeholders to understand their data needs and translate them into effective data models and visualizations.
  • Ensure the performance and reliability of data visualization dashboards and reports.
  • Stay up-to-date with the latest trends and technologies in data architecture, cloud computing (especially Oracle Cloud), and data visualization.
  • Troubleshoot data-related issues and provide timely resolutions.
  • Document data architectures, data flows, and data visualization solutions.
  • Participate in the evaluation and selection of new data technologies and tools.


Qualifications:

  • Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field.
  • Proven experience (typically 5+ years) as a Data Architect, Data Modeler, or similar role. 

  • Deep understanding of data warehousing concepts, dimensional modeling (e.g., star schema, snowflake schema), and ETL/ELT processes.
  • Extensive experience working with relational databases, particularly Oracle, and proficiency in SQL.
  • Hands-on experience with Oracle Cloud data services (e.g., Autonomous Data Warehouse, Object Storage, Data Integration).
  • Strong programming skills in Python and experience with data manipulation and analysis libraries (e.g., Pandas, NumPy).
  • Demonstrated ability to create compelling and effective data visualizations using industry-standard tools (e.g., Tableau, Power BI, Matplotlib, Seaborn, Plotly).
  • Excellent analytical and problem-solving skills with the ability to interpret complex data and translate it into actionable insights. 
  • Strong communication and presentation skills, with the ability to effectively communicate technical concepts to non-technical audiences. 
  • Experience with data governance and data quality principles.
  • Familiarity with agile development methodologies.
  • Ability to work independently and collaboratively within a team environment.

Application Link- https://forms.gle/km7n2WipJhC2Lj2r5

Read more
Cymetrix Software

at Cymetrix Software

2 candid answers
Netra Shettigar
Posted by Netra Shettigar
Remote only
4 - 8 yrs
₹12L - ₹20L / yr
Data modeling
Dimensional modeling
Google Cloud Platform (GCP)

Advanced SQL and data modeling skills - designing dimensional layers, 3NF, denormalized views & a semantic layer; expertise in GCP services



Role & Responsibilities:

● Design and implement robust semantic layers for data systems on Google Cloud Platform (GCP)

● Develop and maintain complex data models, including dimensional models, 3NF structures, and denormalized views

● Write and optimize advanced SQL queries for data extraction, transformation, and analysis

● Utilize GCP services to create scalable and efficient data architectures

● Collaborate with cross-functional teams to translate business requirements (specified in mapping sheets or legacy DataStage jobs) into effective data models

● Implement and maintain data warehouses and data lakes on GCP

● Design and optimize ETL/ELT processes for large-scale data integration

● Ensure data quality, consistency, and integrity across all data models and semantic layers

● Develop and maintain documentation for data models, semantic layers, and data flows

● Participate in code reviews and implement best practices for data modeling and database design

● Optimize database performance and query execution on GCP

● Provide technical guidance and mentorship to junior team members

● Stay updated with the latest trends and advancements in data modeling, GCP services, and big data technologies

● Collaborate with data scientists and analysts to enable efficient data access and analysis

● Implement data governance and security measures within the semantic layer and data model
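
As a hedged illustration of the modeling work above, the sketch below materialises a denormalised semantic-layer view in BigQuery over a star-schema base using the google-cloud-bigquery client; project, dataset, and table names are assumptions.

# Minimal sketch: create a denormalised semantic-layer view in BigQuery over a
# dimensional (star-schema) base. Project/dataset/table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
CREATE OR REPLACE VIEW `example-project.semantic.sales_flat` AS
SELECT
  f.order_id,
  d.full_date       AS order_date,
  c.customer_name,
  c.region,
  f.quantity,
  f.net_amount
FROM `example-project.warehouse.fact_sales`   AS f
JOIN `example-project.warehouse.dim_date`     AS d ON f.date_key = d.date_key
JOIN `example-project.warehouse.dim_customer` AS c ON f.customer_key = c.customer_key
"""

client.query(sql).result()  # wait for the DDL job to finish
print("Semantic view refreshed.")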

Read more
Springer Capital
Remote only
0 - 1 yrs
₹5000 - ₹7000 / mo
PowerBI
Microsoft Excel
SQL
Attention to detail
Troubleshooting
+13 more

Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence.

The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.

Responsibilities:

  • Design, build, and maintain scalable data pipelines for structured and unstructured data sources
  • Develop ETL processes to collect, clean, and transform data from internal and external systems
  • Support integration of data into dashboards, analytics tools, and reporting systems
  • Collaborate with data analysts and software developers to improve data accessibility and performance
  • Document workflows and maintain data infrastructure best practices
  • Assist in identifying opportunities to automate repetitive data tasks


Read more
Springer Capital
Andrew Rose
Posted by Andrew Rose
Remote only
0 - 1 yrs
₹5000 - ₹7000 / mo
Attention to detail
Troubleshooting
Data modeling
Warehousing concepts
Google Cloud Platform (GCP)
+15 more

Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence. The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process. 

 

Responsibilities: 

▪ Design, build, and maintain scalable data pipelines for structured and unstructured data sources 

▪ Develop ETL processes to collect, clean, and transform data from internal and external systems 

▪ Support integration of data into dashboards, analytics tools, and reporting systems 

▪ Collaborate with data analysts and software developers to improve data accessibility and performance 

▪ Document workflows and maintain data infrastructure best practices 

▪ Assist in identifying opportunities to automate repetitive data tasks 

Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore)
7 - 12 yrs
₹15L - ₹20L / yr
Informatica MDM
Services Integration Framework (SIF)
Informatica MDM Hub Console
Provisioning Tool
ActiveVOS
+6 more

Job Title : Informatica MDM Developer

Experience : 7 to 10 Years

Location : Bangalore (3 Days Work From Office – ITPL Main Road, Mahadevapura)

Job Type : Full-time / Contract


Job Overview :

We are hiring an experienced Informatica MDM Developer to join our team in Bangalore. The ideal candidate will play a key role in implementing and customizing Master Data Management (MDM) solutions using Informatica MDM (Multi-Domain Edition), ensuring a trusted, unified view of enterprise data.


Mandatory Skills :

Informatica MDM (Multi-Domain Edition), ActiveVOS workflows, Java (User Exits), Services Integration Framework (SIF) APIs, SQL/PLSQL, Data Modeling, Informatica Data Quality (IDQ), MDM concepts (golden record, survivorship, trust, hierarchy).


Key Responsibilities :

  • Configure Informatica MDM Hub : subject area models, base objects, relationships.
  • Develop match/merge rules, trust/survivorship logic to create golden records.
  • Design workflows using ActiveVOS for data stewardship and exception handling.
  • Integrate with source/target systems (ERP, CRM, Data Lakes, APIs).
  • Customize user exits (Java), SIF APIs, and business entity services.
  • Implement and maintain data quality validations using IDQ.
  • Collaborate with cross-functional teams for governance alignment.
  • Support MDM jobs, synchronization, batch groups, and performance tuning.

Must-Have Skills :

  • 7 to 10 years of experience in Data Engineering or MDM.
  • 5+ years hands-on with Informatica MDM (Multi-Domain Edition).
  • Strong in MDM concepts : golden record, trust, survivorship, hierarchy.

Proficient in :

  • Informatica MDM Hub Console, Provisioning Tool, SIF.
  • ActiveVOS workflows, Java-based user exits.
  • SQL, PL/SQL, and data modeling.
  • Experience with system integration and Informatica Data Quality (IDQ).

Nice-to-Have :

  • Knowledge of Informatica EDC, Axon, cloud MDM (AWS/GCP/Azure).
  • Understanding of data lineage, GDPR/HIPAA compliance, and DevOps tools.
Read more
NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Bengaluru (Bangalore)
6 - 10 yrs
₹10L - ₹17L / yr
OLTP
OLAP
Data modeling

Required Skills:


● 6+ years of experience with hybrid data environments that leverage both distributed and relational database technologies to support analytics services (Oracle, IBM DB2, GCP)

● Solid understanding of data warehousing principles, architecture, and its implementation in complex environments.

● Good experience in OLTP and OLAP systems

● Excellent Data Analysis skills

● Good understanding of one or more ETL tools and data ingestion frameworks.

● Experience as a designer of complex Dimensional data models for analytics services

● Experience with various testing methodologies and user acceptance testing.

● Experience on one or more cloud platforms (e.g. AWS, Azure, GCP) ● Understanding of Data Quality and Data Governance

● Understanding of Industry Data Models

● Experience in leading the large teams

● Experience with processing large datasets from multiple sources.

● Ability to operate effectively and independently in a dynamic, fluid environment.

● Good understanding of agile methodology

● Strong verbal and written communications skills with experience in relating complex concepts to non-technical users.

● Demonstrated ability to exchange ideas and convey complex information clearly and concisely

● Proven ability to lead and drive projects and assignments to completion

● Exposure to Data Modeling Tools

○ ERwin ○ Power Designer ○ Business Glossary ○ ER/Studio ○ Enterprise Architect ○ MagicDraw

Read more
Tops Infosolutions
Ahmedabad
4 - 9 yrs
₹6L - ₹15L / yr
PowerBI
DAX
Looker
MySQL
Data modeling
+1 more

Job Summary:


Position : Senior Power BI Developer

Experience : 4+Years

Location : Ahmedabad - WFO


Key Responsibilities:

  • Design, develop, and maintain interactive and user-friendly Power BI dashboards and reports.
  • Translate business requirements into functional and technical specifications.
  • Perform data modeling, DAX calculations, and Power Query transformations.
  • Integrate data from multiple sources including SQL Server, Excel, SharePoint, and APIs.
  • Optimize Power BI datasets, reports, and dashboards for performance and usability.
  • Collaborate with business analysts, data engineers, and stakeholders to ensure data accuracy and relevance.
  • Ensure security and governance best practices in Power BI workspaces and datasets.
  • Provide ongoing support and troubleshooting for existing Power BI solutions.
  • Stay updated with Power BI updates, best practices, and industry trends.


Required Skills & Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, Data Analytics, or a related field.
  • 4+ years of professional experience in data analytics or business intelligence.
  • 3+ years of hands-on experience with Power BI (Power BI Desktop, Power BI Service).
  • Strong expertise in DAX, Power Query (M Language), and data modeling (star/snowflake schema).
  • Proficiency in writing complex SQL queries and optimizing them for performance.
  • Experience in working with large and complex datasets.
  • Experience in BigQuery, MySQL, and Looker Studio is a plus.
  • Ecommerce Industry Experience will be an added advantage.
  • Solid understanding of data warehousing concepts and ETL processes.
  • Experience with version control tools such as Power Apps & Power Automate would be a plus.


Preferred Qualifications:

  • Microsoft Power BI Certification (PL-300 or equivalent) is a plus.
  • Experience with Azure Data Services (Azure Data Factory, Azure SQL, Synapse).
  • Knowledge of other BI tools (Tableau, Qlik) is a plus.
  • Familiarity with scripting languages (Python, R) for data analysis is a bonus.
  • Experience integrating Power BI into web portals using Power BI Embedded.


Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Gurugram
6 - 10 yrs
₹12L - ₹22L / yr
Data engineering
Azure Data Factory (ADF)
Azure Cloud Services
SQL
Data modeling
+10 more

Job Title : Senior Data Engineer

Experience : 6 to 10 Years

Location : Gurgaon (Hybrid – 3 days office / 2 days WFH)

Notice Period : Immediate to 30 days (Buyout option available)


About the Role :

We are looking for an experienced Senior Data Engineer to join our Digital IT team in Gurgaon.

This role involves building scalable data pipelines, managing data architecture, and ensuring smooth data flow across the organization while maintaining high standards of security and compliance.


Mandatory Skills :

Azure Data Factory (ADF), Azure Cloud Services, SQL, Data Modelling, CI/CD tools, Git, Data Governance, RDBMS & NoSQL databases (e.g., SQL Server, PostgreSQL, Redis, ElasticSearch), Data Lake migration.


Key Responsibilities :

  • Design and develop secure, scalable end-to-end data pipelines using Azure Data Factory (ADF) and Azure services.
  • Build and optimize data architectures (including Medallion Architecture).
  • Collaborate with cross-functional teams on cybersecurity, data privacy (e.g., GDPR), and governance.
  • Manage structured/unstructured data migration to Data Lake.
  • Ensure CI/CD integration for data workflows and version control using Git.
  • Identify and integrate data sources (internal/external) in line with business needs.
  • Proactively highlight gaps and risks related to data compliance and integrity.
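
For illustration of the Medallion Architecture mentioned above, a hedged Spark sketch of the bronze/silver/gold flow that ADF pipelines typically orchestrate; the ADLS paths and column names are assumptions.

# Minimal sketch of Medallion (bronze/silver/gold) layering in Spark.
# The ADLS container/paths and schema are placeholder assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_demo").getOrCreate()
base = "abfss://lake@exampleaccount.dfs.core.windows.net"  # placeholder path

# Bronze: land raw files as-is, with load metadata.
bronze = (spark.read.json(f"{base}/landing/transactions/")
               .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save(f"{base}/bronze/transactions")

# Silver: deduplicate, conform types, drop obviously bad rows.
silver = (spark.read.format("delta").load(f"{base}/bronze/transactions")
               .dropDuplicates(["txn_id"])
               .withColumn("txn_date", F.to_date("txn_ts"))
               .filter(F.col("amount").isNotNull()))
silver.write.format("delta").mode("overwrite").save(f"{base}/silver/transactions")

# Gold: business-level aggregate ready for reporting.
gold = silver.groupBy("txn_date").agg(F.sum("amount").alias("daily_amount"))
gold.write.format("delta").mode("overwrite").save(f"{base}/gold/daily_amount")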

Required Skills :

  • Azure Data Factory (ADF) – Mandatory
  • Strong SQL and Data Modelling expertise.
  • Hands-on with Azure Cloud Services and data architecture.
  • Experience with CI/CD tools and version control (Git).
  • Good understanding of Data Governance practices.
  • Exposure to ETL/ELT pipelines and Data Lake migration.
  • Working knowledge of RDBMS and NoSQL databases (e.g., SQL Server, PostgreSQL, Redis, ElasticSearch).
  • Understanding of RESTful APIs, deployment on cloud/on-prem infrastructure.
  • Strong problem-solving, communication, and collaboration skills.

Additional Info :

  • Work Mode : Hybrid (No remote); relocation to Gurgaon required for non-NCR candidates.
  • Communication : Above-average verbal and written English skills.

Perks & Benefits :

  • 5 Days work week
  • Global exposure and leadership collaboration.
  • Health insurance, employee-friendly policies, training and development.
Read more
Bengaluru (Bangalore)
4 - 8 yrs
₹18L - ₹22L / yr
skill iconAmazon Web Services (AWS)
SQL
Datawarehousing
Data modeling
ERwin
+1 more

Job Description :

We are seeking a highly experienced Sr Data Modeler / Solution Architect to join the Data Architecture team at our Corporate Office in Bangalore. The ideal candidate will have 4 to 8 years of experience in data modeling and architecture, with deep expertise in the AWS cloud stack, data warehousing, and enterprise data modeling tools. This individual will be responsible for designing and creating enterprise-grade data models and driving the implementation of Layered Scalable Architecture or Medallion Architecture to support robust, scalable, and high-quality data marts across multiple business units.

This role will involve managing complex datasets from systems like PoS, ERP, CRM, and external sources, while optimizing performance and cost. You will also provide strategic leadership on data modeling standards, governance, and best practices, ensuring the foundation for analytics and reporting is solid and future-ready.


Key Responsibilities:

  • Design and deliver conceptual, logical, and physical data models using tools like ERwin.
  • Implement Layered Scalable Architecture / Medallion Architecture for building scalable, standardized data marts.
  • Optimize performance and cost of AWS-based data infrastructure (Redshift, S3, Glue, Lambda, etc.).
  • Collaborate with cross-functional teams (IT, business, analysts) to gather data requirements and ensure model alignment with KPIs and business logic.
  • Develop and optimize SQL code, materialized views, and stored procedures in AWS Redshift (see the illustrative sketch after this list).
  • Ensure data governance, lineage, and quality mechanisms are established across systems.
  • Lead and mentor technical teams in an Agile project delivery model.
  • Manage data layer creation and documentation: data dictionary, ER diagrams, purpose mapping.
  • Identify data gaps and availability issues with respect to source systems.
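For illustration only (not project code): the Redshift responsibilities above typically involve creating and refreshing materialized views from an orchestration script. The sketch below uses psycopg2; the DSN, table, and column names are hypothetical.

```python
# Illustrative sketch only: create (once) and refresh a Redshift materialized
# view from Python. The DSN and table/column names are hypothetical.
import psycopg2

CREATE_MV = """
CREATE MATERIALIZED VIEW mv_daily_sales AS
SELECT store_id, sale_date, SUM(net_amount) AS total_sales
FROM fact_sales
GROUP BY store_id, sale_date;
"""


def refresh_daily_sales(dsn: str, create: bool = False) -> None:
    """Optionally create, then refresh, a daily-sales materialized view."""
    conn = psycopg2.connect(dsn)
    conn.autocommit = True  # keep DDL and REFRESH outside an explicit transaction
    try:
        with conn.cursor() as cur:
            if create:
                cur.execute(CREATE_MV)  # one-time creation
            cur.execute("REFRESH MATERIALIZED VIEW mv_daily_sales;")
    finally:
        conn.close()


if __name__ == "__main__":
    # Placeholder DSN; real credentials would come from a secrets store.
    refresh_daily_sales(
        "host=example-cluster.redshift.amazonaws.com port=5439 "
        "dbname=analytics user=etl_user password=changeme"
    )
```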


Required Skills & Qualifications:

  • Bachelor's or Master's degree in Computer Science, IT, or a related field (B.E./B.Tech/M.E./M.Tech/MCA).
  • Minimum 4 years of experience in data modeling and architecture.
  • Proficiency with data modeling tools such as ERwin, with strong knowledge of forward and reverse engineering.
  • Deep expertise in SQL (including advanced SQL, stored procedures, and performance tuning).
  • Strong experience in data warehousing, RDBMS, and ETL tools like AWS Glue, IBM DataStage, or SAP Data Services.
  • Hands-on experience with AWS services: Redshift, S3, Glue, RDS, Lambda, Bedrock, and Q.
  • Good understanding of reporting tools such as Tableau, Power BI, or AWS QuickSight.
  • Exposure to DevOps/CI-CD pipelines, AI/ML, Gen AI, NLP, and polyglot programming is a plus.
  • Familiarity with data governance tools (e.g., ORION/EIIG).
  • Domain knowledge in Retail, Manufacturing, HR, or Finance preferred.
  • Excellent written and verbal communication skills.


Certifications (Preferred):

  • AWS Certification (e.g., AWS Certified Solutions Architect or Data Analytics – Specialty).
  • Data Governance or Data Modeling certifications (e.g., CDMP, Databricks, or TOGAF).

 

Mandatory Skills

AWS, Technical Architecture, AI/ML, SQL, Data Warehousing, Data Modelling

Risosu Consulting LLP
Posted by Vandana Saxena
Mumbai (Goregaon)
3 - 7 yrs
₹8L - ₹12L / yr
PowerBI
DAX
Data modeling

Job Description:

We are seeking a skilled Power BI Developer with a strong understanding of Capital Markets to join our data analytics team. The ideal candidate will be responsible for designing, developing, and maintaining interactive dashboards and reports that provide insights into trading, risk, and financial performance. This role requires experience working with capital market data sets and a solid grasp of financial instruments and market operations.


Key Responsibilities:

  • Develop interactive Power BI dashboards and reports tailored to capital markets (e.g., equities, derivatives, fixed income).
  • Connect to and integrate data from various sources such as Bloomberg, Reuters, SQL databases, and Excel (see the illustrative sketch after this list).
  • Translate business requirements into data models and visualizations that provide actionable insights.
  • Optimize Power BI reports for performance, usability, and scalability.
  • Work closely with business stakeholders (trading, risk, compliance) to understand KPIs and analytics needs.
  • Implement row-level security and data access controls.
  • Maintain data quality, lineage, and versioning documentation.
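For illustration only: since Python for data analysis is listed as a preferred skill, the sketch below shows the kind of pre-aggregation that might feed a Power BI dataset, pulling trades from a SQL source with pandas and summarising notional by instrument type. The connection string, table, and column names are hypothetical.

```python
# Illustrative sketch only: shaping trade data for a Power BI import.
# The connection string, table name, and columns are hypothetical.
import pandas as pd
from sqlalchemy import create_engine


def daily_notional_by_instrument(connection_string: str) -> pd.DataFrame:
    """Aggregate trade notional per day and instrument type."""
    engine = create_engine(connection_string)
    trades = pd.read_sql(
        "SELECT trade_date, instrument_type, quantity, price FROM trades",
        engine,
    )
    trades["notional"] = trades["quantity"] * trades["price"]
    return (
        trades.groupby(["trade_date", "instrument_type"], as_index=False)["notional"]
        .sum()
    )


if __name__ == "__main__":
    # Placeholder DSN; in practice this would point at the market-data warehouse.
    summary = daily_notional_by_instrument("mssql+pyodbc://user:pass@trading_dsn")
    summary.to_csv("daily_notional.csv", index=False)  # staging extract for Power BI
```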


Required Skills & Qualifications:

  • 3+ years of experience with Power BI (Power Query, DAX, data modeling).
  • Strong understanding of capital markets: trading workflows, market data, instruments (equities, bonds, derivatives, etc.).
  • Experience with SQL and working with large financial datasets.
  • Familiarity with risk metrics, trade lifecycle, and financial statement analysis.
  • Knowledge of data governance, security, and performance tuning in BI environments.
  • Excellent communication skills and ability to work with cross-functional teams.


Preferred Qualifications:

  • Experience with Python or R for data analysis.
  • Knowledge of investment banking or asset management reporting frameworks.
  • Exposure to cloud platforms like Azure, AWS, or GCP.
  • Certifications in Power BI or Capital Markets.


NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Remote only
10 - 15 yrs
₹10L - ₹18L / yr
Solution architecture
Denodo
Data Virtualization
Data architecture
SQL
+5 more

Job Title : Solution Architect – Denodo

Experience : 10+ Years

Location : Remote / Work from Home

Notice Period : Immediate joiners preferred


Job Overview :

We are looking for an experienced Solution Architect – Denodo to lead the design and implementation of data virtualization solutions. In this role, you will work closely with cross-functional teams to ensure our data architecture aligns with strategic business goals. The ideal candidate will bring deep expertise in Denodo, strong technical leadership, and a passion for driving data-driven decisions.


Mandatory Skills : Denodo, Data Virtualization, Data Architecture, SQL, Data Modeling, ETL, Data Integration, Performance Optimization, Communication Skills.


Key Responsibilities :

  • Architect and design scalable data virtualization solutions using Denodo.
  • Collaborate with business analysts and engineering teams to understand requirements and define technical specifications.
  • Ensure adherence to best practices in data governance, performance, and security.
  • Integrate Denodo with diverse data sources and optimize system performance.
  • Mentor and train team members on Denodo platform capabilities.
  • Lead tool evaluations and recommend suitable data integration technologies.
  • Stay updated with emerging trends in data virtualization and integration.

Required Qualifications :

  • Bachelor’s degree in Computer Science, IT, or a related field.
  • 10+ Years of experience in data architecture and integration.
  • Proven expertise in Denodo and data virtualization frameworks.
  • Strong proficiency in SQL and data modeling.
  • Hands-on experience with ETL processes and data integration tools.
  • Excellent communication, presentation, and stakeholder management skills.
  • Ability to lead technical discussions and influence architectural decisions.
  • Denodo or data architecture certifications are a strong plus.
Tops Infosolutions
Ahmedabad
4 - 10 yrs
₹9L - ₹15L / yr
PowerBI
DAX
Data modeling
DevOps
MySQL



Job Summary:


Position : Senior Power BI Developer

Experience : 4+ Years

Location : Ahmedabad - WFO


Key Responsibilities:

  • Design, develop, and maintain interactive and user-friendly Power BI dashboards and reports.
  • Translate business requirements into functional and technical specifications.
  • Perform data modeling, DAX calculations, and Power Query transformations.
  • Integrate data from multiple sources including SQL Server, Excel, SharePoint, and APIs.
  • Optimize Power BI datasets, reports, and dashboards for performance and usability.
  • Collaborate with business analysts, data engineers, and stakeholders to ensure data accuracy and relevance.
  • Ensure security and governance best practices in Power BI workspaces and datasets.
  • Provide ongoing support and troubleshooting for existing Power BI solutions.
  • Stay updated with Power BI updates, best practices, and industry trends.


Required Skills & Qualifications:

  • Bachelor's degree in Computer Science, Information Technology, Data Analytics, or a related field.
  • 4+ years of professional experience in data analytics or business intelligence.
  • 3+ years of hands-on experience with Power BI (Power BI Desktop, Power BI Service).
  • Strong expertise in DAX, Power Query (M Language), and data modeling (star/snowflake schema); see the illustrative sketch after this list.
  • Proficiency in writing complex SQL queries and optimizing them for performance.
  • Experience in working with large and complex datasets.
  • Experience with BigQuery, MySQL, and Looker Studio is a plus.
  • E-commerce industry experience will be an added advantage.
  • Solid understanding of data warehousing concepts and ETL processes.
  • Experience with Power Apps and Power Automate would be a plus.
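For illustration only: the star/snowflake-schema modelling mentioned above can be sketched in a few lines of pandas, splitting a flat extract into one dimension and one fact table. All column names here are made up.

```python
# Illustrative sketch only: deriving a simple star schema (one dimension plus
# one fact table) from a flat extract. Column names are hypothetical.
import pandas as pd


def build_star_schema(flat: pd.DataFrame):
    """Split a flat sales extract into dim_product and fact_sales."""
    dim_product = (
        flat[["product_code", "product_name", "category"]]
        .drop_duplicates()
        .reset_index(drop=True)
    )
    dim_product["product_key"] = dim_product.index + 1  # surrogate key

    fact_sales = flat.merge(
        dim_product[["product_code", "product_key"]], on="product_code"
    )[["product_key", "order_date", "quantity", "net_amount"]]

    return dim_product, fact_sales


if __name__ == "__main__":
    sample = pd.DataFrame({
        "product_code": ["P1", "P1", "P2"],
        "product_name": ["Widget", "Widget", "Gadget"],
        "category": ["Tools", "Tools", "Toys"],
        "order_date": ["2024-01-01", "2024-01-02", "2024-01-02"],
        "quantity": [2, 1, 5],
        "net_amount": [200.0, 100.0, 750.0],
    })
    dim, fact = build_star_schema(sample)
    print(dim)
    print(fact)
```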


Preferred Qualifications:

  • Microsoft Power BI Certification (PL-300 or equivalent) is a plus.
  • Experience with Azure Data Services (Azure Data Factory, Azure SQL, Synapse).
  • Knowledge of other BI tools (Tableau, Qlik) is a plus.
  • Familiarity with scripting languages (Python, R) for data analysis is a bonus.
  • Experience integrating Power BI into web portals using Power BI Embedded.



NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore)
5 - 12 yrs
₹15L - ₹35L / yr
NodeJS (Node.js)
Relational Database (RDBMS)
SQL
Data modeling
RESTful APIs
+4 more

Job Title : Senior Software Engineer – Backend

Experience Required : 6 to 12 Years

Location : Bengaluru (Hybrid – 3 Days Work From Office)

Number of Openings : 2

Work Hours : 11:00 AM – 8:00 PM IST

Notice Period : 30 Days Preferred

Work Location : SmartWorks The Cube, Karle Town SEZ, Building No. 5, Nagavara, Bangalore – 560045

Note : Face-to-face interview in Bangalore is mandatory during the second round.


Role Overview :

We are looking for an experienced Senior Backend Developer to join our growing team. This is a hands-on role focused on building cloud-based, scalable applications in the mortgage finance domain.


Key Responsibilities :

  • Design, develop, and maintain backend components for cloud-based web applications.
  • Contribute to architectural decisions involving microservices and distributed systems.
  • Work extensively with Node.js and RESTful APIs.
  • Implement scalable solutions using AWS services (e.g., Lambda, SQS, SNS, RDS); see the illustrative sketch after this list.
  • Utilize both relational and NoSQL databases effectively.
  • Collaborate with cross-functional teams to deliver robust and maintainable code.
  • Participate in agile development practices and deliver rapid iterations based on feedback.
  • Take ownership of system performance, scalability, and reliability.
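For illustration only: the AWS work above can be sketched with a Lambda-style handler that publishes work items to an SQS queue via boto3. It is written in Python (listed as a plus for this role; the primary stack is Node.js), and the queue URL and payload shape are hypothetical.

```python
# Illustrative sketch only: a Lambda-style handler that fans incoming records
# out to an SQS worker queue. Written in Python for brevity; the role's primary
# stack is Node.js. The queue URL and message shape are hypothetical.
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder


def handler(event, context):
    """Enqueue each incoming record as a separate SQS message."""
    records = event.get("records", [])
    for record in records:
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(record))
    return {"statusCode": 200, "body": json.dumps({"queued": len(records)})}
```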

Core Requirements :

  • 5+ Years of total experience in backend development.
  • Minimum 3 Years of experience in building scalable microservices or delivering large-scale products.
  • Strong expertise in Node.js and REST APIs.
  • Solid experience with RDBMS, SQL, and data modeling.
  • Good understanding of distributed systems, scalability, and availability.
  • Familiarity with AWS infrastructure and services.
  • Development experience in Python and/or Java is a plus.

Preferred Skills :

  • Experience with frontend frameworks like React.js or AngularJS.
  • Working knowledge of Docker and containerized applications.

Interview Process :

  1. Round 1 : Online technical assessment (1 hour)
  2. Round 2 : Virtual technical interview
  3. Round 3 : In-person interview at the Bangalore office (2 hours – mandatory)