
3+ Data archiving Jobs in India

Apply to 3+ Data archiving Jobs on CutShort.io. Find your next job, effortlessly. Browse Data archiving Jobs and apply today!

AI-First Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 10 yrs
₹20L - ₹45L / yr
Dremio
Lakehouse
Data architecture
Data engineering
SQL

ROLES AND RESPONSIBILITIES:

You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure), integrating with Snowflake/Databricks ecosystems where required.
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns (a reflection sketch follows this list).
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
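
To give the reflections bullet above a concrete shape: below is a minimal, hedged sketch of an aggregate reflection defined in Dremio SQL and submitted through a generic DB-API-style cursor. The dataset, column names, and exact DDL are illustrative assumptions (syntax varies by Dremio version), and reflections can equally be managed through Dremio's UI or REST API.

```python
# Hypothetical sketch: submitting aggregate-reflection DDL to Dremio.
# Dataset and column names are placeholders; any Dremio SQL client
# (JDBC, ODBC, or Arrow Flight) could submit the same statement.
REFLECTION_DDL = """
ALTER DATASET lake.sales
CREATE AGGREGATE REFLECTION sales_by_region
USING
  DIMENSIONS (region, order_date)
  MEASURES (amount (SUM, COUNT))
"""

def create_reflection(cursor) -> None:
    """cursor is assumed to be a DB-API-style cursor connected to Dremio."""
    cursor.execute(REFLECTION_DDL)
```

The point of an aggregate reflection like this is that Dremio can answer GROUP BY queries over region and order_date from a pre-computed columnar materialization instead of re-scanning raw files.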


IDEAL CANDIDATE:

  • Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, dbt, Kafka, Spark, etc.); a minimal Airflow sketch follows this list.
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.
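
As flavor for the pipeline-tooling bullet above, a minimal Airflow 2.x DAG sketch; the DAG id, task names, and the empty extract/publish callables are hypothetical placeholders, not part of the role's stated stack.

```python
# Minimal Airflow 2.x DAG sketch: a daily extract task feeding a
# publish step. Function bodies and all names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_s3():
    pass  # e.g., land source extracts as Parquet in object storage

def refresh_semantic_layer():
    pass  # e.g., refresh the views/reflections exposed to BI tools

with DAG(
    dag_id="lakehouse_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow >= 2.4; older releases use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    publish = PythonOperator(task_id="refresh_semantic_layer", python_callable=refresh_semantic_layer)
    extract >> publish
```

The `extract >> publish` dependency is all Airflow needs to run the two steps in order on the daily schedule.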


PREFERRED:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.
Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Delhi
10 - 15 yrs
₹105L - ₹140L / yr
Data engineering
Apache Spark
Apache
Apache Kafka
Java

MANDATORY:

  • Top-calibre Data Architect / Data Engineering Manager / Director profile
  • Must have 12+ YOE in data engineering roles, including at least 2 years in a leadership role
  • Must have 7+ YOE in hands-on development with Java (highly preferred), Python, Node.js, or Go
  • Must have strong experience with large-scale data technologies and tools such as HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, and Presto
  • Strong expertise in HLD and LLD to design scalable, maintainable data architectures
  • Must have managed a team of at least 5 data engineers (the leadership role should be evident in the CV)
  • Product companies only; high-scale, data-heavy companies preferred


PREFERRED:

  • Must be from a Tier-1 college; IITs preferred
  • Candidates must have spent a minimum of 3 years at each company
  • Must have 4+ recent YOE at high-growth product startups, having implemented data engineering systems there from an early stage


ROLES & RESPONSIBILITIES:

  • Lead and mentor a team of data engineers, ensuring high performance and career growth.
  • Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
  • Drive the development and implementation of data governance frameworks and best practices.
  • Work closely with cross-functional teams to define and execute a data roadmap.
  • Optimize data processing workflows for performance and cost efficiency (see the Spark sketch after this list).
  • Ensure data security, compliance, and quality across all data platforms.
  • Foster a culture of innovation and technical excellence within the data team.
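
As a hedged illustration of the workflow-optimization bullet above, a short PySpark sketch that prunes columns early, filters before aggregating, and writes date-partitioned Parquet; the paths, schema, and partition column are assumptions for the example.

```python
# PySpark sketch: read, prune, aggregate, and write partitioned Parquet.
# Paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_events").getOrCreate()

events = (
    spark.read.parquet("s3a://lake/raw/events/")            # columnar source
    .select("event_id", "user_id", "event_date", "amount")  # prune columns early
    .filter(F.col("event_date") >= "2024-01-01")            # filter before the shuffle
)

daily = events.groupBy("event_date").agg(
    F.countDistinct("user_id").alias("users"),
    F.sum("amount").alias("amount"),
)

# Partitioning by date keeps downstream scans cheap via partition pruning.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://lake/curated/daily_events/"
)
```

Column pruning and early filters shrink scan and shuffle volume, which is usually where both runtime and cloud cost accumulate.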


IDEAL CANDIDATE:

  • 10+ years of experience in software/data engineering, with at least 3 years in a leadership role.
  • Expertise in backend development with languages such as Java, PHP, Python, Node.js, or Go, plus working knowledge of JavaScript, HTML, and CSS.
  • Proficiency in SQL, Python, and Scala for data processing and analytics.
  • Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
  • Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice.
  • Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
  • Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery.
  • Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).
  • Experience with NoSQL and distributed databases such as Redis, Cassandra, MongoDB, and TiDB.
  • Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
  • Proven ability to drive technical strategy and align it with business objectives.
  • Strong leadership, communication, and stakeholder management skills.


PREFERRED QUALIFICATIONS:

  • Experience in machine learning infrastructure or MLOps is a plus.
  • Exposure to real-time data processing and analytics.
  • Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
  • Prior experience in a SaaS or high-growth tech company.
An agile and innovative global analytics company

Agency job
via Jobdost by Mamatha A
Mumbai
3 - 5 yrs
₹5L - ₹10L / yr
ETL
Business Intelligence (BI)
Power BI
SQL
UAT

Job Description – Developer (ETL + Database)

  • Develop, document, and support ETL mappings, database structures, and BI reports.
  • Perform unit testing of his/her own developments.
  • Participate in the UAT process and ensure quick resolution of any UAT issues.
  • Manage different environments and be responsible for proper deployment of code in all client environments.
  • Prepare release documents.
  • Prepare and maintain project documents as advised by Team Leads.

Skill-sets:

  • 3+ years of hands-on experience with ETL tools (Pentaho Spoon, Talend) and MS SQL Server, Oracle, and Sybase databases.
  • Ability to write complex SQL and database procedures.
  • Good knowledge and understanding of data warehouse concepts, ETL concepts, ETL loading strategies, data archiving, data reconciliation, ETL error handling, etc. (a minimal archiving sketch follows this list).
  • Problem solving.
  • Good communication skills – written and verbal.
  • Self-motivated team player, action- and result-oriented.
  • Ability to work successfully under tight project schedules.
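
Since data archiving and reconciliation are called out above, here is a minimal, self-contained sketch of one common archiving strategy: move rows older than a cutoff from a live table into an archive table inside a single transaction, then reconcile the counts. It uses Python's built-in sqlite3 purely so the example runs anywhere; the role's actual tools (Pentaho/Talend against MS SQL Server, Oracle, Sybase) differ, and all table names and the cutoff are assumed details.

```python
# Minimal archiving sketch using Python's built-in sqlite3.
# Table names, schema, and the cutoff date are illustrative placeholders.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, order_date TEXT, amount REAL);
    CREATE TABLE orders_archive (id INTEGER PRIMARY KEY, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES (1, '2019-03-01', 10.0), (2, '2024-06-01', 20.0);
""")

CUTOFF = "2020-01-01"
total_before = con.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

try:
    with con:  # one transaction: the insert and the delete commit together
        con.execute(
            "INSERT INTO orders_archive SELECT * FROM orders WHERE order_date < ?",
            (CUTOFF,),
        )
        con.execute("DELETE FROM orders WHERE order_date < ?", (CUTOFF,))
except sqlite3.Error as exc:  # basic ETL error handling: nothing half-moved
    raise RuntimeError(f"archiving failed and was rolled back: {exc}") from exc

# Reconciliation: no rows may be lost or duplicated by the move.
archived = con.execute("SELECT COUNT(*) FROM orders_archive").fetchone()[0]
remaining = con.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
assert archived + remaining == total_before, "row counts do not reconcile"
print(f"archived {archived} of {total_before} row(s) older than {CUTOFF}")
```

The same move-verify-commit pattern translates directly to stored procedures on the databases this role actually works with.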
