11+ Data archiving Jobs in Bangalore (Bengaluru) | Data archiving Job openings in Bangalore (Bengaluru)
ROLES AND RESPONSIBILITIES:
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
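As a rough illustration of the API integration work above, the sketch below prepares a query submission for Dremio's v3 REST SQL endpoint (`POST /api/v3/sql`) in Python. The host, token, and table names are placeholders, and this is a sketch rather than a reference implementation; production access would more likely go through JDBC or Arrow Flight.

```python
import json
from urllib import request

def build_sql_job(host: str, token: str, sql: str) -> request.Request:
    """Prepare (but do not send) a SQL job submission for Dremio's REST API.

    Assumes the v3 POST /api/v3/sql endpoint; host, token, and SQL
    are illustrative placeholders.
    """
    payload = json.dumps({"sql": sql}).encode("utf-8")
    return request.Request(
        url=f"{host}/api/v3/sql",
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",  # personal access token
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_sql_job(
    "http://dremio.example.com:9047",  # 9047 is Dremio's default REST port
    "PLACEHOLDER_TOKEN",
    "SELECT * FROM lake.curated.orders LIMIT 10",
)
print(req.full_url)      # http://dremio.example.com:9047/api/v3/sql
print(req.get_method())  # POST
```

Sending the request (e.g. via `urllib.request.urlopen`) would return a job id that is then polled for results.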
IDEAL CANDIDATE:
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
PREFERRED:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.
Job Title : Junior ELK Data Engineer
Experience Required : 3+ Years
Location : Bangalore (Work From Office / Hybrid as per project requirement)
Job Type : Full-Time
Joining : Immediate Joiners only
Job Summary :
We are seeking a Junior ELK Data Engineer with over 3 years of hands-on experience in the Elastic Stack (Elasticsearch, Logstash, Kibana, and Beats).
The ideal candidate will help design, develop, and optimize scalable data ingestion, indexing, and visualization solutions, contributing to the development of high-performance observability and analytics platforms for real-time monitoring and analysis.
Mandatory Skills :
Elastic Stack (Elasticsearch, Logstash, Kibana, Beats), real-time data ingestion, dashboard development, log processing, search optimization, system observability.
Key Responsibilities :
- Build and maintain data pipelines using Logstash, Beats, and Elasticsearch for real-time log ingestion and processing.
- Design and develop Kibana dashboards for effective visualization and alerting across various data sources.
- Optimize indexing strategies for large-scale distributed systems to ensure high search performance and reliability.
- Collaborate with DevOps and SRE teams to enable effective observability and monitoring solutions.
- Analyze system performance and troubleshoot issues related to logging and monitoring pipelines.
- Assist in configuring and maintaining ELK stack components in production and development environments.
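For orientation, searches against such indices are typically expressed in Elasticsearch's JSON query DSL. The sketch below builds a bool query body in Python; the field names follow the Elastic Common Schema but are otherwise illustrative, and in practice the body would be sent to the `_search` endpoint via an HTTP client or the official `elasticsearch` library.

```python
def build_log_search(service: str, level: str, since: str) -> dict:
    """Build an Elasticsearch bool query body for filtering service logs.

    Field names (service.name, log.level, @timestamp) follow the Elastic
    Common Schema; adjust them to your index mapping.
    """
    return {
        "query": {
            "bool": {
                "filter": [  # filter clauses are cacheable and not scored
                    {"term": {"service.name": service}},
                    {"term": {"log.level": level}},
                    {"range": {"@timestamp": {"gte": since}}},
                ]
            }
        },
        "sort": [{"@timestamp": "desc"}],  # newest events first
        "size": 100,
    }

body = build_log_search("checkout", "error", "now-15m")
```

Using `filter` rather than `must` for exact-match clauses is a common search-optimization choice, since unscored filters can be cached across queries.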
Preferred Qualifications:
- Experience in handling distributed systems logs and metrics at scale.
- Familiarity with scripting (Python/Shell) for data manipulation or automation.
- Exposure to cloud platforms (AWS, Azure, or GCP) is a plus.
- Understanding of containerized environments like Docker/Kubernetes.
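The scripting item above might look like the following minimal Python sketch, which parses a line in the common Apache access-log format into structured fields. The pattern and sample line are illustrative only.

```python
import re

# Common Log Format: host ident user [time] "request" status bytes
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+)'
)

def parse_line(line: str):
    """Return the parsed fields as a dict, or None if the line does not match."""
    m = LOG_PATTERN.match(line)
    if m is None:
        return None
    fields = m.groupdict()
    fields["status"] = int(fields["status"])  # numeric fields aid aggregation
    fields["bytes"] = int(fields["bytes"])
    return fields

sample = '10.0.0.1 - - [10/Oct/2024:13:55:36 +0000] "GET /health HTTP/1.1" 200 512'
parsed = parse_line(sample)
print(parsed["host"], parsed["status"])  # 10.0.0.1 200
```

In an ELK context this kind of transformation is usually done by a Logstash grok filter; a standalone script like this is handy for ad-hoc analysis or backfills.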
Review Criteria:
- Strong Dremio / Lakehouse Data Architect profile
- 5+ years of experience in Data Architecture / Data Engineering, with at least 3 years of hands-on experience in Dremio
- Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
- Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, and Iceberg, along with distributed query planning concepts
- Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
- Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
- Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
- Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
- Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies
Preferred:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments
🚀 Kickstart Your Career @ Bhanzu – India’s Fastest-Growing EdTech!
📍 Role: Business Development Trainee | Location: HSR Layout, Bangalore (WFO)
🔗 Apply Now: https://lnkd.in/dFiSRuHg
Join Bhanzu, founded by the 🌍 World’s Fastest Human Calculator, and be part of a mission to revolutionize how the world learns math!
💡 What’s in it for you?
✅ Real-time sales & strategy experience
✅ Fast-paced, high-growth startup culture
✅ Mentorship + rapid career acceleration
If you’ve got the passion to learn, hustle, and grow — this is your launchpad. 🚀
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, visit https://www.happyfox.com/
We are looking for an Automation Test Engineer with a natural flair for solving complex problems and making life easier for others. You'd be a part of our dynamic QA team responsible for maintaining and enhancing the quality of our products.
Responsibilities:
- Owning the quality of any deliverables including new features and enhancements
- Working closely with the product team in understanding the requirements and user workflow
- Writing highly efficient manual test cases and performing functional, ad-hoc and regression testing
- Designing, developing and executing automation test scripts
- Raising defects/bugs and tracking them till closure
Requirements:
- Bachelor's degree in engineering/IT/computer science or related field
- 2-4 years of relevant work experience in software testing
- Professional automation QA experience using Selenium WebDriver with Python and/or Java
- Experience in using a defect tracking system to report, track and resolve defects
- Good understanding of the Agile software development methodology (Kanban or Scrum)
- Passion for software quality assurance, problem detection and analysis
- Experience working in SaaS based product company (optional)
- Worked for a bootstrapped high-growth startup (optional)
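As a toy illustration of the test-design mindset above (independent of Selenium), here is a small Python unittest sketch. The helper function and its cases are hypothetical, made up purely for illustration.

```python
import unittest

def normalize_ticket_subject(subject: str) -> str:
    """Hypothetical helper under test: trim and collapse whitespace in a subject line."""
    return " ".join(subject.split())

class TestNormalizeTicketSubject(unittest.TestCase):
    """A tiny functional/regression suite for the helper above."""

    def test_trims_and_collapses(self):
        self.assertEqual(
            normalize_ticket_subject("  Printer   broken "), "Printer broken"
        )

    def test_empty_subject(self):
        self.assertEqual(normalize_ticket_subject(""), "")

# Run the suite programmatically (equivalent to `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNormalizeTicketSubject)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a real automation suite the same structure applies, with the helper replaced by page interactions driven through Selenium WebDriver.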
Responsibilities include creating HTML pages using standard Ionic classes and custom CSS, along with controller code in Angular 8+. Deep knowledge of core Angular development (routing and navigation, forms, the HTTP client, etc.) is a must.
Requirements:
• Required experience of 3 to 8 years
• Required Skill Set: Core Java, Java 8, Spring, Spring Boot, Database, Unit Testing & Rest API.
• Good to have: Microservices
• Familiarity with the concepts of MVC, JDBC, and RESTful services
• You will be responsible for requirement analysis, high-level design, coding, unit testing, and general quality assurance of web applications
• Knack for writing clean, readable Java code
• Understanding fundamental design principles behind a scalable application
• Good to have knowledge of Spring Security, and JPA
Responsibilities:
• Designing and implementing Java-based applications.
• Analyzing user requirements to inform application design.
• Defining application objectives and functionality.
• Aligning application design with business goals.
• Developing and testing software.
• Debugging and resolving technical problems that arise.
• Producing detailed design documentation.
• Recommending changes to existing Java infrastructure.
• Developing multimedia applications.
HBOX is a US-based digital health company enabling Primary Care Providers (PCPs) to capture true virtual care opportunities beyond telehealth. We enable PCPs to provide proactive and continuous care and to add new recurring monthly revenue streams without any upfront cost. With our unique distribution and business model, we are seeing fast acceptance and great adoption with our target customers. We have built a unique, industry-first virtual care platform for the PCP market, based on integrated hardware, cloud, and AI technologies. We are a US-focused, post-revenue company with customers in 7 US states, and we are growing extremely fast.
We are looking for an Android Developer who possesses a passion for pushing mobile technologies to the limits. This Android app developer will work with their US-based counterpart to design and build the next generation of our mobile applications. This job requires both Android System and UI development competence.
Responsibilities
Design and build advanced applications for the Android platform
Collaborate with cross-functional teams to define, design, and ship new features
Work with outside data sources and APIs
Unit-test code for robustness, including edge cases, usability, and general reliability
Work on bug fixing and improving application performance
Continuously discover, evaluate, and implement new technologies to maximize development efficiency
Requirements
BS/MS degree in Computer Science, Engineering or a related subject
Proven software development experience and Android skills development
Proven working experience in Android app development, with at least one published original Android app
Experience with Android SDK
Experience working with remote data via REST and JSON
Experience with third-party libraries and APIs
Working knowledge of the general mobile landscape, architectures, trends, and emerging technologies
Solid understanding of the full mobile development life cycle.
Be responsible for the design, development, quality, and delivery of initiatives for the team
Recruit great engineers, in collaboration with Velocity’s recruiting team
Define the technology roadmap for the team and liaise with the business teams and product managers to shape the business OKR planning
Oversee agile delivery processes and adoption of milestone-based execution; proactively lead stakeholders communications related to deliverables and risks
Research and identify emerging technologies to define long-term engineering initiatives
Lead and mentor your team, taking them to the next level of technical expertise and inspiring a team culture of ownership, accountability, data-based decisions, continuous improvement, and blameless retrospectives
Work closely with Engineering Leaders and Managers, Product Management, Business Development, and Operations teams, and enable them to deliver
Set up best practices for development and champion their adoption, while architecting and designing technically robust, flexible, and scalable solutions
Perform regular performance evaluations, and share and seek feedback.
What to bring
At least 5 years of experience in software development and delivery, with in-depth knowledge of software architectures; must have at least 2 years of experience leading small and mid-sized development teams in the relevant technologies (Golang, Python)
Great project management skills to lead and juggle multi-engineer projects and deliver high-quality projects on time
BTech, MTech, or PhD in Computer Science or a related technical discipline (or equivalent) from Tier 1 Institutions like IIT, NIT, BIT
Past experience with startups and fast paced environments is an added advantage
Excellent leadership skills to mentor the engineers under you.
Track record of leading productive engineering teams
You lead by example, by setting the right context, and by helping teammates do their best work
Deep understanding of technologies and architecture in a highly scalable and available set-up.
Good communication skills in verbal and written English.
Candidates from a product-company background are strongly preferred
Responsibilities:
- Able to collaborate with designers & developers
- Able to write consistent, subtle & concise language for the purpose of product communication.
- Able to write clear & concise micro-copy for all scenarios including informational, error & instructional.
- Love for both technology and communication
- Should be comfortable working in a fast-paced startup environment.
- Have knowledge of communication style guidelines, user experience, and the impact of UX writing.
- Should be willing to get involved in UX research
Requirements:
- Experience writing UX copy for an app or website (a plus)
- Able to understand the target audience and write effective/persuasive copy
- Able to understand the product: the what's and the why's.
- Need to have an interest in sustainable transportation.
- Any degree with relevant experience or interest.
- Need to be creative, engaging, and self-challenging.
Data Platform engineering at Uber is looking for a strong Technical Lead (Level 5a Engineer) who has built high-quality platforms and services that operate at scale. A 5a Engineer at Uber exhibits the following qualities:
- Demonstrate tech expertise: demonstrate technical skills to go very deep or broad in solving classes of problems or creating broadly leverageable solutions.
- Execute large-scale projects: define, plan, and execute complex and impactful projects; communicate the vision to peers and stakeholders.
- Collaborate across teams: act as a domain resource to engineers outside your team and help them leverage the right solutions; facilitate technical discussions and drive them to a consensus.
- Coach engineers: coach and mentor less experienced engineers and deeply invest in their learning and success; give and solicit feedback, both positive and negative, to help improve the entire team.
- Tech leadership: lead the effort to define best practices in your immediate team, and help the broader organization establish better technical or business processes.
What You’ll Do
- Build a scalable, reliable, operable and performant data analytics platform for Uber’s engineers, data scientists, products and operations teams.
- Work alongside the pioneers of big data systems such as Hive, Yarn, Spark, Presto, Kafka, Flink to build out a highly reliable, performant, easy to use software system for Uber’s planet scale of data.
- Become proficient in the multi-tenancy, resource isolation, abuse prevention, and self-serve debuggability aspects of a high-performance, large-scale service while building these capabilities for Uber's engineers and operations teams.
What You’ll Need
- 7+ years of experience building large-scale products and distributed systems in a high-caliber environment.
- Architecture: Identify and solve major architectural problems by going deep in your field or broad across different teams. Extend, improve, or, when needed, build solutions to address architectural gaps or technical debt.
- Software Engineering/Programming: Create frameworks and abstractions that are reliable and reusable. Advanced knowledge of at least one programming language, and happy to learn more. Our core languages are Java, Python, Go, and Scala.
- Platform Engineering: Solid understanding of distributed systems and operating systems fundamentals such as concurrency, multithreading, file systems, locking etc.
- Execution & Results: You tackle large technical projects/problems that are not clearly defined. You anticipate roadblocks and have strategies to de-risk timelines. You orchestrate work that spans multiple teams and keep your stakeholders informed.
- A team player: You believe that you can achieve more on a team, that the whole is greater than the sum of its parts. You rely on others' candid feedback for continuous improvement.
- Business acumen: You understand requirements beyond the written word. Whether you’re working on an API used by other developers, an internal tool consumed by our operation teams, or a feature used by millions of customers, your attention to details leads to a delightful user experience.
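The concurrency fundamentals listed under Platform Engineering can be illustrated with a minimal Python sketch: several threads perform a read-modify-write on a shared counter, and a lock serializes each update so the final total is deterministic. This is illustrative only; the requirement spans far more than this.

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n: int) -> None:
    """Increment the shared counter n times, holding the lock for each update."""
    global counter
    for _ in range(n):
        with lock:  # serialize the read-modify-write; += alone is not atomic
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000
```

Without the lock, interleaved updates can be lost and the final count would be nondeterministic; the same reasoning scales up to resource isolation and locking in distributed systems.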




