
Data Warehouse (DWH) Jobs in Bangalore (Bengaluru)

Explore top Data Warehouse (DWH) job opportunities in Bangalore (Bengaluru) at top companies and startups. All jobs are added by verified employees, who can be contacted directly below.

Data Warehousing Engineer - Big Data/ETL

Founded 2014
Products and services
Location: Remote, Bengaluru (Bangalore)
Experience: 3 - 10 years
Salary: ₹5,00,000 - ₹15,00,000

Must Have Skills:
- Solid knowledge of DWH, ETL and Big Data concepts
- Excellent SQL skills, including SQL analytics (window) functions (see the sketch after this description)
- Working experience with an ETL tool, e.g., SSIS / Informatica
- Working experience with Azure or AWS Big Data tools
- Experience implementing data jobs (batch / real-time streaming)
- Excellent written and verbal communication skills in English; self-motivated, with a strong sense of ownership and a readiness to learn new tools and technologies

Preferred Skills:
- Experience with PySpark / Spark SQL
- AWS data tools (AWS Glue, AWS Athena)
- Azure data tools (Azure Databricks, Azure Data Factory)

Other Skills:
- Knowledge of Azure Blob, Azure File Storage, AWS S3, Elasticsearch / Redis Search
- Domain/function knowledge (across pricing, promotions and assortment)
- Implementation experience with schema and data validator frameworks (Python / Java / SQL)
- Knowledge of DQS and MDM

Key Responsibilities:
- Independently work on ETL / DWH / Big Data projects
- Gather and process raw data at scale
- Design and develop data applications using selected tools and frameworks as required and requested
- Read, extract, transform, stage and load data into selected tools and frameworks as required and requested
- Perform tasks such as writing scripts, web scraping, calling APIs and writing SQL queries
- Work closely with the engineering team to integrate your work into our production systems
- Process unstructured data into a form suitable for analysis
- Analyse processed data
- Support business decisions with ad hoc analysis as needed
- Monitor data performance and modify infrastructure as needed

Responsibility: a smart resource with excellent communication skills.
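
The must-have list above names SQL analytics functions, and the preferred list names PySpark / Spark SQL. A minimal sketch combining the two, with invented table and column names (not code from this employer):

```python
# A running total per region via a SQL window function, run through Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dwh-demo").getOrCreate()

orders = spark.createDataFrame(
    [("north", "2024-01-01", 120.0), ("north", "2024-01-02", 80.0),
     ("south", "2024-01-01", 200.0), ("south", "2024-01-02", 150.0)],
    ["region", "order_date", "amount"],
)
orders.createOrReplaceTempView("orders")

# PARTITION BY / ORDER BY makes this a classic SQL analytics function.
spark.sql("""
    SELECT region, order_date, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY order_date)
             AS running_total
    FROM orders
""").show()
```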

Job posted by Vishal Sharma

Data Engineer

Founded 2014
Products and services
Location: Remote, Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: ₹5,00,000 - ₹10,00,000

Basic Qualifications:
- Working knowledge of AWS Redshift
- Minimum 1 year designing and implementing a fully operational, production-grade, large-scale data solution on Snowflake Data Warehouse
- 3 years of hands-on experience building productized data ingestion and processing pipelines using Spark, Scala and Python (a pipeline sketch follows below)
- 2 years of hands-on experience designing and implementing production-grade data warehousing solutions
- Expertise in and excellent understanding of Snowflake internals and the integration of Snowflake with other data processing and reporting technologies
- Excellent presentation and communication skills, both written and verbal
- Ability to problem-solve and architect in an environment with unclear requirements
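
As a hedged illustration of the Spark-based ingestion pipelines this listing asks about: a minimal batch job with invented paths, schema and rules. The final warehouse load (Snowflake, Redshift) would use a connector or COPY step not shown here.

```python
# Read raw CSV, deduplicate and clean, then stage as partitioned parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-events").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("s3://example-bucket/raw/events/"))  # hypothetical source path

cleaned = (raw
           .dropDuplicates(["event_id"])                    # assumed key column
           .filter(F.col("event_ts").isNotNull())
           .withColumn("event_date", F.to_date("event_ts")))

# Partitioned parquet is a common staging layout before the warehouse load.
(cleaned.write
 .mode("overwrite")
 .partitionBy("event_date")
 .parquet("s3://example-bucket/curated/events/"))
```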

Job posted by Vishal Sharma

Data Architect

Founded 2018
Products and services
Location: Bengaluru (Bangalore)
Experience: 10 - 15 years
Salary: ₹15,00,000 - ₹20,00,000

Hypersonix.ai is disrupting the Business Intelligence and Analytics space with AI, ML and NLP capabilities that drive specific business insights through a conversational user experience. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers in restaurants, hospitality and other industry verticals.

Hypersonix.ai is seeking a Data Evangelist who can work closely with customers to understand their data sources, acquire data and drive product success by delivering insights based on customer needs.

Primary Responsibilities:
- Lead and deliver the complete application lifecycle (design, development, deployment and support) for actionable BI and advanced analytics solutions
- Design and develop data models and ETL processes for structured and unstructured data distributed across multiple cloud platforms
- Develop and deliver solutions with data streaming capabilities for large volumes of data
- Design, code and maintain parts of the product and drive customer adoption
- Build a data acquisition strategy to onboard customer data with speed and accuracy
- Work both independently and with team members to develop, refine, implement and scale ETL processes
- Provide ongoing support and maintenance for live clients' data and analytics needs
- Define the data automation architecture to drive self-service data load capabilities

Required Qualifications:
- Bachelor's/Master's/Ph.D. in Computer Science, Information Systems, Data Science, Artificial Intelligence, Machine Learning or related disciplines
- 10+ years of experience guiding the development and implementation of data architecture in structured, unstructured and semi-structured data environments
- Highly proficient in Big Data, data architecture, data modeling, data warehousing, data wrangling, data integration, data testing and application performance tuning
- Experience with data engineering tools and platforms such as Kafka, Spark, Databricks, Flink, Storm, Druid and Hadoop
- Strong hands-on programming and scripting skills in the Big Data ecosystem (Python, Scala, Spark, etc.)
- Experience building batch and streaming ETL data pipelines using workflow management tools like Airflow, Luigi, NiFi, Talend, etc. (see the DAG sketch after this listing)
- Familiarity with cloud-based platforms like AWS, Azure or GCP
- Experience with cloud data warehouses like Redshift and Snowflake
- Proficient in writing complex SQL queries
- Excellent communication skills and prior experience working closely with customers
- Data-savvy: loves to understand large data trends and is passionate about data analysis
- Desire to learn about, explore and invent new tools for solving real-world problems using data

Desired Qualifications:
- Cloud computing experience, Amazon Web Services (AWS)
- Prior experience with data warehousing concepts and multi-dimensional data models
- Full command of analytics concepts including dimensions, KPIs, reports and dashboards
- Prior experience managing client implementations of analytics projects
- Knowledge of and prior experience using machine learning tools
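
A hedged companion to the Airflow bullet above: a minimal two-task DAG sketch (recent Airflow 2.x style; the task names and callables are placeholders, not Hypersonix code).

```python
# Extract-then-load dependency expressed as an Airflow DAG.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source")   # placeholder body

def load():
    print("load transformed data into the warehouse")  # placeholder body

with DAG(
    dag_id="daily_dwh_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # load runs only after extract succeeds
```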

Job posted by Gowshini Maheswaran

Data Modeler

Founded 2015
Products and services
Location: Bengaluru (Bangalore)
Experience: 7 - 9 years
Salary: ₹12,00,000 - ₹19,00,000

Sr. Data Modeler with over 7 years of modeling experience and solid knowledge of data warehousing and dimensional modeling.
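
For illustration only, a tiny star schema of the kind dimensional modeling produces: one fact table with two dimensions, built in SQLite so the sketch is self-contained. All table and column names are invented.

```python
# Minimal star schema: fact_sales joins to dim_date and dim_product by surrogate keys.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_date (
        date_key   INTEGER PRIMARY KEY,   -- e.g. 20240101
        full_date  TEXT,
        month      INTEGER,
        year       INTEGER
    );
    CREATE TABLE dim_product (
        product_key INTEGER PRIMARY KEY,
        sku         TEXT,
        category    TEXT
    );
    CREATE TABLE fact_sales (
        date_key    INTEGER REFERENCES dim_date(date_key),
        product_key INTEGER REFERENCES dim_product(product_key),
        units_sold  INTEGER,
        revenue     REAL
    );
""")
print("star schema created")
```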

Job posted by Harpreet kour

Data Engineer

Founded 2018
Products and services
via Nu-Pie
Location: Remote, Bengaluru (Bangalore)
Experience: 4 - 8 years
Salary: ₹4,00,000 - ₹16,00,000

Job Title: Data Engineer
Tech Job Family: DACI

- Bachelor's degree in Engineering, Computer Science, CIS or a related field (or equivalent work experience in a related field)
- 2 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering
- 1 year of experience on project(s) implementing solutions through development life cycles (SDLC)

Preferred Qualifications:
- Master's degree in Computer Science, CIS or a related field
- 2 years of IT experience developing and implementing business systems within an organization
- 4 years of experience working with defect or incident tracking software
- 4 years of experience with technical documentation in a software development environment
- 2 years of experience working with an IT Infrastructure Library (ITIL) framework
- 2 years of experience leading teams, with or without direct reports
- Experience with application and integration middleware
- Experience with database technologies

Data Engineering:
- 2 years of experience in Hadoop or any cloud Big Data components (specific to the Data Engineering role)
- Expertise in Java/Scala/Python, SQL, scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka or equivalent cloud Big Data components (specific to the Data Engineering role; see the streaming sketch after this listing)

BI Engineering:
- Expertise in MicroStrategy/Power BI/SQL, scripting, Teradata or an equivalent RDBMS, Hadoop (OLAP on Hadoop), dashboard development, mobile development (specific to the BI Engineering role)

Platform Engineering:
- 2 years of experience in Hadoop, NoSQL, RDBMS or any cloud Big Data components, Teradata, MicroStrategy (specific to the Platform Engineering role)
- Expertise in Python, SQL, scripting, Teradata, Hadoop utilities like Sqoop, Hive, Pig, MapReduce, Spark, Ambari, Ranger, Kafka or equivalent cloud Big Data components (specific to the Platform Engineering role)

Lowe's is an equal opportunity employer and administers all personnel practices without regard to race, color, religion, sex, age, national origin, disability, sexual orientation, gender identity or expression, marital status, veteran status, genetics or any other category protected under applicable law.
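
A hedged aside on the Spark Streaming / Kafka expertise the Data Engineering track lists: a minimal Structured Streaming sketch. The broker and topic are invented, and running it requires the spark-sql-kafka package on the classpath.

```python
# Read a Kafka topic as a stream and echo decoded payloads to the console.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
          .option("subscribe", "orders")                     # hypothetical topic
          .load())

# Kafka delivers key/value as binary; cast value to string before use.
query = (events.select(F.col("value").cast("string").alias("payload"))
         .writeStream
         .format("console")
         .outputMode("append")
         .start())
query.awaitTermination()
```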

Job posted by Sanjay Biswakarma

Senior ETL Developer

Founded 2018
Products and services
via Nu-Pie
Location: Bengaluru (Bangalore)
Experience: 5 - 8 years
Salary: ₹8,00,000 - ₹13,00,000

Minimum of 4 years' experience working on DW/ETL projects and expert hands-on working knowledge of ETL tools.

Experience with data management and data warehouse development:
- Star schemas, data vaults, RDBMS and ODS
- Change data capture and slowly changing dimensions (a Type-2 sketch follows this listing)
- Data governance, data quality and data stewardship
- Partitioning and tuning
- Survivorship, fuzzy matching and concurrency
- Vertical and horizontal scaling
- ELT and ETL; Spark, Hadoop, MPP, RDBMS
- Experience with DevOps architecture, implementation and operation
- Hands-on working knowledge of Unix/Linux; shell scripting and batch scripting
- Expert SQL and data analysis skills, including building complex SQL queries and the ability to debug and fix data issues
- Complex ETL program design and coding
- Good communication (oral and written) and interpersonal skills

Responsibilities:
- Work closely with business teams to understand their needs; participate in requirements gathering, create artifacts and seek business approval
- Help the business define new requirements; participate in end-user meetings to derive and define business requirements, propose cost-effective solutions for data analytics, and familiarize the team with customer needs, specifications, design targets and techniques to support task performance and delivery
- Propose good designs and solutions, adhering to the best design and standard practices
- Review and propose industry-best tools and technology for ever-changing business rules and data sets; conduct proofs of concept (POC) with new tools and technologies to derive convincing benchmarks
- Prepare the plan, design and document the architecture, high-level topology design and functional design; review them with customer IT managers and give the development team detailed knowledge of customer requirements, specifications, design standards and techniques
- Review code developed by other programmers; mentor, guide and monitor their work, ensuring adherence to programming and documentation policies
- Work with functional business analysts to ensure that application programs function as defined
- Capture user feedback/comments on the delivered systems and document it for the client and project manager's review
- Review all deliverables before final delivery to the client for quality adherence

Technologies (select based on requirement):
- Databases: Oracle, Teradata, Postgres, SQL Server, Big Data, Snowflake or Redshift
- Tools: Talend, Informatica, SSIS, Matillion, Glue or Azure Data Factory; utilities for bulk loading and extracting
- Languages: SQL, PL/SQL, T-SQL, Python, Java or Scala; J/ODBC, JSON
- Data virtualization and data services development; service delivery via REST and web services; data virtualization delivery with Denodo
- ELT, ETL; cloud certification (Azure); complex SQL queries
- Data ingestion, data modeling (domain), consumption (RDBMS)
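
The slowly-changing-dimensions point above, made concrete: a minimal Type-2 SCD sketch in pandas, with invented column names. Close out the current row and append the new version when a tracked attribute changes.

```python
# Type-2 SCD update: expire the old row, insert the new one with open validity.
import pandas as pd

dim = pd.DataFrame([
    {"customer_id": 1, "city": "Pune", "valid_from": "2023-01-01",
     "valid_to": None, "is_current": True},
])
change = {"customer_id": 1, "city": "Bengaluru", "effective": "2024-06-01"}

mask = (dim["customer_id"] == change["customer_id"]) & dim["is_current"]
if dim.loc[mask, "city"].iloc[0] != change["city"]:
    # Close out the current version as of the change date.
    dim.loc[mask, ["valid_to", "is_current"]] = [change["effective"], False]
    # Append the new current version.
    dim = pd.concat([dim, pd.DataFrame([{
        "customer_id": change["customer_id"], "city": change["city"],
        "valid_from": change["effective"], "valid_to": None,
        "is_current": True,
    }])], ignore_index=True)
print(dim)
```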

Job posted by Jerrin Thomas

Sr DevOps Engineer

Founded 2020
Products and services
Location: Pune, Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: ₹10,00,000 - ₹20,00,000

What you will do:
- Develop and maintain CI/CD tools to build and deploy scalable web and responsive applications in production environments
- Design and implement monitoring solutions that identify both system bottlenecks and production issues (see the monitoring sketch after this listing)
- Design and implement workflows for continuous integration, including provisioning, deployment, testing and version control of the software
- Develop self-service solutions for the engineering team to deliver sites/software with great speed and quality: automate infrastructure creation and provide easy-to-use solutions to the engineering team
- Conduct research on, test and implement new metrics-collection systems that can be reused and applied as engineering best practices: update existing processes and design new ones as needed, establish DevOps team best practices, and stay current with industry trends, sourcing new ways for the business to improve
- Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Manage timely resolution of all critical and/or complex problems
- Maintain, monitor and establish best practices for containerized environments
- Mentor new DevOps engineers

What you will bring:
- The desire to work in a fast-paced environment
- 5+ years' experience building, maintaining and deploying production infrastructure in AWS or other cloud providers
- Containerization experience with applications deployed on Docker and Kubernetes
- Understanding of NoSQL and relational databases with respect to deployment and horizontal scalability
- Demonstrated knowledge of distributed and scalable systems
- Experience maintaining and deploying critical infrastructure components through Infrastructure-as-Code and configuration management tooling across multiple environments (Ansible, Terraform, etc.)
- Strong knowledge of DevOps and CI/CD pipelines (GitHub, Bitbucket, Artifactory, etc.)
- Strong understanding of cloud and infrastructure components (server, storage, network, data and applications) to deliver end-to-end cloud infrastructure architectures, designs and recommendations, including AWS services like S3, CloudFront, Kubernetes, RDS and data warehouses, to propose architectures for new use cases
- Test system integrity, implemented designs, application developments and other processes related to infrastructure, making improvements as needed

Good to have:
- Experience with code-quality tools, static or dynamic code analysis and compliance, and resolving issues identified by vulnerability and compliance scans of the infrastructure
- Good knowledge of REST/SOAP/JSON web service API implementation
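
A hedged illustration of the monitoring duties above: a small sketch using the official kubernetes Python client to flag pods outside a healthy phase. It assumes a reachable cluster and a local kubeconfig, and is not this company's tooling.

```python
# List pods across all namespaces that are not Running or Succeeded.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```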

Job posted by HR Ezeu

SQL- DWH Developer

Founded 2015
Products and services
Location: Bengaluru (Bangalore)
Experience: 5 - 9 years
Salary: ₹15,00,000 - ₹25,00,000

Work days: Sunday through Thursday. Week off: Friday & Saturday. Day shift.

Key responsibilities:
- Create, design and develop data models
- Prepare plans for all ETL (Extract/Transform/Load) procedures and architectures
- Validate results and create business reports
- Monitor and tune data loads and queries (see the reconciliation sketch after this listing)
- Develop and prepare the schedule for a new data warehouse
- Analyze large databases and recommend appropriate optimizations
- Administer all requirements and design various functional specifications for data
- Provide support across the software development life cycle
- Prepare various code designs and ensure their efficient implementation
- Evaluate all code and ensure the quality of all project deliverables
- Monitor data warehouse work and provide subject matter expertise
- Hands-on BI practices, data structures, data modeling and SQL skills

Hard skills for a Data Warehouse Developer:
- Hands-on experience with ETL tools, e.g., DataStage, Informatica, Pentaho, Talend
- Sound knowledge of SQL; experience with SQL databases such as Oracle, DB2 and SQL Server
- Experience using data warehouse platforms, e.g., SAP, Birst
- Experience designing, developing and implementing data warehouse solutions
- Project management and system development methodology
- Ability to proactively research solutions and best practices

Soft skills for Data Warehouse Developers:
- Excellent analytical skills
- Excellent verbal and written communication
- Strong organization skills
- Ability to work on a team, as well as independently
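
To make the data-load monitoring duty concrete: a hedged row-count reconciliation between a staging table and its warehouse target. SQLite stands in for the real databases, and all names are invented.

```python
# Compare staging vs. warehouse row counts after a load and report the status.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_orders (id INTEGER);
    CREATE TABLE dwh_orders (id INTEGER);
    INSERT INTO stg_orders VALUES (1), (2), (3);
    INSERT INTO dwh_orders VALUES (1), (2), (3);
""")

src = conn.execute("SELECT COUNT(*) FROM stg_orders").fetchone()[0]
tgt = conn.execute("SELECT COUNT(*) FROM dwh_orders").fetchone()[0]
status = "OK" if src == tgt else "MISMATCH"
print(f"staging={src} warehouse={tgt} -> {status}")
```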

Job posted by Priyanka U

Support Engineer - App Ops / Data Ops

Founded 2001
Products and services
Location: Bengaluru (Bangalore)
Experience: 1 - 5 years
Salary: Best in industry

Required:
- 1-6 years of experience in the Application and/or Data Operations support domain
- Expertise in root-cause analysis (RCA) and collaborating with development teams on correction of errors (CoE)
- Good communication and collaboration skills: liaise with product, operations and business teams to understand requirements and provide data extracts and reports as needed
- Experience working in an enterprise environment with good discipline and adherence to SLAs
- Good understanding of ticketing tools (e.g., JIRA, Service-Now, Rally, Change-Gear) to track requests and manage the lifecycle of multiple requests
- Orientation towards addressing the root cause of any issue, i.e., collaborating and following up with development teams to ensure a permanent fix and prevention are given high priority
- Ability to create SOPs (system operating procedures) in Confluence/Wiki as a reliable reference for the support team
- Self-starter and collaborator with the ability to independently acquire the knowledge required to succeed in the job

Specifically for the Data Ops Engineer role, the following experience is required:
- BI, Reporting and Data Warehousing domain
- Production support for data queries: monitoring, analysis and triage of issues
- BI tools like MicroStrategy, Qlik, Power BI, Business Objects
- Data analysis and writing SQL queries to provide insights into production data (see the aggregation sketch after this listing)
- Relational database (RDBMS) and data-mart technologies like DB2, Redshift, SQL Server, MySQL, Netezza, etc.
- Monitoring ETL jobs in an AWS stack with tools like Tidal, Autosys, etc.
- Big Data platforms like Amazon Redshift

Responsibilities:
- Production support (Level 2): job-failure resolution and re-runs based on SOPs; root-cause analysis and resolution of report failures; addressing queries for existing reports and APIs
- Ad-hoc data requests for product and business stakeholders: transactions per day, per entity (merchant, card type, card category); custom extracts
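
The ad-hoc extracts named above (transactions per day, per entity) reduce to a grouped aggregate. A hedged pandas sketch over made-up data:

```python
# Transactions per day, per merchant and card type: count and total amount.
import pandas as pd

txns = pd.DataFrame({
    "txn_date": ["2024-06-01", "2024-06-01", "2024-06-02"],
    "merchant": ["acme", "zenith", "acme"],
    "card_type": ["credit", "debit", "credit"],
    "amount": [100.0, 250.0, 75.0],
})

report = (txns.groupby(["txn_date", "merchant", "card_type"])
          .agg(txn_count=("amount", "size"), total=("amount", "sum"))
          .reset_index())
print(report)
```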

Job posted by Srinivas Avanthkar

Engineering Head

Founded 2019
Products and services
Location: Bengaluru (Bangalore)
Experience: 9 - 15 years
Salary: ₹50,00,000 - ₹70,00,000

Main responsibilities:
- Management of a growing technical team
- Continued technical architecture design based on the product roadmap
- Annual performance reviews
- Work with DevOps to design and implement the product infrastructure

Strategic:
- Testing strategy
- Security policy
- Performance and performance-testing policy
- Logging policy

Experience:
- 9-15 years of experience, including managing teams of developers
- Technical and architectural expertise; has evolved a growing code base, technology stack and architecture over many years
- Has delivered distributed cloud applications
- Understands the value of high-quality code and can effectively manage technical debt
- Stakeholder management
- Work experience in consumer-focused early-stage (Series A, B) startups is a big plus

Other innate skills:
- Great motivator of people, able to lead by example
- Understands how to get the most out of people
- Delivery of products to tight deadlines, but with a focus on high-quality code
- Up-to-date knowledge of technical applications

Job posted by Jennifer Jocelyn

Database Architect

Founded 2017
Products and services
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: ₹10,00,000 - ₹20,00,000

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include developing Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key Responsibilities:
- Create a GRAND Data Lake and warehouse that pools data from GRAND's different regions and stores in GCC
- Ensure source data quality measurement, enrichment and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure

Skills Needed:
- Very strong SQL; demonstrated experience with RDBMS and Unix shell scripting (e.g., SQL, Postgres, MongoDB)
- Experience with Unix and comfort working with the shell (bash or Korn preferred)
- Good understanding of data warehousing concepts and Big Data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Align with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments
- Work with data delivery teams to set up new Hadoop users, including setting up Linux users and setting up and testing HDFS, Hive, Pig and MapReduce access for the new users
- Cluster maintenance, including creation and removal of nodes using tools like Ganglia, Nagios and Cloudera Manager Enterprise
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screen Hadoop cluster job performance and plan capacity
- Monitor Hadoop cluster connectivity and security (see the monitoring sketch after this listing)
- File system management and monitoring; HDFS support and maintenance
- Collaborate with application teams to install operating system and Hadoop updates, patches and version upgrades when required
- Define, develop, document and maintain Hive-based ETL mappings and scripts
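
A hedged sketch of the routine HDFS monitoring listed above: shell out to the standard hdfs CLI and surface the capacity summary. It assumes the Hadoop client tools are installed and the cluster is reachable, and is not GRAND's actual tooling.

```python
# Run `hdfs dfsadmin -report` and print only the cluster capacity summary lines.
import subprocess

result = subprocess.run(
    ["hdfs", "dfsadmin", "-report"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.splitlines():
    if line.startswith(("Configured Capacity", "DFS Used", "DFS Remaining")):
        print(line)
```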

Job posted by Rahul Malani