
Data Warehouse (DWH) Jobs in Bangalore (Bengaluru)

Explore top Data Warehouse (DWH) job opportunities in Bangalore (Bengaluru) at top companies and startups. All jobs are added by verified employees who can be contacted directly below.

Sr DevOps Engineer

Founded 2020
Products and services
Location: Pune, Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: ₹10L - ₹20L

What you will do
• Develop and maintain CI/CD tools to build and deploy scalable web and responsive applications in a production environment
• Design and implement monitoring solutions that identify both system bottlenecks and production issues (a minimal monitoring sketch follows this listing)
• Design and implement workflows for continuous integration, including provisioning, deployment, testing, and version control of the software
• Develop self-service solutions for the engineering team in order to deliver sites/software with great speed and quality:
  o Automating infrastructure creation
  o Providing easy-to-use solutions to the engineering team
• Conduct research on, test, and implement new metrics collection systems that can be reused and applied as engineering best practices:
  o Update our processes and design new processes as needed
  o Establish DevOps engineering best practices
  o Stay current with industry trends and source new ways for our business to improve
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Manage timely resolution of all critical and/or complex problems
• Maintain, monitor, and establish best practices for containerized environments
• Mentor new DevOps engineers

What you will bring
• The desire to work in a fast-paced environment
• 5+ years' experience building, maintaining, and deploying production infrastructure in AWS or other cloud providers
• Containerization experience with applications deployed on Docker and Kubernetes
• Understanding of NoSQL and relational databases with respect to deployment and horizontal scalability
• Demonstrated knowledge of distributed and scalable systems
• Experience with maintaining and deploying critical infrastructure components through Infrastructure-as-Code and configuration management tooling across multiple environments (Ansible, Terraform, etc.)
• Strong knowledge of DevOps and CI/CD pipelines (GitHub, BitBucket, Artifactory, etc.)
• Strong understanding of cloud and infrastructure components (server, storage, network, data, and applications) to deliver end-to-end cloud infrastructure architectures, designs, and recommendations, drawing on AWS services like S3, CloudFront, Kubernetes, RDS, and data warehouses to propose architectures for new use cases
• Ability to test our system integrity, implemented designs, application development, and other processes related to infrastructure, making improvements as needed

Good to have
• Experience with code quality tools, static or dynamic code analysis, and compliance, including undertaking and resolving issues identified by vulnerability and compliance scans of our infrastructure
• Good knowledge of REST/SOAP/JSON web service API implementation
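For illustration, here is a minimal sketch of the kind of bottleneck monitoring this role describes, written in Python with the psutil library. The thresholds and the alert() hook are assumptions for the example, not anything specified in the posting; a real solution would feed a proper alerting system.

# Minimal host-level bottleneck monitor -- a sketch, not a production design.
# Thresholds and alert() are hypothetical placeholders.
import time
import psutil

CPU_THRESHOLD = 90.0  # percent; illustrative value
MEM_THRESHOLD = 85.0  # percent; illustrative value

def alert(message: str) -> None:
    # Placeholder: a real deployment would page via PagerDuty, Slack, etc.
    print(f"ALERT: {message}")

def check_once() -> None:
    cpu = psutil.cpu_percent(interval=1)   # sample CPU usage over 1 second
    mem = psutil.virtual_memory().percent  # current memory utilisation
    if cpu > CPU_THRESHOLD:
        alert(f"CPU at {cpu:.1f}% exceeds {CPU_THRESHOLD}%")
    if mem > MEM_THRESHOLD:
        alert(f"Memory at {mem:.1f}% exceeds {MEM_THRESHOLD}%")

if __name__ == "__main__":
    while True:
        check_once()
        time.sleep(30)  # poll every 30 seconds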

Job posted by HR Ezeu

Data Engineer

Founded 2011
Products and services
Location: Bengaluru (Bangalore)
Experience: 6 - 9 years
Salary: ₹10L - ₹20L

AliveCor produces and delivers rich, informative, clinical-grade personal heart data that can be easily understood by patients - anytime, anywhere. As a Data Engineer, you will architect and improve our data infrastructure, develop new products, and support medical science while protecting user privacy. The ideal candidate has a strong background in engineering, as well as experience in statistical analysis, data analysis tools, SQL, and Python. You will work closely with our product management, engineering, and AI teams to architect and build fast and efficient databases, pipelines, and services (a minimal pipeline sketch follows this listing).

Responsibilities
- Work with structured and unstructured real-world medical data
- Design, build, and launch efficient and reliable data pipelines to move complex data
- Write high-quality, efficient, testable code in Python, C++, or Go
- Build data expertise and own data quality
- Collaborate with software and AI engineers to design and implement data architecture
- Integrate with 3rd-party analytics tools and APIs (Mixpanel, Google Analytics)
- Build horizontally scalable infrastructure to support ML training and data mining research

Qualifications and Skills
- B.E. in Computer Science or a related discipline, or related practical experience
- Minimum 6 years of experience with data engineering and data architecture
- Experience using advanced SQL and databases in a business environment with large-scale datasets (Hadoop, Hive, Presto)
- Experience with statistical modeling and analyzing large data sets
- Experience with product analytics tools and APIs (Mixpanel, Google Analytics)
- AWS expertise (S3, EC2, Lambda, Redshift, Athena) is a plus
- Experience developing scalable microservices is also a plus
- Familiarity with Kimball's data warehouse lifecycle

About Us
AliveCor is on a mission to define modern healthcare through data, design, and disruption. We've pioneered the creation of FDA-cleared machine-learning techniques and transformed wearable medtech to put proactive heart care at everyone's fingertips. Kardia is the most clinically validated mobile EKG technology. AliveCor was named one of the Top 10 Most Innovative Companies in Health for 2017 by Fast Company as part of the publication's annual ranking of the world's Most Innovative Companies. AliveCor was awarded the 2015 Tech Pioneer by the World Economic Forum and named one of the 50 Smartest Companies in 2015 by the MIT Technology Review. AliveCor recently announced a collaboration with Mayo Clinic that will result in new machine-learning capabilities to unlock previously hidden health indicators in EKG data, potentially improving heart health as well as overall health care for a variety of conditions. AliveCor is a privately held company headquartered in Mountain View, CA.

AliveCor is an equal opportunity employer and will not discriminate against any employee or applicant on the basis of age, colour, disability, gender, national origin, race, religion, sexual orientation, or any other classification protected by federal, state, or local law.

Watch the following video demonstrating our product - KardiaMobile: How's your heart? https://www.youtube.com/watch?v=8I9xosgA-Ig
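As a rough illustration of the "reliable data pipelines" this role centres on, here is a minimal extract-transform-load sketch in Python. The CSV source, column names, and SQLite sink are all invented for the example; AliveCor's actual stack is not described at this level in the posting.

# Minimal ETL sketch -- file name, schema, and sink are hypothetical.
import csv
import sqlite3

def extract(path: str) -> list[dict]:
    # Read raw rows; a production pipeline might pull from S3 or an API instead.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    # Keep only well-formed readings and normalise types.
    cleaned = []
    for r in rows:
        try:
            cleaned.append((r["patient_id"], float(r["bpm"]), r["recorded_at"]))
        except (KeyError, ValueError):
            continue  # drop malformed rows; a real pipeline would log them
    return cleaned

def load(rows: list[tuple], db_path: str) -> None:
    con = sqlite3.connect(db_path)
    with con:
        con.execute(
            "CREATE TABLE IF NOT EXISTS readings "
            "(patient_id TEXT, bpm REAL, recorded_at TEXT)"
        )
        con.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
    con.close()

if __name__ == "__main__":
    load(transform(extract("readings.csv")), "warehouse.db")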

Job posted by Anuj Seth

SQL- DWH Developer

Founded 2015
Products and services
Location: Bengaluru (Bangalore)
Experience: 5 - 9 years
Salary: ₹15L - ₹25L

Work days: Sunday through Thursday. Week off: Friday & Saturday. Day shift.

Key responsibilities:
- Create, design, and develop data models (a star-schema sketch follows this listing)
- Prepare plans for all ETL (Extract/Transform/Load) procedures and architectures
- Validate results and create business reports
- Monitor and tune data loads and queries
- Develop and prepare a schedule for a new data warehouse
- Analyze large databases and recommend appropriate optimizations
- Administer all requirements and design various functional specifications for data
- Provide support to the software development life cycle
- Prepare various code designs and ensure efficient implementation of the same
- Evaluate all code and ensure the quality of all project deliverables
- Monitor data warehouse work and provide subject matter expertise
- Hands-on BI practices, data structures, data modeling, and SQL skills

Hard skills for a Data Warehouse Developer:
- Hands-on experience with ETL tools, e.g., DataStage, Informatica, Pentaho, Talend
- Sound knowledge of SQL
- Experience with SQL databases such as Oracle, DB2, and SQL Server
- Experience using data warehouse platforms, e.g., SAP, Birst
- Experience designing, developing, and implementing data warehouse solutions
- Project management and system development methodology
- Ability to proactively research solutions and best practices

Soft skills for Data Warehouse Developers:
- Excellent analytical skills
- Excellent verbal and written communication
- Strong organization skills
- Ability to work on a team, as well as independently
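To make the data-modeling side concrete, here is a small star-schema sketch in Python using the standard-library sqlite3 module. The dimension, fact table, and data are invented for illustration; the role's actual platforms (DataStage, Informatica, SAP, Birst) are of course far richer.

# Star-schema load sketch -- table names, columns, and data are invented.
import sqlite3

DDL = """
CREATE TABLE dim_customer (
    customer_key  INTEGER PRIMARY KEY,
    customer_name TEXT
);
CREATE TABLE fact_sales (
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    sale_date    TEXT,
    amount       REAL
);
"""

con = sqlite3.connect(":memory:")
con.executescript(DDL)

# Load a dimension row, then fact rows referencing it.
con.execute("INSERT INTO dim_customer VALUES (1, 'Acme Ltd')")
con.executemany(
    "INSERT INTO fact_sales VALUES (?, ?, ?)",
    [(1, "2024-01-15", 4200.0), (1, "2024-01-16", 1300.0)],
)

# A typical warehouse query: revenue per customer.
for row in con.execute("""
    SELECT d.customer_name, SUM(f.amount) AS revenue
    FROM fact_sales f JOIN dim_customer d USING (customer_key)
    GROUP BY d.customer_name
"""):
    print(row)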

Job posted by Priyanka U

Support Engineer - App Ops / Data Ops

Founded 2001
Products and services
Location: Bengaluru (Bangalore)
Experience: 1 - 5 years
Salary: Best in industry

Required:
- 1-6 years of experience in the Application and/or Data Operations Support domain
- Expertise in doing RCA (root-cause analysis) and collaborating with development teams for CoE (correction of errors)
- Good communication and collaboration skills - liaison with product, operations, and business teams to understand requirements and provide data extracts and reports on a need basis
- Experience working in an enterprise environment, with good discipline and adherence to SLAs
- Good understanding of ticketing tools used to track and manage the lifecycle of multiple requests, e.g., JIRA, Service-Now, Rally, ChangeGear, etc.
- Orientation towards addressing the root cause of any issue, i.e., collaborate and follow up with development teams to ensure a permanent fix and prevention are given high priority
- Ability to create SOPs (standard operating procedures) in Confluence/Wiki so the support team has a good reference to utilise
- Self-starter and collaborator with the ability to independently acquire the knowledge required to succeed in the job

Specifically for the Data Ops Engineer role, the following experience is required:
- BI, reporting, and data warehousing domain
- Experience in production support for data queries - monitoring, analysis, and triage of issues
- Experience using BI tools like MicroStrategy, Qlik, Power BI, Business Objects
- Expertise in data analysis and writing SQL queries to provide insights into production data (a sample extract query follows this listing)
- Experience with relational database (RDBMS) and data-mart technologies like DB2, Redshift, SQL Server, MySQL, Netezza, etc.
- Ability to monitor ETL jobs in an AWS stack with tools like Tidal, Autosys, etc.
- Experience with big data platforms like Amazon Redshift

Responsibilities:
- Production support (Level 2):
  - Job failure resolution - re-runs based on SOPs
  - Report failure root-cause analysis and resolution
  - Address queries for existing reports and APIs
- Ad-hoc data requests for product and business stakeholders:
  - Transactions per day, per entity (merchant, card-type, card-category)
  - Custom extracts
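As an illustration of the ad-hoc "transactions per day, per entity" requests mentioned above, here is a small Python/SQLite sketch. The table, columns, and data are hypothetical; the production equivalent would run against Redshift, DB2, or similar.

# Ad-hoc extract sketch: transactions per day, per merchant and card type.
# Schema and data are invented for the example.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE transactions (merchant TEXT, card_type TEXT, txn_date TEXT, amount REAL);
INSERT INTO transactions VALUES
    ('Acme', 'credit', '2024-01-15', 120.0),
    ('Acme', 'debit',  '2024-01-15',  80.0),
    ('Zeta', 'credit', '2024-01-16', 300.0);
""")

query = """
SELECT txn_date, merchant, card_type,
       COUNT(*)    AS txn_count,
       SUM(amount) AS total_amount
FROM transactions
GROUP BY txn_date, merchant, card_type
ORDER BY txn_date, merchant
"""
for row in con.execute(query):
    print(row)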

Job posted by Srinivas Avanthkar

Sr Data Engineer (SQL / Spark)

Founded 2015
Products and services
Location: Bengaluru (Bangalore)
Experience: 4 - 9 years
Salary: ₹8L - ₹18L

Key Result Areas
- Create and maintain optimal data pipelines (a minimal Spark sketch follows this listing)
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Keep our data separated and secure
- Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader
- Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Work with data and analytics experts to strive for greater functionality in our data systems

Knowledge, Skills and Experience
Core skills: we are looking for a candidate with 5+ years of experience in a Data Engineer role, with experience using the following software/tools:
- Experience developing big data applications using Spark, Hive, Sqoop, Kafka, and MapReduce
- Experience with stream-processing systems: Spark Streaming, Storm, etc.
- Experience with object-oriented / functional scripting languages: Python, Scala, etc.
- Experience designing and building dimensional data models to improve the accessibility, efficiency, and quality of data
- Proficiency in writing advanced SQL and expertise in SQL performance tuning; experience with data science and machine learning tools and technologies is a plus
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
- Experience with Azure cloud services is a plus
- Financial services knowledge is a plus
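For flavour, here is a minimal PySpark batch-aggregation sketch of the kind of pipeline step this role involves. The input path, schema, and output location are assumptions for the example; it requires a local Spark installation (pip install pyspark).

# Minimal PySpark aggregation sketch -- input path and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("txn-agg-sketch").getOrCreate()

# Read a hypothetical CSV of transactions with a header row.
df = spark.read.csv("transactions.csv", header=True, inferSchema=True)

# Daily totals per account -- a typical aggregation step in a pipeline.
daily = (
    df.groupBy("account_id", "txn_date")
      .agg(
          F.sum("amount").alias("total_amount"),
          F.count("*").alias("txn_count"),
      )
)

daily.write.mode("overwrite").parquet("daily_totals.parquet")
spark.stop()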

Job posted by Jayanti M

Engineering Head

Founded 2019
Products and services
Location: Bengaluru (Bangalore)
Experience: 9 - 15 years
Salary: ₹50L - ₹70L

Main responsibilities:
+ Management of a growing technical team
+ Continued technical architecture design based on the product roadmap
+ Annual performance reviews
+ Work with DevOps to design and implement the product infrastructure

Strategic:
+ Testing strategy
+ Security policy
+ Performance and performance-testing policy
+ Logging policy

Experience:
+ 9-15 years of experience, including managing teams of developers
+ Technical and architectural expertise, having evolved a growing code base, technology stack, and architecture over many years
+ Have delivered distributed cloud applications
+ Understand the value of high-quality code and can effectively manage technical debt
+ Stakeholder management
+ Work experience in consumer-focused early-stage (Series A, B) startups is a big plus

Other innate skills:
+ Great motivator of people, able to lead by example
+ Understand how to get the most out of people
+ Delivery of products to tight deadlines, but with a focus on high-quality code
+ Up-to-date knowledge of technical applications

Job posted by Jennifer Jocelyn

BI Developer (SQL writer for analytical queries)

Founded 2017
Products and services
via bipp
Location: Remote, Pune, Hyderabad, NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore), Mumbai
Experience: 3 - 9 years
Salary: ₹4L - ₹8L

Do NOT apply if you:
- Want to be a Power BI, Qlik, or Tableau-only developer
- Are a machine learning aspirant
- Are a data scientist
- Want to write Python scripts
- Want to do AI
- Want to do 'big' data
- Want to do Hadoop
- Are a fresh graduate

Apply if you:
- Write SQL for complicated analytical queries (an example follows this listing)
- Understand the client's existing business problem and map their needs to the schema that they have
- Can neatly disassemble the problem into components and solve the needs using SQL
- Have worked on existing BI products

You will develop solutions with our exciting new BI product for our clients. You should be very experienced and comfortable writing SQL against very complicated schemas to help answer business questions, and have an analytical thought process.
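As a taste of the analytical SQL the role asks for, here is a small window-function example run through Python's sqlite3 module (SQLite 3.25+ supports window functions). The schema and data are invented for illustration.

# Analytical SQL sketch: rank each customer's orders by value.
# Table and data are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (customer TEXT, order_id INTEGER, amount REAL);
INSERT INTO orders VALUES
    ('Acme', 1, 500.0), ('Acme', 2, 900.0),
    ('Zeta', 3, 250.0), ('Zeta', 4, 700.0);
""")

query = """
SELECT customer, order_id, amount,
       RANK() OVER (PARTITION BY customer ORDER BY amount DESC) AS value_rank,
       SUM(amount) OVER (PARTITION BY customer) AS customer_total
FROM orders
ORDER BY customer, value_rank
"""
for row in con.execute(query):
    print(row)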

Job posted by Vish Josh

Database Architect

Founded 2017
Products and services
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: ₹10L - ₹20L

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include developing big data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's data lake.

Key responsibilities:
- Create a GRAND data lake and warehouse which pools all the data from the different regions and stores of GRAND in GCC
- Ensure source data quality measurement, enrichment, and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure

Skills needed:
- Very strong in SQL, with demonstrated experience with RDBMSs; Unix shell scripting preferred (e.g., SQL, Postgres, MongoDB, etc.)
- Experience with UNIX and comfort working with the shell (bash or Korn shell preferred)
- Good understanding of data warehousing concepts and big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Align with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments
- Work with data delivery teams to set up new Hadoop users; this includes setting up Linux users and setting up and testing HDFS, Hive, Pig, and MapReduce access for the new users
- Cluster maintenance, as well as creation and removal of nodes, using tools like Ganglia, Nagios, Cloudera Manager Enterprise, and other tools
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screen Hadoop cluster job performance and do capacity planning
- Monitor Hadoop cluster connectivity and security
- File system management and monitoring
- HDFS support and maintenance
- Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required
- Define, develop, document, and maintain Hive-based ETL mappings and scripts (a sketch follows this listing)
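To illustrate the Hive-based ETL mappings listed above, here is a sketch that runs a Hive-style mapping through Spark SQL in Python. The database, table, and path names are all hypothetical, and it assumes a PySpark build with Hive support enabled.

# Hive-style ETL mapping sketch via Spark SQL -- names and paths are invented.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-etl-sketch")
    .enableHiveSupport()  # assumes a Hive-enabled Spark build
    .getOrCreate()
)

# Stage raw store data (hypothetical location) as a temporary view.
raw = spark.read.parquet("/data/raw/store_sales")
raw.createOrReplaceTempView("raw_store_sales")

# A typical mapping: cleanse and aggregate into a warehouse table.
# Assumes a database named `dwh` already exists in the metastore.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dwh.store_sales_clean AS
    SELECT store_id,
           CAST(sale_ts AS DATE) AS sale_date,
           SUM(amount)           AS daily_amount
    FROM raw_store_sales
    WHERE amount IS NOT NULL
    GROUP BY store_id, CAST(sale_ts AS DATE)
""")
spark.stop()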

Job posted by Rahul Malani
Why apply via CutShort?
Connect with actual hiring teams and get their fast response. No spam.