Data Warehouse (DWH) Jobs in Bangalore (Bengaluru)

Apply to 41+ Data Warehouse (DWH) Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Data Warehouse (DWH) Job opportunities across top companies like Google, Amazon & Adobe.

Wissen Technology
Posted by Vijayalakshmi Selvaraj
Bengaluru (Bangalore), Pune
5 - 10 yrs
Best in industry
ETL
SQL
Snowflake schema
Data Warehouse (DWH)

Job Description for QA Engineer:

  • 6-10 years of experience in ETL testing, Snowflake, and DWH concepts.
  • Strong SQL knowledge and debugging skills are a must.
  • Experience with Azure and Snowflake testing is a plus.
  • Experience with Qlik Replicate and Qlik Compose (Change Data Capture) tools is considered a plus.
  • Strong data warehousing concepts; experience with ETL tools such as Talend Cloud Data Integration and Pentaho/Kettle.
  • Experience with JIRA and the Xray defect management tool is good to have.
  • Exposure to the financial domain is considered a plus.
  • Testing data readiness (data quality) and addressing code or data issues.
  • Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions.
  • Demonstrated strong collaborative experience across regions (APAC, EMEA and NA) to effectively and efficiently identify the root cause of code/data issues and arrive at a permanent solution.
  • Prior experience with State Street and Charles River Development (CRD) is considered a plus.
  • Experience with tools such as PowerPoint, Excel, and SQL.
  • Exposure to third-party data providers such as Bloomberg, Reuters, MSCI and other rating agencies is a plus.
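ETL testing of the kind listed above usually starts with reconciling a source (staging) table against its warehouse target: row counts, plus the keys that never arrived. A minimal, illustrative sketch in Python, using sqlite3 and invented table/column names as a stand-in for the real source and Snowflake target:

```python
import sqlite3

def reconcile(conn, source_table, target_table, key_col):
    """Compare row counts between a staging table and its ETL target,
    and list keys that were never loaded."""
    cur = conn.cursor()
    src = cur.execute(f"SELECT COUNT(*) FROM {source_table}").fetchone()[0]
    tgt = cur.execute(f"SELECT COUNT(*) FROM {target_table}").fetchone()[0]
    # Keys present in the source but absent from the target (load gaps).
    missing = cur.execute(
        f"SELECT {key_col} FROM {source_table} "
        f"EXCEPT SELECT {key_col} FROM {target_table}"
    ).fetchall()
    return {"source_rows": src, "target_rows": tgt,
            "missing_keys": [r[0] for r in missing]}

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_trades (trade_id INTEGER, amount REAL);
    CREATE TABLE dwh_trades (trade_id INTEGER, amount REAL);
    INSERT INTO stg_trades VALUES (1, 100.0), (2, 250.5), (3, 75.0);
    INSERT INTO dwh_trades VALUES (1, 100.0), (2, 250.5);
""")
result = reconcile(conn, "stg_trades", "dwh_trades", "trade_id")
print(result)  # trade_id 3 is in staging but missing from the target
```

The same count/EXCEPT pattern is what testers typically script in Snowflake (MINUS) after each pipeline run.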


Key Attributes include:

  • Team player with a professional and positive approach
  • Creative, innovative and able to think outside the box
  • Strong attention to detail during root cause analysis and defect resolution
  • Self-motivated and self-sufficient
  • Effective communicator, both written and verbal
  • Brings a high level of energy and enthusiasm to generate excitement and motivate the team
  • Able to work under pressure with tight deadlines and/or multiple projects
  • Experience in negotiation and conflict resolution


Wissen Technology
Posted by Gloria Dsouza
Bengaluru (Bangalore)
5 - 12 yrs
₹15L - ₹15L / yr
Snowflake schema
SQL
Python
Spark
Data Warehouse (DWH)
  • As a data engineer, you will build systems that collect, manage, and convert raw data into usable information for data scientists and business analysts to interpret. Your ultimate goal is to make data accessible so organizations can optimize their performance.
  • Work closely with PMs and business analysts to build and improve data pipelines, and identify and model business objects.
  • Write scripts implementing data transformation, data structures, and metadata to bring structure to partially unstructured data and improve data quality.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL.
  • Own data pipelines: monitoring, testing, validating and ensuring meaningful data exists in the data warehouse with a high level of data quality.
  • What we look for in the candidate is strong analytical skills with the ability to collect, organise, analyse, and disseminate significant amounts of information with attention to detail and accuracy.
  • Create long-term and short-term design solutions through collaboration with colleagues.
  • Proactive in experimenting with new tools.
  • Strong programming skills in Python.
  • Skillset: Python, SQL, ETL frameworks, PySpark and Snowflake.
  • Strong communication and interpersonal skills to interact with senior-level management regarding the implementation of changes.
  • Willingness to learn and eagerness to contribute to projects.
  • Designing the data warehouse and the most appropriate DB schema for the data product.
  • Positive attitude and proactive problem-solving mindset.
  • Experience in building data pipelines and connectors.
  • Knowledge of AWS cloud services is preferred.
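Several of the points above (structuring partially unstructured data, improving data quality) boil down to a normalization step that parses raw records into typed rows and quarantines bad ones instead of failing the load. An illustrative sketch only; the field names, defaults, and cleaning rules are invented:

```python
import json
from datetime import datetime

RAW_EVENTS = [
    '{"user": "a1", "ts": "2023-05-01T10:00:00", "amount": "42.50"}',
    '{"user": "a2", "ts": "2023-05-01T10:05:00"}',  # missing amount field
    'not valid json at all',                         # corrupt record
]

def normalise(raw_lines):
    """Parse raw JSON lines into typed rows, routing unparseable
    records to a reject list instead of failing the whole pipeline."""
    rows, rejects = [], []
    for line in raw_lines:
        try:
            rec = json.loads(line)
            rows.append({
                "user": str(rec["user"]),
                "ts": datetime.fromisoformat(rec["ts"]),
                "amount": float(rec.get("amount", 0.0)),  # default for missing values
            })
        except (ValueError, KeyError):
            rejects.append(line)
    return rows, rejects

rows, rejects = normalise(RAW_EVENTS)
print(len(rows), len(rejects))  # 2 clean rows, 1 reject
```

In a real pipeline the reject list would land in a quarantine table or dead-letter location for later inspection rather than being silently dropped.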


AxionConnect Infosolutions Pvt Ltd
Posted by Shweta Sharma
Pune, Bengaluru (Bangalore), Hyderabad, Nagpur, Chennai
5.5 - 7 yrs
₹20L - ₹25L / yr
Django
Flask
Snowflake
Snowflake schema
SQL
+4 more

Job Location: Hyderabad/Bangalore/ Chennai/Pune/Nagpur

Notice period: Immediate - 15 days

 

Python Developer with Snowflake

 

Job Description :


  1. 5.5+ years of strong Python development experience with Snowflake.
  2. Strong hands-on experience with SQL and the ability to write complex queries.
  3. Strong understanding of how to connect to Snowflake using Python; should be able to handle any type of file.
  4. Development of data analysis and data processing engines using Python.
  5. Good experience in data transformation using Python.
  6. Experience in Snowflake data loads using Python.
  7. Experience in creating user-defined functions in Snowflake.
  8. SnowSQL implementation.
  9. Knowledge of query performance tuning will be an added advantage.
  10. Good understanding of data warehouse (DWH) concepts.
  11. Interpret/analyze business requirements and functional specifications.
  12. DBT, Fivetran, and AWS knowledge is good to have.
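On points 3 and 6 above: Snowflake's Python connector implements the DB-API 2.0 interface, so a batched load follows the generic driver pattern sketched below. sqlite3 is used purely as a runnable stand-in with an invented table; with Snowflake you would connect via `snowflake.connector.connect(...)`, use its `%s`-style parameter placeholders, and typically stage large files with PUT/COPY INTO rather than row inserts:

```python
import sqlite3  # stand-in: the Snowflake connector exposes the same DB-API shape

def load_rows(conn, table, rows, batch_size=2):
    """Insert rows in batches with executemany, committing per batch
    so a failure only rolls back the current batch."""
    cur = conn.cursor()
    for i in range(0, len(rows), batch_size):
        cur.executemany(f"INSERT INTO {table} VALUES (?, ?)",
                        rows[i:i + batch_size])
        conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
load_rows(conn, "orders", [(1, 9.99), (2, 19.5), (3, 5.0)])
count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # all three rows landed
```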
Data Semantics
Posted by Deepu Vijayan
Remote, Hyderabad, Bengaluru (Bangalore)
4 - 15 yrs
₹3L - ₹30L / yr
ETL
Informatica
Data Warehouse (DWH)
SQL Server Analysis Services (SSAS)
SQL Server Reporting Services (SSRS)
+4 more

This is a permanent opening with Data Semantics.

Data Semantics 


We are a product-based company and a Microsoft Gold Partner.

Data Semantics is an award-winning Data Science company with a vision to empower every organization to harness the full potential of its data assets. In order to achieve this, we provide Artificial Intelligence, Big Data and Data Warehousing solutions to enterprises across the globe. Data Semantics was listed as one of the top 20 Analytics companies by Silicon India in 2018 and as one of the Top 20 BI companies by CIO Review India in 2014. We are headquartered in Bangalore, India, with offices in 6 global locations including the USA, United Kingdom, Canada, United Arab Emirates (Dubai and Abu Dhabi), and Mumbai. Our mission is to enable our people to learn the art of data management and visualization to help our customers make quick and smart decisions.

 

Our Services include: 

Business Intelligence & Visualization

App and Data Modernization

Low Code Application Development

Artificial Intelligence

Internet of Things

Data Warehouse Modernization

Robotic Process Automation

Advanced Analytics

 

Our Products:

Sirius – World’s most agile conversational AI platform

Serina

Conversational Analytics

Contactless Attendance Management System

 

 

Company URL:   https://datasemantics.co 


JD:

MSBI

SSAS

SSRS

SSIS

Data Warehousing

SQL

Gipfel & Schnell Consultings Pvt Ltd
Posted by TanmayaKumar Pattanaik
Bengaluru (Bangalore)
3 - 9 yrs
₹9L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+10 more

Qualifications & Experience:


▪ 2 - 4 years overall experience in ETL, data pipelines, data warehouse development and database design

▪ Software solution development using Hadoop technologies such as MapReduce, Hive, Spark, Kafka, Yarn/Mesos, etc.

▪ Expert in SQL, worked on advanced SQL for at least 2+ years

▪ Good development skills in Java, Python or other languages

▪ Experience with EMR, S3

▪ Knowledge and exposure to BI applications, e.g. Tableau, Qlikview

▪ Comfortable working in an agile environment
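The "advanced SQL" expected in roles like this usually means CTEs and window functions. A small runnable illustration with an invented schema, using sqlite3 as a stand-in (the same query shape applies in Hive or Spark SQL): top earner per department.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (name TEXT, dept TEXT, salary INTEGER);
    INSERT INTO emp VALUES
        ('ana', 'eng', 120), ('bo', 'eng', 150),
        ('cy', 'ops', 90),  ('di', 'ops', 110);
""")
# Rank salaries within each department, then keep the top row per partition.
top = conn.execute("""
    WITH ranked AS (
        SELECT name, dept, salary,
               ROW_NUMBER() OVER (PARTITION BY dept ORDER BY salary DESC) AS rn
        FROM emp
    )
    SELECT dept, name FROM ranked WHERE rn = 1 ORDER BY dept
""").fetchall()
print(top)  # [('eng', 'bo'), ('ops', 'di')]
```

The window-function form avoids the correlated-subquery alternative and scales to "top N per group" by changing the `rn = 1` filter.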

Tredence
Posted by Rohit S
Chennai, Pune, Bengaluru (Bangalore), Gurugram
11 - 16 yrs
₹20L - ₹32L / yr
Data Warehouse (DWH)
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Data engineering
Data migration
+1 more
• Engages with the leadership of Tredence’s clients to identify critical business problems, define the need for data engineering solutions, and build strategy and roadmap
• S/he possesses wide exposure to the complete lifecycle of data, from creation to consumption
• S/he has in the past built repeatable tools / data models to solve specific business problems
• S/he should have hands-on experience of having worked on projects (either as a consultant or within a company) that needed them to:
o Provide consultation to senior client personnel
o Implement and enhance data warehouses or data lakes
o Work with business teams, or as part of the team, on process re-engineering driven by data analytics/insights
• Should have a deep appreciation of how data can be used in decision-making
• Should have perspective on newer ways of solving business problems, e.g. external data, innovative techniques, newer technology
• S/he must have a solution-creation mindset and the ability to design and enhance scalable data platforms to address the business need
• Working experience with data engineering tools for one or more cloud platforms: Snowflake, AWS/Azure/GCP
• Engage with technology teams from Tredence and clients to create last-mile connectivity of the solutions
o Should have experience of working with technology teams
• Demonstrated ability in thought leadership: articles/white papers/interviews
Mandatory Skills: Program Management, Data Warehouse, Data Lake, Analytics, Cloud Platform
Talent500

Agency job via Talent500 by ANSR by Raghu R
Bengaluru (Bangalore)
1 - 10 yrs
₹5L - ₹30L / yr
Python
ETL
SQL
SQL Server Reporting Services (SSRS)
Data Warehouse (DWH)
+6 more

A proficient, independent contributor who assists in the technical design, development, implementation, and support of data pipelines, and who is beginning to invest in less-experienced engineers.

Responsibilities:

- Design, Create and maintain on premise and cloud based data integration pipelines. 
- Assemble large, complex data sets that meet functional/non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data pipelines that enable the BI, Analytics and Data Science teams to build and optimize their systems.
- Assist in the onboarding, training and development of team members.
- Review code changes and pull requests for standardization and best practices.
- Evolve existing development into automated, scalable, resilient, self-serve platforms.
- Assist the team in design and requirements gathering for technical and non-technical work to drive the direction of projects.

 

Technical & Business Expertise:

- Hands-on integration experience with SSIS/MuleSoft
- Hands-on experience with Azure Synapse
- Proven advanced-level database development experience in SQL Server
- Proven advanced-level understanding of Data Lake concepts
- Proven intermediate-level proficiency in Python or a similar programming language
- Intermediate understanding of cloud platforms (GCP)
- Intermediate understanding of Data Warehousing
- Advanced understanding of source control (GitHub)

Sixt R&D

Agency job
Bengaluru (Bangalore)
5 - 8 yrs
₹11L - ₹14L / yr
SQL
Python
RESTful APIs
Business Intelligence (BI)
QuickSight
+6 more

Technical-Requirements: 

  • Bachelor's degree in Computer Science or a related technical field, and solid years of relevant experience.
  • A strong grasp of SQL/Presto and at least one scripting or programming language (Python preferred).
  • Experience with enterprise-class BI tools and their auditing, along with automation using REST APIs.
  • Experience with reporting tools: QuickSight (preferred, at least 2 years hands-on).
  • Tableau/Looker (either or both would suffice, with at least 5+ years hands-on).
  • 5+ years of experience with, and detailed knowledge of, data warehouse technical architectures, data modelling, infrastructure components, ETL/ELT and reporting/analytic tools and environments, data structures and hands-on SQL coding.
  • 5+ years of demonstrated quantitative and qualitative business intelligence experience.
  • Experience delivering significant business impact through product analysis.
  • 4+ years of large IT project delivery for BI-oriented projects using an agile framework.
  • 2+ years of working with very large data warehousing environments.
  • Experience in designing and delivering cross-functional custom reporting solutions.
  • Excellent oral and written communication skills, including the ability to communicate effectively with both technical and non-technical stakeholders.
  • Proven ability to meet tight deadlines, multi-task, and prioritize workload.
  • A work ethic based on a strong desire to exceed expectations.
  • Strong analytical and process-challenge skills.
xpressbees
Posted by Alfiya Khan
Pune, Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹25L / yr
Big Data
Data Warehouse (DWH)
Data modeling
Apache Spark
Data integration
+10 more
Company Profile
XpressBees – a logistics company started in 2015 – is amongst the fastest growing
companies of its sector. While we started off rather humbly in the space of
ecommerce B2C logistics, the last 5 years have seen us steadily progress towards
expanding our presence. Our vision to evolve into a strong full-service logistics
organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross
border operations. Our strong domain expertise and constant focus on meaningful
innovation have helped us rapidly evolve as the most trusted logistics partner of
India. We have progressively carved our way towards best-in-class technology
platforms, an extensive network reach, and a seamless last mile management
system. While on this aggressive growth path, we seek to become the one-stop-shop
for end-to-end logistics solutions. Our big focus areas for the very near future
include strengthening our presence as service providers of choice and leveraging the
power of technology to improve efficiencies for our clients.

Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform
and infrastructure to support high-quality and agile decision-making in our supply chain and logistics
workflows. You will define the way we collect and operationalize data (structured / unstructured), and
build production pipelines for our machine learning models and (RT, NRT, Batch) reporting &
dashboarding requirements. You will use your experience with modern cloud and data frameworks
to build products (with storage and serving systems) that drive optimisation and resilience in the
supply chain via data visibility, intelligent decision-making, insights, anomaly detection and prediction.

What You Will Do
• Design and develop data platform and data pipelines for reporting, dashboarding and
machine learning models. These pipelines would productionize machine learning models
and integrate with agent review tools.
• Meet the data completeness, correction and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support
business needs. Come up with logical and physical database designs across platforms (MPP,
MR, Hive/Pig) that are optimal for different use cases (structured/semi-structured).
Envision and implement the data modelling, physical design, and performance
optimization techniques/approaches required for the problem.
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines and envision and build their
successors.

Qualifications & Experience relevant for the role

• A bachelor's degree in Computer Science or related field with 6 to 9 years of technology
experience.
• Knowledge of Relational and NoSQL data stores, stream processing and micro-batching to
make technology & design choices.
• Strong experience in System Integration, Application Development, ETL, Data-Platform
projects. Talented across technologies used in the enterprise space.
• Software development experience, including:
• Expertise in relational and dimensional modelling
• Exposure across all SDLC processes
• Experience in cloud architecture (AWS)
• Proven track record of maintaining existing technical skills and developing new ones, so that
you can make strong contributions to deep architecture discussions around systems and
applications in the cloud (AWS).

• Characteristics of a forward thinker and self-starter that flourishes with new challenges
and adapts quickly to learning new knowledge
• Ability to work with cross-functional teams of consulting professionals across multiple projects.
• Knack for helping an organization to understand application architectures and integration
approaches, to architect advanced cloud-based solutions, and to help launch the build-out
of those systems
• Passion for educating, training, designing, and building end-to-end systems.
Rapidly growing fintech SaaS firm that propels business grow

Agency job via Jobdost by Mamatha A
Bengaluru (Bangalore)
5 - 10 yrs
₹30L - ₹45L / yr
Google Adwords
PPC
Data Warehouse (DWH)
SQL
Web Analytics
+7 more

What is the role?

We are looking for a Senior Performance Marketing manager (PPC/SEM) who will be responsible for paid advertising for this company across Google Ads, Social ads and other demand-gen channels.

Our ideal candidate has a blend of analytical and creative mindset, passionate about driving metrics while also being very sensitive to brand and user experience, and distinctive at operating in highly collaborative and cross-functional settings. This role partners closely with our Sales, product, design, and broader marketing teams.

Key responsibilities

  • Strategise, execute, monitor, and manage campaigns across multiple platforms such as Google Ads (incl. Search, Display & YouTube), Facebook Ads & LinkedIn Ads.
  • Oversee growth in performance campaigns to meet the brand's business goals and strategies.
  • Should have hands-on experience on managing landing pages, keyword plans, ad copies, display ads etc.
  • Should have extremely good analytical skills to figure out the signals in the campaigns and optimize the campaigns using these insights. 
  • Implement ongoing A/B and user experience testing for ads, quality score, placements, dynamic landing pages and measure their effectiveness.
  • Monitor campaign performance & budget pacing on a day-to-day basis. 
  • Measure the Campaign performance parameters methodically, analyze campaign performance, compile and present detailed reports with proactive insights.
  • Be informed on the latest trends, best practices, and standards in online advertising across demand-gen channels.
  • Perform Media Mix modeling. Design, develop, and monitor other digital media buying campaigns.

What are we looking for?

  • 5-10 years of pure PPC experience, preferably in a SaaS company managing annual budgets of more than 2 mn USD.
  • Highly comfortable with Google Ads Editor, LinkedIn Ads, Facebook Business Manager and the like.
  • Strong working knowledge of PPC automations/rules/scripts and best practices, with the ability to analyze campaign metrics in Excel/Data Studio and optimize campaigns with insights.
  • Experience working with ad channel APIs and other data APIs to deep-dive into metrics and make data-informed optimisations.
  • [Good to have] Working knowledge of SQL, data warehouses (BigQuery), data connectors/pipelines, blends/joins (for blending multiple data sources), etc.
  • In-depth experience with GA4. Clear understanding of web analytics.
  • Comfortable writing and editing content for ad copies, landing page snippets, descriptions, etc.
  • Experience running campaigns for the US, European, and global markets.

What can you look for?

A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts, yet maintain the quality of content, interact, and share your ideas and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at this company

We are

It is a rapidly growing fintech SaaS firm that propels business growth while focusing on human motivation. Backed by Giift and Apis Partners Growth Fund II, the company offers a suite of three products: Plum, Empuls, and Compass. The company works with more than 2000 clients across 10+ countries and over 2.5 million users. Headquartered in Bengaluru, the company is a 300+ strong team with four global offices in Dubai, San Francisco, Dublin, Singapore, and New Delhi.

Way forward

We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We however assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.

Velocity Services

Posted by Newali Hazarika
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹35L / yr
ETL
Informatica
Data Warehouse (DWH)
Data engineering
Oracle
+7 more

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of records into the data warehouse.

  • Implement data warehouse entities with common reusable data model designs, with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
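The data-quality capabilities mentioned above often reduce to a few automated checks run after each load: duplicate keys, null timestamps, freshness. A minimal sketch; the check set, field names, and thresholds are invented for illustration:

```python
from datetime import datetime, timedelta

def quality_checks(rows, key, ts_field, max_age=timedelta(days=1), now=None):
    """Run basic uniqueness/completeness/freshness checks on loaded rows;
    return a list of failure messages (empty means all checks passed)."""
    now = now or datetime.now()
    failures = []
    keys = [r[key] for r in rows]
    if len(keys) != len(set(keys)):
        failures.append("duplicate keys")
    if any(r.get(ts_field) is None for r in rows):
        failures.append(f"null {ts_field}")
    elif rows and now - max(r[ts_field] for r in rows) > max_age:
        failures.append("stale data")  # newest row is older than max_age
    return failures

now = datetime(2024, 1, 2)
rows = [
    {"id": 1, "loaded_at": datetime(2024, 1, 1, 23)},
    {"id": 1, "loaded_at": datetime(2024, 1, 1, 22)},  # duplicate key
]
failures = quality_checks(rows, "id", "loaded_at", now=now)
print(failures)  # the duplicate key is flagged; data is fresh enough
```

In practice tools like DBT tests or Great Expectations express the same checks declaratively, but the underlying logic is this simple.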

 

What To Bring

  • 5+ years of software development experience; startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 5+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced with formulating ideas, building proofs-of-concept (POCs), and converting them into production-ready projects

  • Experience building and deploying applications on on-premise and AWS or Google Cloud infrastructure

  • A basic understanding of Kubernetes & Docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

Impetus

Agency job via Impetus by Gangadhar TM
Bengaluru (Bangalore), Pune, Hyderabad, Indore, Noida, Gurugram
10 - 16 yrs
₹30L - ₹50L / yr
Big Data
Data Warehouse (DWH)
Product Management

Job Title: Product Manager

 

Job Description

Bachelor's or master’s degree in computer science or equivalent experience.
Worked as Product Owner before and took responsibility for a product or project delivery.
Well-versed with data warehouse modernization to Big Data and Cloud environments.
Good knowledge* of any of the Cloud (AWS/Azure/GCP) – Must Have
Practical experience with continuous integration and continuous delivery workflows.
Self-motivated with strong organizational/prioritization skills and ability to multi-task with close attention to detail.
Good communication skills
Experience in working within a distributed agile team
Experience in handling migration projects – Good to Have
 

*Data Ingestion, Processing, and Orchestration knowledge

 

Roles & Responsibilities


Responsible for coming up with innovative and novel ideas for the product.
Define product releases, features, and roadmap.
Collaborate with product teams on defining product objectives, including creating a product roadmap, delivery, market research, customer feedback, and stakeholder inputs.
Work with the Engineering teams to communicate release goals and be a part of the product lifecycle. Work closely with the UX and UI team to create the best user experience for the end customer.
Work with the Marketing team to define GTM activities.
Interface with Sales & Customer teams to identify customer needs and product gaps
Market and competition analysis activities.
Participate in the Agile ceremonies with the team, define epics, user stories, acceptance criteria
Ensure product usability from the end-user perspective

 

Mandatory Skills

Product Management, DWH, Big Data

Accion Labs

Posted by Anjali Mohandas
Remote, Bengaluru (Bangalore), Pune, Hyderabad, Mumbai
4 - 8 yrs
₹15L - ₹28L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+5 more

4-6 years of total experience in data warehousing and business intelligence

3+ years of solid Power BI experience (Power Query, M-Query, DAX, Aggregates)

2 years’ experience building Power BI using cloud data (Snowflake, Azure Synapse, SQL DB, data lake)

Strong experience building visually appealing UI/UX in Power BI

Understand how to design Power BI solutions for performance (composite models, incremental refresh, analysis services)

Experience building Power BI using large data in direct query mode

Expert SQL background (query building, stored procedure, optimizing performance)

A fast-growing SaaS commerce company permanent WFH & Office

Agency job via Jobdost by Mamatha A
Remote, Bengaluru (Bangalore)
6 - 9 yrs
₹25L - ₹40L / yr
Amazon Web Services (AWS)
Data Warehouse (DWH)
MySQL
NOSQL Databases
PostgreSQL
+4 more

Job Description :

A candidate who has a strong background in the design and implementation of scalable architecture and a good understanding of algorithms, data structures, and design patterns. The candidate must be ready to learn new tools, languages, and technologies.

Skills :

Microservices, MySQL/Postgres, Kafka/Message Queues, Elasticsearch, Data pipelines, AWS Cloud, Clickhouse/Redshift

What you need to succeed in this role

  • Minimum 6 years of experience
  • Good understanding of various database types: RDBMS, NoSQL, GraphDB, etc
  • Ability to build highly stable reliable APIs and backend services.
  • Should be familiar with distributed, high availability database systems
  • Experience with queuing systems like Kafka
  • Hands-on in cloud infrastructure AWS/GCP/Azure
  • A big plus if you know one or more of the following: Confluent KSQL, Kafka Connect, Kafka Streams
  • Hands-on experience with data warehouse/OLAP systems such as Redshift or ClickHouse is an added plus.
  • Good communication and interpersonal skills
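The queuing-plus-OLAP combination in the list above typically appears as a consumer that drains a topic and batch-inserts into the warehouse. A hedged sketch using the stdlib `queue` module as a stand-in for Kafka and sqlite3 for the OLAP store (with Kafka you would poll a consumer and commit offsets; with Redshift/ClickHouse you would use their drivers or bulk COPY):

```python
import queue
import sqlite3

def drain_to_warehouse(q, conn, batch_size=3):
    """Drain events from the queue and write them in batches;
    returns the number of rows written."""
    batch, written = [], 0
    while True:
        try:
            batch.append(q.get_nowait())
        except queue.Empty:
            break
        if len(batch) >= batch_size:
            conn.executemany("INSERT INTO events VALUES (?, ?)", batch)
            written += len(batch)
            batch = []
    if batch:  # flush the final partial batch
        conn.executemany("INSERT INTO events VALUES (?, ?)", batch)
        written += len(batch)
    return written

q = queue.Queue()
for i in range(5):
    q.put((i, f"event-{i}"))
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, payload TEXT)")
written = drain_to_warehouse(q, conn)
print(written)  # all five events reach the store
```

Batching matters because columnar OLAP stores pay a high fixed cost per insert statement; the same structure extends naturally to time- or size-based flush triggers.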

Benefits of joining us

  • Ability to join a small and growing team, and work with some of the coolest people you've ever met
  • Opportunity to make an impact, and leave your mark on this organization.
  • Competitive compensation, with the ability to shape your own career trajectory
  • Go Extra Mile with Learning and Development

What can you look for?

A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts, yet maintain the quality of content, interact and share your ideas and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at Xoxoday.

We are

A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore and Dublin. We have three products in our portfolio: Plum, Empuls and Compass. Works with over 1000 global clients. We help our clients in engaging and motivating their employees, sales teams, channel partners or consumers for better business results.

CoStrategix Technologies

Posted by Jayasimha Kulkarni
Remote, Bengaluru (Bangalore)
4 - 8 yrs
₹10L - ₹28L / yr
Data engineering
Data Structures
Programming
Python
C#
+3 more

 

Job Description - Sr Azure Data Engineer

 

 

Roles & Responsibilities:

  1. Hands-on programming in C# / .NET.
  2. Develop serverless applications using Azure Function Apps.
  3. Writing complex SQL Queries, Stored procedures, and Views. 
  4. Creating Data processing pipeline(s).
  5. Develop / Manage large-scale Data Warehousing and Data processing solutions.
  6. Provide clean, usable data and recommend data efficiency, quality, and data integrity.

 

Skills

  1. Should have working experience with C# / .NET.
  2. Proficient in writing SQL queries, stored procedures, and views.
  3. Should have worked on the Azure cloud stack.
  4. Should have working experience in developing serverless code.
  5. Must have worked on Azure Data Factory (mandatory).

 

Experience 

  1. 4+ years of relevant experience

 

Impetus Technologies

Posted by Gangadhar T.M
Bengaluru (Bangalore), Hyderabad, Pune, Indore, Gurugram, Noida
10 - 17 yrs
₹25L - ₹50L / yr
Product Management
Big Data
Data Warehouse (DWH)
ETL
Hi All,
Greetings! We are looking for a Product Manager for our data modernization product. We need a resource with good knowledge of Big Data/DWH, and strong stakeholder management and presentation skills.
A Pre-series A funded FinTech Company


Agency job
via GoHyre by Avik Majumder
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹30L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+6 more

Responsibilities:

  • Ensure and own Data integrity across distributed systems.
  • Extract, Transform and Load data from multiple systems for reporting into BI platform.
  • Create Data Sets and Data models to build intelligence upon.
  • Develop and own various integration tools and data points.
  • Hands-on development and/or design within the project in order to maintain timelines.
  • Work closely with the Project manager to deliver on business requirements OTIF (on time in full)
  • Understand the cross-functional business data points thoroughly and be SPOC for all data-related queries.
  • Work with both Web Analytics and Backend Data analytics.
  • Support the rest of the BI team in generating reports and analysis
  • Quickly learn and use Bespoke & third party SaaS reporting tools with little documentation.
  • Assist in presenting demos and preparing materials for Leadership.

 Requirements:

  • Strong experience in Datawarehouse modeling techniques and SQL queries
  • A good understanding of designing, developing, deploying, and maintaining Power BI report solutions
  • Ability to create KPIs, visualizations, reports, and dashboards based on business requirement
  • Knowledge and experience in prototyping, designing, and requirement analysis
  • Be able to implement row-level security on data and understand application security layer models in Power BI
  • Proficiency in making DAX queries in Power BI desktop.
  • Expertise in using advanced level calculations on data sets
  • Experience in the Fintech domain and stakeholder management.
Fintech Company


Agency job
via Jobdost by Sathish Kumar
Bengaluru (Bangalore)
2 - 4 yrs
₹7L - ₹12L / yr
Python
SQL
Data Warehouse (DWH)
Hadoop
Amazon Web Services (AWS)
+7 more

Purpose of Job:

Responsible for drawing insights from many sources of data to answer important business
questions and help the organization make better use of data in their daily activities.


Job Responsibilities:

We are looking for a smart and experienced Data Engineer 1 who can work with a senior
manager to
• Build DevOps solutions and CI/CD pipelines for code deployment
• Build unit test cases for APIs and code in Python
• Manage AWS resources including EC2, RDS, CloudWatch, Amazon Aurora, etc.
• Build and deliver high-quality data architecture and pipelines to support business and reporting needs
• Deliver on data architecture projects and implementation of next-generation BI solutions
• Interface with other teams to extract, transform, and load data from a wide variety of data sources
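The unit-testing responsibility above could look like the following sketch. The actual endpoints are not described in the posting, so get_user is a hypothetical handler standing in for a real API.

```python
# Hypothetical API handler and unit tests; the real endpoints are not
# described in the posting, so get_user stands in for one.
def get_user(user_id, db):
    """Return an HTTP-style response dict for a user lookup."""
    if user_id not in db:
        return {"status": 404, "body": None}
    return {"status": 200, "body": db[user_id]}

def test_get_user_found():
    db = {1: {"name": "asha"}}
    resp = get_user(1, db)
    assert resp["status"] == 200
    assert resp["body"]["name"] == "asha"

def test_get_user_missing():
    assert get_user(99, {})["status"] == 404

test_get_user_found()
test_get_user_missing()
print("all tests passed")
```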
Qualifications:
Education: MS/MTech/Btech graduates or equivalent with focus on data science and
quantitative fields (CS, Eng, Math, Eco)
Work Experience: Proven 1+ years of experience in data mining (SQL, ETL, data
warehouse, etc.) and using SQL databases

 

Skills
Technical Skills
• Proficient in Python and SQL. Familiarity with statistics or analytical techniques
• Data warehousing experience with Big Data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.)
• Working knowledge of tools and utilities - AWS, DevOps with Git, Selenium, Postman, Airflow, PySpark
Soft Skills
• Deep curiosity and humility
• Excellent storyteller and communicator
• Design thinking

Semperfi Solution

at Semperfi Solution

1 recruiter
Ambika Jituri
Posted by Ambika Jituri
Bengaluru (Bangalore)
0 - 1 yrs
₹3L - ₹6L / yr
Go Programming (Golang)
Ruby on Rails (ROR)
Ruby
Python
Java
+7 more

Job Description

Job Description SQL DBA - Trainee Analyst/Engineer

Experience : 0 to 1 Years

No.of Positions: 2

Job Location: Bangalore

Notice Period: Immediate / Max 15 Days

The candidate should have strong SQL knowledge. Key responsibilities:

    • Implement and maintain the database design
    • Create database objects (tables, indexes, etc.)
    • Write database procedures, functions, and triggers
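A self-contained sketch of the listed tasks (creating tables, indexes, and triggers). Python's sqlite3 stands in for SQL Server so the example runs anywhere, and all object names are invented.

```python
import sqlite3

# Sketch only: sqlite3 stands in for SQL Server; all names are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL NOT NULL);
    CREATE INDEX idx_accounts_balance ON accounts(balance);

    CREATE TABLE audit_log (account_id INTEGER, note TEXT);

    -- Trigger: record every insert into an audit table
    CREATE TRIGGER trg_account_insert AFTER INSERT ON accounts
    BEGIN
        INSERT INTO audit_log VALUES (NEW.id, 'created');
    END;
""")
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 500.0)")

log = conn.execute("SELECT * FROM audit_log").fetchall()
print(log)  # → [(1, 'created')]
```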

Good soft skills are a must (written and verbal communication)

Good team player

Ability to work in a 24x7 support model (rotation basis)

Strong fundamentals in Algorithms, OOPs and Data Structure

Should be flexible to support multiple IT platforms

Analytical Thinking

 

Additional Information :

Functional Area: IT Software - DBA, Datawarehousing

Role Category: Admin/ Maintenance/ Security/ Datawarehousing

Role: DBA

 

Education :

B.Tech/ B.E

Skills

SQL DBA, IMPLEMENTATION, SQL, DBMS, DATA WAREHOUSING

Amagi Media Labs

at Amagi Media Labs

3 recruiters
Rajesh C
Posted by Rajesh C
Bengaluru (Bangalore), Chennai
12 - 15 yrs
₹50L - ₹60L / yr
Data Science
Machine Learning (ML)
ETL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+5 more
Job Title: Data Architect
Job Location: Chennai

Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a
Data Architecture strategy across various Data Lake platforms. You will help develop
reference architecture and roadmaps to build highly available, scalable and distributed
data platforms using cloud based solutions to process high volume, high velocity and
wide variety of structured and unstructured data. This role is also responsible for driving
innovation, prototyping, and recommending solutions. Above all, you will influence how
users interact with Conde Nast’s industry-leading journalism.
Primary Responsibilities
Data Architect is responsible for
• Demonstrated technology and personal leadership experience in architecting,
designing, and building highly scalable solutions and products.
• Enterprise scale expertise in data management best practices such as data integration,
data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration
frameworks, highly scalable distributed systems using open source and emerging data
architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is
highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor
technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory
databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies
is desirable.
• This role requires 15+ years of data solution architecture, design and development
delivery experience.
• Solid experience in Agile methodologies (Kanban and SCRUM)
Required Skills
• Very Strong Experience in building Large Scale High Performance Data Platforms.
• Passionate about technology and delivering solutions for difficult and intricate
problems. Current on relational databases and NoSQL databases on cloud.
• Proven leadership skills, demonstrated ability to mentor, influence and partner with
cross teams to deliver scalable robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, Datastage etc) /ELT
and data integration technologies.
• Experience in any one of Object Oriented Programming (Java, Scala, Python) and
Spark.
• Creative view of markets and technologies combined with a passion to create the
future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and Data Lakes is mandatory.
• Good understanding of emerging technologies and its applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS etc.
• Understanding of Hadoop Architecture and Hive SQL
• Knowledge in any one of the workflow orchestration
• Understanding of Agile framework and delivery

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure in Workflow Orchestration like Airflow is a plus
● Exposure in any one of the NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a
plus
● Understanding of Digital web events, ad streams, context models

About Condé Nast

CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms - in other words, a staggering amount of user data. Condé Nast made the right
move to invest heavily in understanding this data and formed a whole new Data team
entirely dedicated to data processing, engineering, analytics, and visualization. This team
helps drive engagement, fuel process innovation, further content enrichment, and increase
market revenue. The Data team aimed to create a company culture where data was the
common language and facilitate an environment where insights shared in real-time could
improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The
team at Condé Nast Chennai works extensively with data to amplify its brands' digital
capabilities and boost online revenue. We are broadly divided into four groups, Data
Intelligence, Data Engineering, Data Science, and Operations (including Product and
Marketing Ops, Client Services) along with Data Strategy and monetization. The teams built
capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are
Condé Nast, and It Starts Here.
TekSystems


Agency job
via NDSSG DataSync Pvt Ltd by Hamza Bootwala
Bengaluru (Bangalore), Hyderabad
4 - 8 yrs
₹10L - ₹20L / yr
WebFOCUS
Snow flake schema
Sybase
Data Warehouse (DWH)
Oracle
+2 more

Required:
1) WebFOCUS BI Reporting
2) WebFOCUS Administration
3) Sybase or Oracle or SQL Server or Snowflake
4) DWH Skills

Nice to have:
1) Experience in SAP BO / Crystal report / SSRS / Power BI
2) Experience in Informix
3) Experience in ETL

Responsibilities:

• Technical knowledge regarding best practices of BI development / integration.
• Candidate must understand business processes, be a detail-oriented person and quickly grasp new concepts.
• Additionally, the candidate will have strong presentation, interpersonal, software development and work management skills.
• Strong Advanced SQL programming skills are required
• Proficient in MS Word, Excel, Access, and PowerPoint
• Experience working with one or more BI Reporting tools as Analyst/Developer.
• Knowledge of data mining techniques and procedures and knowing when their use is appropriate
• Ability to present complex information in an understandable and compelling manner.
• Experience converting reports from one reporting tool to another

A global provider of Business Process Management company


Agency job
via Jobdost by Saida Jabbar
Bengaluru (Bangalore), UK
5 - 10 yrs
₹15L - ₹25L / yr
Data Visualization
PowerBI
ADF
Business Intelligence (BI)
PySpark
+11 more

Power BI Developer

Senior visualization engineer with 5 years’ experience in Power BI to develop and deliver solutions that enable delivery of information to audiences in support of key business processes. In addition, hands-on experience with Azure data services like ADF and Databricks is a must.

Ensure code and design quality through execution of test plans and assist in development of standards & guidelines working closely with internal and external design, business, and technical counterparts.

Candidates should have worked in agile development environments.

Desired Competencies:

  • Should have minimum of 3 years project experience using Power BI on Azure stack.
  • Should have good understanding and working knowledge of Data Warehouse and Data Modelling.
  • Good hands-on experience of Power BI
  • Hands-on experience T-SQL/ DAX/ MDX/ SSIS
  • Data Warehousing on SQL Server (preferably 2016)
  • Experience in Azure Data Services – ADF, DataBricks & PySpark
  • Manage own workload with minimum supervision.
  • Take responsibility of projects or issues assigned to them
  • Be personable, flexible and a team player
  • Good written and verbal communications
  • Have a strong personality who will be able to operate directly with users
company operates on a software as a service-based (SaaS) mod


Agency job
via Jobdost by Saida Jabbar
Bengaluru (Bangalore)
8 - 15 yrs
₹12L - ₹16L / yr
MariaDB
Relational Database (RDBMS)
Databases
MySQL
Microsoft Windows Azure
+6 more
As a Senior Database Developer, you will design stable and reliable databases,
according to our company’s needs. You will be responsible for planning, developing,
testing, improving and maintaining new and existing databases to help users retrieve
data effectively. As part of our IT team, you will work closely with developers to ensure
system consistency. You will also collaborate with administrators and clients to
provide technical support and identify new requirements. Communication and
organization skills are key for this position, along with a problem-solving attitude.
You get to work with some of the best minds in the industry at a place where
opportunity lurks everywhere and in everything.
Responsibilities
Your responsibilities are as follows.
• Working cross functional teams to develop robust solutions aligned with the
business needs
• Maintaining communication, providing regular updates to the development
team ensuring solutions provided are fit for purpose
• Training other developers in the team on best practices and technologies
• Troubleshooting issues in the production environment understanding the root
cause and developing robust solutions
• Developing, implementing, and maintaining solutions that are both reliable and
scalable
• Capture data analysis requirements effectively and represent them formally
and visually through our data models.
• Maintaining effective database performance by identifying and resolving
production and application development problems
• Optimise the integration and installation of new releases
• Monitoring system performance, test, troubleshoot and integrating new
features
• Proactively recommending solutions to improve new and existing database
systems
• Providing database support by coding utilities, resolving user questions and
problems
• Ensuring compliance to database implementation procedures
• Performing code and design reviews as per the code review process
• Installing and organising information systems to guarantee company
functionality
• Preparing accurate documentation and reports
• Migration of data from legacy systems to new solutions
• Stakeholders’ analysis of our current clients, company operations and
applications, and programming requirements
• Collaborate with functional teams across the business to deliver end-to-end
products, features enabling enhanced performance
Required Qualifications
We are looking for individuals who are curious, excited about learning, and navigating
through the uncertainties and complexities that are associated with a growing
company. Some qualifications that we think would help you thrive in this role are:
• Minimum 8 Years of experience as a Database Administrator
• Strong knowledge of data structures and database systems
• In depth expertise and hands on experience with MySQL/MariaDB Database
Management System
• In depth expertise and hands on experience in database design, data
maintenance, database security, data analysis and mining
• Hands-on experience with at least one web-hosting platform such as Microsoft
Azure, AWS (Amazon Web Services) etc.
• Strong understanding of security principles and how they apply to web
applications
• Basic knowledge of networking, Desirable knowledge of business intelligence
• Desirable knowledge of data architectures related to data warehouse
implementations
• Strong interpersonal skills and a desire to work collaboratively to achieve
objectives
• Understanding of Agile methodologies
• Bachelor/Masters of CS/IT Engineering, BCA/MCA, B Sc/M Sc in CS/IT
Preferred Qualifications
• Sense of ownership and pride in your performance and its impact on company’s
success
• Critical thinker, Team player
• Excellent trouble-shooting and problem-solving skills
• Excellent analytical and Strong organisational skills
• Good time-management skills
• Great interpersonal and communication skills
Mobile Programming India Pvt Ltd

at Mobile Programming India Pvt Ltd

1 video
17 recruiters
Inderjit Kaur
Posted by Inderjit Kaur
Bengaluru (Bangalore), Chennai, Pune, Gurugram
4 - 8 yrs
₹9L - ₹14L / yr
ETL
Data Warehouse (DWH)
Data engineering
Data modeling
BRIEF JOB RESPONSIBILITIES:

• Responsible for designing, deploying, and maintaining analytics environment that processes data at scale
• Contribute design, configuration, deployment, and documentation for components that manage data ingestion, real time streaming, batch processing, data extraction, transformation, enrichment, and loading of data into a variety of cloud data platforms, including AWS and Microsoft Azure
• Identify gaps and improve the existing platform to improve quality, robustness, maintainability, and speed
• Evaluate new and upcoming big data solutions and make recommendations for adoption to extend our platform to meet advanced analytics use cases, such as predictive modeling and recommendation engines
• Data Modelling, Data Warehousing on cloud scale using cloud-native solutions.
• Perform development, QA, and dev-ops roles as needed to ensure total end to end responsibility of solutions

COMPETENCIES
• Experience building, maintaining, and improving Data Models / Processing Pipeline / routing in large scale environments
• Fluency in common query languages, API development, data transformation, and integration of data streams
• Strong experience with large dataset platforms such as (e.g. Amazon EMR, Amazon Redshift, AWS Lambda & Fargate, Amazon Athena, Azure SQL Database, Azure Database for PostgreSQL, Azure Cosmos DB , Data Bricks)
• Fluency in multiple programming languages, such as Python, Shell Scripting, SQL, Java, or similar languages and tools appropriate for large scale data processing.
• Experience with acquiring data from varied sources such as: API, data queues, flat-file, remote databases
• Understanding of traditional Data Warehouse components (e.g. ETL, Business Intelligence Tools)
• Creativity to go beyond current tools to deliver the best solution to the problem
Velocity Services
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹35L / yr
Data engineering
Data Engineer
Big Data
Big Data Engineer
Python
+10 more

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

Technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda etc. 

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of records into data warehouses.

  • Implement Data warehouse entities with common re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
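A dbt model is essentially a SQL SELECT materialized as a table or view. The following self-contained sketch mimics one such transformation plus a data-quality check, using Python's built-in sqlite3 in place of the warehouse; source and model names are invented.

```python
import sqlite3

# Sketch only: sqlite3 stands in for the warehouse, and a plain CREATE
# TABLE ... AS SELECT stands in for a dbt model. Names are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_payments (id INTEGER, merchant TEXT, amount_paise INTEGER);
    INSERT INTO raw_payments VALUES
        (1, 'm1', 15000), (2, 'm1', 5000), (3, 'm2', 700);
""")

# "Model": aggregate raw events into a reporting table (rupees per merchant)
conn.execute("""
    CREATE TABLE merchant_totals AS
    SELECT merchant, SUM(amount_paise) / 100.0 AS total_rupees
    FROM raw_payments GROUP BY merchant
""")

# Data-quality check, in the spirit of a dbt uniqueness test on the key
dupes = conn.execute(
    "SELECT merchant FROM merchant_totals GROUP BY merchant HAVING COUNT(*) > 1"
).fetchall()
assert dupes == [], "merchant key must be unique"

result = conn.execute(
    "SELECT * FROM merchant_totals ORDER BY merchant"
).fetchall()
print(result)  # → [('m1', 200.0), ('m2', 7.0)]
```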

 

What To Bring

  • 3+ years of software development experience, a startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 2+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced with the formulation of ideas; building proof-of-concept (POC) and converting them to production-ready projects

  • Experience building and deploying applications on on-premise and AWS or Google Cloud cloud-based infrastructure

  • Basic understanding of Kubernetes & docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

 

 

A global business process management company


Agency job
via Jobdost by Saida Jabbar
Gurugram, Pune, Mumbai, Bengaluru (Bangalore), Chennai, Nashik
4 - 12 yrs
₹12L - ₹15L / yr
Data engineering
Data modeling
data pipeline
Data integration
Data Warehouse (DWH)
+12 more

 

 

Designation – Deputy Manager - TS


Job Description

  1. Total of 8/9 years of development experience in Data Engineering. B1/BII role
  2. Minimum of 4/5 years in AWS Data Integrations, and should have very good data modelling skills.
  3. Should be very proficient in end to end AWS Data solution design, that not only includes strong data ingestion, integrations (both Data @ rest and Data in Motion) skills but also complete DevOps knowledge.
  4. Should have experience in delivering at least 4 Data Warehouse or Data Lake Solutions on AWS.
  5. Should be very strong experience on Glue, Lambda, Data Pipeline, Step functions, RDS, CloudFormation etc.
  6. Strong Python skills.
  7. Should be an expert in Cloud design principles, Performance tuning and cost modelling. AWS certifications will have an added advantage
  8. Should be a team player with Excellent communication and should be able to manage his work independently with minimal or no supervision.
  9. Life Science & Healthcare domain background will be a plus

Qualifications

BE/BTech/ME/MTech

 

A Product Company


Agency job
via wrackle by Lokesh M
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹26L / yr
Looker
Big Data
Hadoop
Spark
Apache Hive
+4 more
Job Title: Senior Data Engineer/Analyst
Location: Bengaluru
Department: - Engineering 

Bidgely is looking for an extraordinary and dynamic Senior Data Analyst to be part of its core team in Bangalore. You must have delivered exceptionally high-quality, robust products dealing with large data. Be part of a highly energetic and innovative team that believes nothing is impossible with some creativity and hard work. 

Responsibilities 
● Design and implement a high volume data analytics pipeline in Looker for Bidgely flagship product.
●  Implement data pipeline in Bidgely Data Lake
● Collaborate with product management and engineering teams to elicit & understand their requirements & challenges and develop potential solutions 
● Stay current with the latest tools, technology ideas and methodologies; share knowledge by clearly articulating results and ideas to key decision makers. 

Requirements 
● 3-5 years of strong experience in data analytics and in developing data pipelines. 
● Very good expertise in Looker 
● Strong in data modeling, developing SQL queries and optimizing queries. 
● Good knowledge of data warehouse (Amazon Redshift, BigQuery, Snowflake, Hive). 
● Good understanding of Big data applications (Hadoop, Spark, Hive, Airflow, S3, Cloudera) 
● Attention to details. Strong communication and collaboration skills.
● BS/MS in Computer Science or equivalent from premier institutes.
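Developing and optimizing SQL queries, as the requirements above call for, often starts with reading query plans. A minimal illustration with Python's built-in sqlite3, standing in for the warehouse engines named above (Redshift, BigQuery, Snowflake, and Hive each have their own EXPLAIN equivalents); the table and column names are invented.

```python
import sqlite3

# Sketch: inspect a query plan before and after adding an index.
# sqlite3 stands in for a real warehouse engine; names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, kwh REAL)")

def plan(sql):
    # Flatten EXPLAIN QUERY PLAN output rows into one readable string
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT SUM(kwh) FROM events WHERE user_id = 42"
before = plan(q)   # full table SCAN
conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
after = plan(q)    # now a SEARCH using idx_events_user

print(before)
print(after)
```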
India's best Short Video App


Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
4 - 12 yrs
₹25L - ₹50L / yr
Data engineering
Big Data
Spark
Apache Kafka
Apache Hive
+26 more
What Makes You a Great Fit for The Role?

You’re awesome at and will be responsible for
 
Extensive programming experience with cross-platform development of one of the following Java/SpringBoot, Javascript/Node.js, Express.js or Python
3-4 years of experience in big data analytics technologies like Storm, Spark/Spark streaming, Flink, AWS Kinesis, Kafka streaming, Hive, Druid, Presto, Elasticsearch, Airflow, etc.
3-4 years of experience in building high performance RPC services using different high performance paradigms: multi-threading, multi-processing, asynchronous programming (non-blocking IO), reactive programming
3-4 years of experience working with high throughput, low latency databases and cache layers like MongoDB, HBase, Cassandra, DynamoDB, Elasticache (Redis + Memcache)
Experience with designing and building high scale app backends and micro-services leveraging cloud native services on AWS like proxies, caches, CDNs, messaging systems, Serverless compute(e.g. lambda), monitoring and telemetry.
Strong understanding of distributed systems fundamentals around scalability, elasticity, availability, fault-tolerance.
Experience in analysing and improving the efficiency, scalability, and stability of distributed systems and backend micro services.
5-7 years of strong design/development experience in building massively large scale, high throughput low latency distributed internet systems and products.
Good experience in working with Hadoop and Big Data technologies like HDFS, Pig, Hive, Storm, HBase, Scribe, Zookeeper and NoSQL systems etc.
Agile methodologies, Sprint management, Roadmap, Mentoring, Documenting, Software architecture.
Liaison with Product Management, DevOps, QA, Client and other teams
 
Your Experience Across The Years in the Roles You’ve Played
 
Have a total of 5-7 years of experience, with 2-3 years in a startup.
Have B.Tech or M.Tech or equivalent academic qualification from premier institute.
Experience in Product companies working on Internet-scale applications is preferred
Thoroughly aware of cloud computing infrastructure on AWS leveraging cloud native service and infrastructure services to design solutions.
Follow Cloud Native Computing Foundation leveraging mature open source projects including understanding of containerisation/Kubernetes.
 
You are passionate about learning or growing your expertise in some or all of the following
Data Pipelines
Data Warehousing
Statistics
Metrics Development
 
We Value Engineers Who Are
 
Customer-focused: We believe that doing what’s right for the creator is ultimately what will drive our business forward.
Obsessed with Quality: Your Production code just works & scales linearly
Team players. You believe that more can be achieved together. You listen to feedback and also provide supportive feedback to help others grow/improve.
Pragmatic: We do things quickly to learn what our creators desire. You know when it’s appropriate to take shortcuts that don’t sacrifice quality or maintainability.
Owners: Engineers at Chingari know how to positively impact the business.
Fragma Data Systems

at Fragma Data Systems

8 recruiters
Evelyn Charles
Posted by Evelyn Charles
Remote, Bengaluru (Bangalore)
3.5 - 8 yrs
₹5L - ₹18L / yr
PySpark
Data engineering
Data Warehouse (DWH)
SQL
Spark
+1 more
Must-Have Skills:
• Good experience in PySpark - including DataFrame core functions and Spark SQL
• Good experience in SQL DBs - Be able to write queries including fair complexity.
• Should have excellent experience in Big Data programming for data transformation and aggregations
• Good at ELT architecture. Business rules processing and data extraction from Data Lake into data streams for business consumption.
• Good customer communication.
• Good analytical skills
 
 
Technology Skills (Good to Have):
  • Building and operationalizing large scale enterprise data solutions and applications using one or more of AZURE data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsights, Databricks, CosmosDB, EventHub/IOTHub.
  • Experience in migrating on-premise data warehouses to data platforms on AZURE cloud. 
  • Designing and implementing data engineering, ingestion, and transformation functions
  • Azure Synapse or Azure SQL data warehouse
  • Spark on Azure is available in HDInsight and Databricks
 
Good to Have: 
  • Experience with Azure Analysis Services
  • Experience in Power BI
  • Experience with third-party solutions like Attunity/Stream sets, Informatica
  • Experience with PreSales activities (Responding to RFPs, Executing Quick POCs)
  • Capacity Planning and Performance Tuning on Azure Stack and Spark.
Servian

at Servian

2 recruiters
sakshi nigam
Posted by sakshi nigam
Bengaluru (Bangalore)
2 - 8 yrs
₹10L - ₹25L / yr
Data engineering
ETL
Data Warehouse (DWH)
Powershell
DA
+7 more
Who we are
 
We are a consultant-led organisation. We invest heavily in our consultants to ensure they have the technical skills and commercial acumen to be successful in their work.
 
Our consultants have a passion for data and solving complex problems. They are curious, ambitious and experts in their fields. We have developed a first-rate team, so you will be supported and learn from the best.

About the role

  • Collaborating with a team of like-minded and experienced engineers for Tier 1 customers, you will focus on data engineering on large complex data projects. Your work will have an impact on platforms that handle crores of customers and millions of transactions daily.

  • As an engineer, you will use the latest cloud services to design and develop reusable core components and frameworks to modernise data integrations in a cloud first world and own those integrations end to end working closely with business units. You will design and build for efficiency, reliability, security and scalability. As a consultant, you will help drive a data engineering culture and advocate best practices.

Mandatory experience

    • 1-6 years of relevant experience
    • Strong SQL skills and data literacy
    • Hands-on experience designing and developing data integrations, either in ETL tools, cloud native tools or in custom software
    • Proficiency in scripting and automation (e.g. PowerShell, Bash, Python)
    • Experience in an enterprise data environment
    • Strong communication skills

Desirable experience

    • Ability to work on data architecture, data models, data migration, integration and pipelines
    • Ability to work on data platform modernisation from on-premise to cloud-native
    • Proficiency in data security best practices
    • Stakeholder management experience
    • Positive attitude with the flexibility and ability to adapt to an ever-changing technology landscape
    • Desire to gain breadth and depth of technologies to support customer's vision and project objectives

What to expect if you join Servian?

    • Learning & Development: We invest heavily in our consultants and offer internal training weekly (both technical and non-technical alike!) and abide by a ‘You Pass, We Pay’ policy.
    • Career progression: We take a longer-term view of every hire. We have a flat org structure and promote from within. Every hire is developed as a future leader and client adviser.
    • Variety of projects: As a consultant, you will have the opportunity to work across multiple projects in our client base, significantly increasing your skills and exposure in the industry.
    • Great culture: Working on the latest Apple MacBook Pro in our custom-designed offices in the heart of leafy Jayanagar, we provide a peaceful and productive work environment close to shops, parks and the metro station.
    • Professional development: We invest heavily in professional development both technically, through training and guided certification pathways, and in consulting, through workshops in client engagement and communication. Growth in our organisation happens from the growth of our people.
Read more
Futurense Technologies

at Futurense Technologies

1 recruiter
Rajendra Dasigari
Posted by Rajendra Dasigari
Bengaluru (Bangalore)
2 - 7 yrs
₹6L - ₹12L / yr
ETL
Data Warehouse (DWH)
Apache Hive
Informatica
Data engineering
+5 more
1. Create and maintain optimal data pipeline architecture
2. Assemble large, complex data sets that meet business requirements
3. Identify, design, and implement internal process improvements
4. Optimize data delivery and re-design infrastructure for greater scalability
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies
6. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
7. Work with internal and external stakeholders to assist with data-related technical issues and support data infrastructure needs
8. Create data tools for analytics and data scientist team members
 
Skills Required:
 
1. Working knowledge of ETL on any cloud (Azure / AWS / GCP)
2. Proficient in Python (Programming / Scripting)
3. Good understanding of any of the data warehousing concepts (Snowflake / AWS Redshift / Azure Synapse Analytics / Google Big Query / Hive)
4. In-depth understanding of principles of database structure
5.  Good understanding of any of the ETL technologies (Informatica PowerCenter / AWS Glue / Data Factory / SSIS / Spark / Matillion / Talend / Azure)
6. Proficient in SQL (query solving)
7. Knowledge of change management / version control tools (VSS / DevOps / TFS / GitHub / Bitbucket) and CI/CD (Jenkins)
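To make the data warehousing concepts in point 3 concrete, here is a minimal star-schema sketch (one dimension table, one fact table) in stdlib Python/SQLite; the schema and data are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales (product_id INTEGER, qty INTEGER);
INSERT INTO dim_product VALUES (1, 'grocery'), (2, 'electronics');
INSERT INTO fact_sales VALUES (1, 10), (1, 5), (2, 3);
""")

# The typical DWH query shape: aggregate a fact table by a dimension attribute.
result = dict(conn.execute("""
    SELECT d.category, SUM(f.qty)
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category
""").fetchall())
print(result)
```

The same fact/dimension split scales up to Snowflake, Redshift, Synapse, BigQuery or Hive; only the engine changes, not the modelling idea.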
Read more
RentoMojo

at RentoMojo

1 video
5 recruiters
Anand Pandey
Posted by Anand Pandey
Bengaluru (Bangalore)
1 - 2 yrs
₹5L - ₹7L / yr
Business Analysis
Windows Azure
PySpark
SQL
Data Warehouse (DWH)
+4 more
RESPONSIBILITIES & OWNERSHIP: THINGS THE ROLE CAN'T MISS
  • Setting KPIs, monitoring key trends, and helping stakeholders by generating insights from the data delivered.
  • Understanding user behaviour and performing root-cause analysis of changes in data trends across different verticals.
  • Get answers to business questions, identify areas of improvement, and identify opportunities for growth.
  • Work on ad-hoc requests for data and analysis.
  • Work with cross-functional teams as and when required to automate reports and create informative dashboards based on problem statements.

WHO COULD BE A GREAT FIT:
Functional Experience
  • 1-2 years of experience working in Analytics as a Business or Data Analyst.
  • Analytical mind with a problem-solving aptitude.
  • Familiarity with Microsoft Azure & AWS, PySpark, Python, Databricks, Metabase; understanding of APIs, data warehouses, ETL, etc.
  • Proficient in writing complex queries in SQL.
  • Experience performing hands-on analysis on data across multiple datasets and databases, primarily using Excel, Google Sheets and R.
  • Ability to work across cross-functional teams proactively.
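A tiny sketch of the kind of trend monitoring and root-cause flagging described above, using only the Python standard library; the KPI series and the 10% threshold are invented for the example:

```python
from statistics import mean

# Hypothetical weekly KPI series (e.g. order conversion rate, in %).
weekly_rate = {"W1": 4.1, "W2": 4.0, "W3": 4.2, "W4": 3.1}

baseline = mean(list(weekly_rate.values())[:-1])   # mean of W1-W3
latest = weekly_rate["W4"]
drop_pct = round((baseline - latest) / baseline * 100, 1)

# Flag the latest week for root-cause analysis if the drop exceeds 10%.
flagged = drop_pct > 10
print(drop_pct, flagged)  # 24.4 True
```

A real analyst would pull the series from the warehouse via SQL and dig into segments (vertical, city, device) once a week is flagged.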
Read more
Gulf client

Gulf client

Agency job
via Fragma Data Systems by Priyanka U
Remote, Bengaluru (Bangalore)
5 - 9 yrs
₹10L - ₹20L / yr
PowerBI
Data Warehouse (DWH)
SQL
DAX
Power query
Key Skills:
  • Strong knowledge of Power BI (DAX + Power Query + Power BI Service + Power BI Desktop visualisations) and Azure data storages.
  • Should have experience with Power BI mobile dashboards.
  • Strong knowledge of SQL.
  • Good knowledge of DWH concepts.
  • Work as an independent contributor at the client location.
  • Implementing access control and imposing required security.
  • Candidate must have very good communication skills.
Read more
MNC

MNC

Agency job
via Fragma Data Systems by Priyanka U
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹12L - ₹20L / yr
PySpark
SQL
Data Warehouse (DWH)
ETL
SQL Developer with 7 years of relevant experience and strong communication skills.
 
Key responsibilities:
 
  • Creating, designing and developing data models
  • Prepare plans for all ETL (Extract/Transformation/Load) procedures and architectures
  • Validating results and creating business reports
  • Monitoring and tuning data loads and queries
  • Develop and prepare a schedule for a new data warehouse
  • Analyze large databases and recommend appropriate optimization for the same
  • Administer all requirements and design various functional specifications for data
  • Provide support to the Software Development Life cycle
  • Prepare various code designs and ensure efficient implementation of the same
  • Evaluate all codes and ensure the quality of all project deliverables
  • Monitor data warehouse work and provide subject matter expertise
  • Hands-on BI practices, data structures, data modeling, SQL skills
  • Minimum 1 year of experience in PySpark
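The "monitoring and tuning data loads and queries" responsibility can be illustrated with a small sketch that compares SQLite's query plan before and after adding an index; the table and column names are hypothetical, and a real warehouse would use its own EXPLAIN facility:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loads (batch_id INTEGER, status TEXT)")
conn.executemany("INSERT INTO loads VALUES (?, 'ok')",
                 [(i,) for i in range(1000)])

def plan(sql):
    # Flatten the EXPLAIN QUERY PLAN detail column into one string.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM loads WHERE batch_id = 500"
before = plan(query)            # full table scan
conn.execute("CREATE INDEX idx_batch ON loads(batch_id)")
after = plan(query)             # index lookup
print(before)
print(after)
```

The before-plan reports a table scan while the after-plan uses the index; the same diagnose-then-index loop is how slow warehouse queries are tuned.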
Read more
Our client is a leading IT service providers in India

Our client is a leading IT service providers in India

Agency job
via GSN Consulting by Mahendrand Deepak
Bengaluru (Bangalore)
5 - 8 yrs
₹15L - ₹20L / yr
TIBCO Spotfire
TIBCO
DXP
Spotfire
Dashboard
+2 more
  • 4+ years of extensive experience in TIBCO Spotfire dashboard development is a must
  • Design and create data visualizations in TIBCO Spotfire
  • Proven experience delivering Spotfire solutions that advance business goals and needs
  • Detailed knowledge of TIBCO Spotfire report-developer configuration
  • Experience creating all chart types that exist in Spotfire (scatter, line, bar, combo, pie, etc.) and manipulating every property associated with a visualization (trellis, color, shape, size, etc.)
  • Experience writing efficient SQL queries and views in relational databases such as Oracle, SQL Server, Postgres and BigQuery (optional)
  • Ability to incorporate multiple data sources into one Spotfire DXP and link that information via data table relations
  • Experience with Spotfire administrative tasks, load balancing, installation/configuration of servers and clients, upgrades and patches would be a plus
  • Strong background in analytical visualizations and building executive dashboards
  • In-depth knowledge & understanding of BI and data warehouse concepts
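As a hedged illustration of the "efficient SQL queries, views" requirement, here is the pre-aggregating-view pattern a dashboard typically reads, sketched in stdlib Python/SQLite with made-up names (Spotfire itself would consume such a view over a real database connection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings (region TEXT, value REAL);
INSERT INTO readings VALUES ('north', 10), ('north', 30), ('south', 5);

-- Pre-aggregate in the database so the dashboard pulls few rows.
CREATE VIEW v_region_avg AS
    SELECT region, AVG(value) AS avg_value
    FROM readings GROUP BY region;
""")

rows = conn.execute("SELECT * FROM v_region_avg ORDER BY region").fetchall()
print(rows)  # [('north', 20.0), ('south', 5.0)]
```

Pushing aggregation into a view keeps the visualization layer thin, which is usually the difference between a fast and a sluggish dashboard.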
Read more
Marktine

at Marktine

1 recruiter
Vishal Sharma
Posted by Vishal Sharma
Remote, Bengaluru (Bangalore)
3 - 10 yrs
₹5L - ₹15L / yr
Big Data
ETL
PySpark
SSIS
Microsoft Windows Azure
+4 more

Must Have Skills:

- Solid Knowledge on DWH, ETL and Big Data Concepts

- Excellent SQL Skills (With knowledge of SQL Analytics Functions)

- Working Experience on any ETL tool i.e. SSIS / Informatica

- Working Experience on any Azure or AWS Big Data Tools.

- Experience on Implementing Data Jobs (Batch / Real time Streaming)

- Excellent written and verbal communication skills in English; self-motivated with a strong sense of ownership and ready to learn new tools and technologies
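A minimal example of the SQL analytics (window) functions mentioned above, run here through Python's built-in SQLite (which supports window functions from SQLite 3.25); the sales table is hypothetical:

```python
import sqlite3  # bundled SQLite must be >= 3.25 for window functions

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (store TEXT, day TEXT, amount REAL);
INSERT INTO sales VALUES
    ('A', 'mon', 100), ('A', 'tue', 300), ('B', 'mon', 50);
""")

# Rank each day's sales within its store using ROW_NUMBER().
rows = conn.execute("""
    SELECT store, day,
           ROW_NUMBER() OVER (PARTITION BY store ORDER BY amount DESC) AS rk
    FROM sales
    ORDER BY store, rk
""").fetchall()
print(rows)
```

The same `OVER (PARTITION BY ... ORDER BY ...)` syntax carries over to Spark SQL and most warehouse engines.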

Preferred Skills:

- Experience on Py-Spark / Spark SQL

- AWS Data Tools (AWS Glue, AWS Athena)

- Azure Data Tools (Azure Databricks, Azure Data Factory)

Other Skills:

- Knowledge about Azure Blob, Azure File Storage, AWS S3, Elastic Search / Redis Search

- Knowledge on domain/function (across pricing, promotions and assortment).

- Implementation experience with schema and data validator frameworks (Python / Java / SQL)

- Knowledge on DQS and MDM.

Key Responsibilities:

- Independently work on ETL / DWH / Big data Projects

- Gather and process raw data at scale.

- Design and develop data applications using selected tools and frameworks as required and requested.

- Read, extract, transform, stage and load data to selected tools and frameworks as required and requested.

- Perform tasks such as writing scripts, web scraping, calling APIs, write SQL queries, etc.

- Work closely with the engineering team to integrate your work into our production systems.

- Process unstructured data into a form suitable for analysis.

- Analyse processed data.

- Support business decisions with ad hoc analysis as needed.

- Monitoring data performance and modifying infrastructure as needed.

Responsibility: Smart resource with excellent communication skills

 

 
Read more
Marktine

at Marktine

1 recruiter
Vishal Sharma
Posted by Vishal Sharma
Remote, Bengaluru (Bangalore)
3 - 7 yrs
₹5L - ₹10L / yr
Data Warehouse (DWH)
Spark
Data engineering
skill iconPython
PySpark
+5 more

Basic Qualifications

- Need to have a working knowledge of AWS Redshift.

- Minimum 1 year of experience designing and implementing a fully operational, production-grade, large-scale data solution on Snowflake Data Warehouse.

- 3 years of hands-on experience with building productized data ingestion and processing pipelines using Spark, Scala, Python

- 2 years of hands-on experience designing and implementing production-grade data warehousing solutions

- Expertise and excellent understanding of Snowflake Internals and integration of Snowflake with other data processing and reporting technologies

- Excellent presentation and communication skills, both written and verbal

- Ability to problem-solve and architect in an environment with unclear requirements
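One common pattern behind the production-grade ingestion pipelines described above is incremental loading by high-watermark. Below is a minimal stdlib sketch with invented data; a real pipeline would read from a source system and write to a warehouse such as Snowflake:

```python
# Hypothetical source rows with a last-modified timestamp.
source = [
    {"id": 1, "updated_at": "2021-01-01"},
    {"id": 2, "updated_at": "2021-01-03"},
    {"id": 3, "updated_at": "2021-01-05"},
]

watermark = "2021-01-02"  # timestamp of the last successful load

# Only pull rows newer than the watermark, then advance it.
delta = [r for r in source if r["updated_at"] > watermark]
watermark = max(r["updated_at"] for r in delta)
print([r["id"] for r in delta], watermark)  # [2, 3] 2021-01-05
```

Persisting the watermark between runs is what makes the pipeline restartable and idempotent, two properties interviewers for roles like this usually probe.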

Read more
EZEU (OPC) India Pvt Ltd

at EZEU (OPC) India Pvt Ltd

2 recruiters
HR Ezeu
Posted by HR Ezeu
Pune, Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹20L / yr
DevOps
CI/CD
Software deployment
Automation
skill iconPython
+16 more
What you will do
• Develop and maintain CI/CD tools to build and deploy scalable web and responsive applications in production environment
• Design and implement monitoring solutions that identify both system bottlenecks and production issues
• Design and implement workflows for continuous integration, including provisioning, deployment, testing, and version control of the software.
• Develop self-service solutions for the engineering team in order to deliver sites/software with great speed and quality
o Automating Infra creation
o Provide easy to use solutions to engineering team
• Conduct research on, test, and implement new metrics collection systems that can be reused and applied as engineering best practices
o Update our processes and design new processes as needed.
o Establish DevOps Engineer team best practices.
o Stay current with industry trends and source new ways for our business to improve.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Manage timely resolution of all critical and/or complex problems
• Maintain, monitor, and establish best practices for containerized environments.
• Mentor new DevOps engineers
What you will bring
• The desire to work in a fast-paced environment.
• 5+ years’ experience building, maintaining, and deploying production infrastructures in AWS or other cloud providers
• Containerization experience with applications deployed on Docker and Kubernetes
• Understanding of NoSQL and Relational Database with respect to deployment and horizontal scalability
• Demonstrated knowledge of distributed and scalable systems
• Experience with maintaining and deploying critical infrastructure components through Infrastructure-as-Code and configuration management tooling across multiple environments (Ansible, Terraform, etc.)
• Strong knowledge of DevOps and CI/CD pipeline (GitHub, BitBucket, Artifactory etc)
• Strong understanding of cloud and infrastructure components (server, storage, network, data, and applications) to deliver end-to-end cloud Infrastructure architectures and designs and recommendations
o AWS services like S3, CloudFront, Kubernetes, RDS, Data Warehouses to come up with architecture/suggestions for new use cases.
• Test our system integrity, implemented designs, application developments and other processes related to infrastructure, making improvements as needed
Good to have
• Experience with code quality tools, static or dynamic code analysis and compliance and undertaking and resolving issues identified from vulnerability and compliance scans of our infrastructure
• Good knowledge of REST/SOAP/JSON web service API implementation
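As a small illustration of the REST/JSON handling mentioned in the last bullet, here is a hedged sketch that validates a hypothetical health-check payload; the payload shape and the latency budget are assumptions made up for the example:

```python
import json

# Hypothetical JSON body returned by a service health endpoint.
payload = '{"service": "api", "status": "up", "latency_ms": 42}'

def healthy(raw: str) -> bool:
    data = json.loads(raw)
    # Require the expected status and an acceptable latency budget.
    return data.get("status") == "up" and data.get("latency_ms", 1e9) < 500

print(healthy(payload))  # True
```

In a monitoring pipeline this check would run against live endpoints on a schedule and feed an alerting system rather than a print statement.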
Read more
MNC

MNC

Agency job
via Fragma Data Systems by Priyanka U
Bengaluru (Bangalore)
5 - 9 yrs
₹15L - ₹25L / yr
Data Warehouse (DWH)
dwh
warehousing
Datawarehousing
SQL
+1 more
Work Days: Sunday through Thursday
Week off: Friday & Saturday
Day Shift.


Key responsibilities:

  • Creating, designing and developing data models
  • Prepare plans for all ETL (Extract/Transformation/Load) procedures and architectures
  • Validating results and creating business reports
  • Monitoring and tuning data loads and queries
  • Develop and prepare a schedule for a new data warehouse
  • Analyze large databases and recommend appropriate optimization for the same
  • Administer all requirements and design various functional specifications for data
  • Provide support to the Software Development Life cycle
  • Prepare various code designs and ensure efficient implementation of the same
  • Evaluate all codes and ensure the quality of all project deliverables
  • Monitor data warehouse work and provide subject matter expertise
  • Hands-on BI practices, data structures, data modeling, SQL skills


Hard Skills for a Data Warehouse Developer:

  • Hands-on experience with ETL tools e.g., DataStage, Informatica, Pentaho, Talend
  • Sound knowledge of SQL
  • Experience with SQL databases such as Oracle, DB2, and SQL Server
  • Experience using Data Warehouse platforms e.g., SAP, Birst
  • Experience designing, developing, and implementing Data Warehouse solutions
  • Project management and system development methodology
  • Ability to proactively research solutions and best practices

Soft Skills for Data Warehouse Developers:

  • Excellent Analytical skills
  • Excellent verbal and written communications
  • Strong organization skills
  • Ability to work on a team, as well as independently
Read more
 05Bn FinHealth

at 05Bn FinHealth

1 recruiter
Jennifer Jocelyn
Posted by Jennifer Jocelyn
Bengaluru (Bangalore)
9 - 15 yrs
₹50L - ₹70L / yr
Technical Architecture
Team Management
Web Development
Data engineering
Team building
+15 more
Main responsibilities:
  • Management of a growing technical team
  • Continued technical architecture design based on the product roadmap
  • Annual performance reviews
  • Work with DevOps to design and implement the product infrastructure

Strategic:
  • Testing strategy
  • Security policy
  • Performance and performance-testing policy
  • Logging policy

Experience:
  • 9-15 years of experience, including managing teams of developers
  • Technical & architectural expertise; have evolved a growing code base, technology stack and architecture over many years
  • Have delivered distributed cloud applications
  • Understand the value of high-quality code and can effectively manage technical debt
  • Stakeholder management
  • Work experience in consumer-focused early-stage (Series A, B) startups is a big plus

Other innate skills:
  • Great motivator of people, able to lead by example
  • Understand how to get the most out of people
  • Delivery of products to tight deadlines, but with a focus on high-quality code
  • Up-to-date knowledge of technical applications
Read more
Grand Hyper

at Grand Hyper

1 recruiter
Rahul Malani
Posted by Rahul Malani
Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹20L / yr
Data Warehouse (DWH)
Apache Hive
ETL
DWH Cloud
Hadoop
+3 more
The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include the development of Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key Responsibilities:
  • Create a GRAND Data Lake and Warehouse which pools all the data from different regions and stores of GRAND in GCC
  • Ensure source data quality measurement, enrichment and reporting of data quality
  • Manage all ETL and data model update routines
  • Integrate new data sources into the DWH
  • Manage the DWH cloud (AWS/Azure/Google) and infrastructure

Skills Needed:
  • Very strong in SQL; demonstrated experience with RDBMS and Unix shell scripting preferred (e.g., SQL, Postgres, MongoDB, etc.)
  • Experience with UNIX and comfortable working with the shell (bash or KRON preferred)
  • Good understanding of data warehousing concepts; big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
  • Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments
  • Working with data delivery teams to set up new Hadoop users, including setting up Linux users and setting up and testing HDFS, Hive, Pig and MapReduce access for the new users
  • Cluster maintenance, as well as creation and removal of nodes, using tools like Ganglia, Nagios, Cloudera Manager Enterprise and other tools
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines
  • Screen Hadoop cluster job performance and capacity planning
  • Monitor Hadoop cluster connectivity and security
  • File system management and monitoring
  • HDFS support and maintenance
  • Collaborating with application teams to install operating system and Hadoop updates, patches and version upgrades when required
  • Define, develop, document and maintain Hive-based ETL mappings and scripts
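The "source data quality measurement" responsibility above can be sketched minimally as a per-column null-rate computation over a batch of records; the records and field names are invented for the example:

```python
# Hypothetical batch of source records, some with missing values.
rows = [
    {"sku": "A1", "price": 10.0},
    {"sku": None, "price": 12.5},
    {"sku": "B2", "price": None},
    {"sku": "C3", "price": 8.0},
]

columns = ["sku", "price"]
# Fraction of null values per column - a basic data quality metric.
null_rate = {
    c: sum(r[c] is None for r in rows) / len(rows) for c in columns
}
print(null_rate)  # {'sku': 0.25, 'price': 0.25}
```

In a real pipeline these metrics would be computed per source and per load (e.g. in Hive or Spark), stored over time, and reported so drift in data quality is caught before it reaches the warehouse.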
Read more