Data Warehouse (DWH) Jobs in Bangalore (Bengaluru)

Explore top Data Warehouse (DWH) job opportunities in Bangalore (Bengaluru) from top companies & startups. All jobs are added by verified employees who can be contacted directly below.

Diggibyte Technology Private Limited

Agency job
via KUKULKAN by Pragathi P
Bengaluru (Bangalore)
5 - 7 yrs
₹9L - ₹15L / yr
ETL
Informatica
Data Warehouse (DWH)
SQL
Dimensional modeling
+1 more

Data Modeler JD:

1. Understand and translate business needs into dimensional models supporting long-term solutions.
2. Experience building models in ERwin or similar tools.
3. Experience with, and understanding of, dimensional data models, Customer 360, and entity-relationship models (a minimal star-schema sketch follows this list).
4. Work with the development team to implement data strategies, build data flows and develop conceptual data models.
5. Create logical and physical data models using best practices to ensure high data quality and reduced redundancy.
6. Optimize and update logical and physical data models to support new and existing projects.
7. Maintain conceptual, logical, and physical data models along with corresponding metadata.
8. Develop best practices for standard naming conventions and coding practices to ensure consistency of data models.
9. Recommend opportunities for reuse of data models in new environments.
10. Perform reverse engineering of physical data models from databases and SQL scripts.
11. Evaluate models and physical databases for variances and discrepancies.
12. Validate business data objects for accuracy and completeness.
13. Analyze data-related system integration challenges and propose appropriate solutions.
14. Develop data models according to company standards.
15. Guide system analysts, engineers, programmers and others on project limitations and capabilities, performance requirements and interfaces.
16. Good to have: home appliance/retail domain knowledge and Azure Synapse.
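
As context for the dimensional-modelling items above (items 3 and 5-7), here is a minimal star-schema sketch in standard SQL. The table and column names are hypothetical illustrations, not taken from the posting:

    -- Hypothetical retail star schema: one fact table, two dimensions.
    CREATE TABLE dim_date (
        date_key    INT PRIMARY KEY,   -- surrogate key, e.g. 20240115
        full_date   DATE NOT NULL,
        month_name  VARCHAR(10),
        year_number INT
    );

    CREATE TABLE dim_product (
        product_key  INT PRIMARY KEY,  -- surrogate key
        product_name VARCHAR(100),
        category     VARCHAR(50)
    );

    CREATE TABLE fact_sales (
        date_key     INT REFERENCES dim_date (date_key),
        product_key  INT REFERENCES dim_product (product_key),
        quantity     INT,
        sales_amount DECIMAL(12, 2)
    );

The fact table stays narrow and additive, while descriptive attributes live in the dimensions - the redundancy-reduction point items 5-7 are getting at.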

 

Job Functions: Information Technology 

 

Employment Type - Full-time 

 

Thank you!

 

 


Sixt R&D

Agency job
Bengaluru (Bangalore)
5 - 8 yrs
₹11L - ₹14L / yr
SQL
Python
RESTful APIs
Business Intelligence (BI)
QuickSight
+6 more

Technical-Requirements: 

  • Bachelor's Degree in Computer Science or a related technical field, and solid relevant experience. 
  • A strong grasp of SQL/Presto and at least one scripting or programming language (Python preferred); see the example query after this list. 
  • Experience with enterprise-class BI tools and their auditing, along with automation using REST APIs. 
  • Experience with reporting tools – QuickSight (preferred, at least 2 years hands-on). 
  • Tableau/Looker (either one suffices, with at least 5 years of hands-on experience). 
  • 5+ years of experience with, and detailed knowledge of, data warehouse technical architectures, data modelling, infrastructure components, ETL/ELT and reporting/analytic tools and environments, data structures and hands-on SQL coding. 
  • 5+ years of demonstrated quantitative and qualitative business intelligence experience. 
  • Experience delivering significant business impact through product analysis. 
  • 4+ years of large IT project delivery for BI-oriented projects using an agile framework. 
  • 2+ years of working with very large data warehousing environments. 
  • Experience in designing and delivering cross-functional custom reporting solutions. 
  • Excellent oral and written communication skills, including the ability to communicate effectively with both technical and non-technical stakeholders. 
  • Proven ability to meet tight deadlines, multi-task, and prioritize workload. 
  • A work ethic based on a strong desire to exceed expectations. 
  • Strong analytical and challenge-process skills.
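
As a rough illustration of the SQL/Presto skill asked for above, a short analytic query; the trips table and its columns are hypothetical:

    -- Presto/ANSI SQL: daily trip counts ranked with a window function.
    SELECT trip_date,
           COUNT(*) AS trip_count,
           RANK() OVER (ORDER BY COUNT(*) DESC) AS busiest_day_rank
    FROM trips
    WHERE trip_date >= DATE '2022-01-01'
    GROUP BY trip_date
    ORDER BY busiest_day_rank;
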
Posted by Alfiya Khan
Pune, Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹25L / yr
Big Data
Data Warehouse (DWH)
Data modeling
Apache Spark
Data integration
+10 more
Company Profile
XpressBees – a logistics company started in 2015 – is amongst the fastest growing companies of its sector. While we started off rather humbly in the space of ecommerce B2C logistics, the last 5 years have seen us steadily progress towards expanding our presence. Our vision to evolve into a strong full-service logistics organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross border operations. Our strong domain expertise and constant focus on meaningful innovation have helped us rapidly evolve as the most trusted logistics partner of India. We have progressively carved our way towards best-in-class technology platforms, an extensive network reach, and a seamless last mile management system. While on this aggressive growth path, we seek to become the one-stop-shop for end-to-end logistics solutions. Our big focus areas for the very near future include strengthening our presence as service providers of choice and leveraging the power of technology to improve efficiencies for our clients.

Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform and infrastructure to support high-quality and agile decision-making in our supply chain and logistics workflows. You will define the way we collect and operationalize data (structured/unstructured), and build production pipelines for our machine learning models and (RT, NRT, batch) reporting & dashboarding requirements. You will use your experience with modern cloud and data frameworks to build products (with storage and serving systems) that drive optimisation and resilience in the supply chain via data visibility, intelligent decision-making, insights, anomaly detection and prediction.

What You Will Do
• Design and develop the data platform and data pipelines for reporting, dashboarding and machine learning models. These pipelines would productionize machine learning models and integrate with agent review tools.
• Meet data completeness, correctness and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support business needs. Come up with logical and physical database designs across platforms (MPP, MR, Hive/Pig) that are optimal for different use cases (structured/semi-structured). Envision & implement the optimal data modelling, physical design and performance-optimization technique/approach required for the problem (see the sketch after this list).
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines, and envision and build their successors.
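
As a sketch of the physical-design choices referenced above (a minimal example assuming HiveQL syntax and hypothetical table/column names):

    -- Partitioning by date and storing as ORC are typical physical-design
    -- levers for large, scan-heavy logistics workloads.
    CREATE TABLE shipment_events (
        shipment_id STRING,
        event_type  STRING,
        payload     MAP<STRING, STRING>   -- semi-structured attributes
    )
    PARTITIONED BY (event_date STRING)
    STORED AS ORC;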

Qualifications & Experience relevant for the role

• A bachelor's degree in Computer Science or a related field with 6 to 9 years of technology experience.
• Knowledge of relational and NoSQL data stores, stream processing and micro-batching, to make technology & design choices.
• Strong experience in system integration, application development, ETL and data-platform projects. Talented across technologies used in the enterprise space.
• Software development experience, including expertise in relational and dimensional modelling and exposure across the full SDLC process.
• Experience in cloud architecture (AWS).
• Proven track record of keeping existing technical skills current and developing new ones, so that you can make strong contributions to deep architecture discussions around systems and applications in the cloud (AWS).
• Characteristics of a forward thinker and self-starter who flourishes with new challenges and adapts quickly to new knowledge.
• Ability to work with cross-functional teams of consulting professionals across multiple projects.
• Knack for helping an organization understand application architectures and integration approaches, to architect advanced cloud-based solutions, and to help launch the build-out of those systems.
• Passion for educating, training, designing, and building end-to-end systems.

Rapidly growing fintech SaaS firm that propels business growth

Agency job
via Jobdost by Mamatha A
Bengaluru (Bangalore)
5 - 10 yrs
₹30L - ₹45L / yr
Google Adwords
PPC
Data Warehouse (DWH)
SQL
Web Analytics
+7 more

What is the role?

We are looking for a Senior Performance Marketing manager (PPC/SEM) who will be responsible for paid advertising for this company across Google Ads, Social ads and other demand-gen channels.

Our ideal candidate has a blend of analytical and creative mindset, is passionate about driving metrics while also being very sensitive to brand and user experience, and excels at operating in highly collaborative and cross-functional settings. This role partners closely with our sales, product, design, and broader marketing teams.

Key responsibilities

  • Strategise, execute, monitor, and manage campaigns across multiple platforms such as Google Ads (incl. Search, Display & YouTube), Facebook Ads & LinkedIn Ads.
  • Oversee growth in performance campaigns to meet the brand's business goals and strategies.
  • Should have hands-on experience managing landing pages, keyword plans, ad copies, display ads, etc.
  • Should have extremely good analytical skills to spot the signals in the campaigns and optimize them using these insights. 
  • Implement ongoing A/B and user experience testing for ads, quality score, placements, dynamic landing pages, and measure their effectiveness.
  • Monitor campaign performance & budget pacing on a day-to-day basis. 
  • Measure campaign performance parameters methodically, analyze campaign performance, and compile and present detailed reports with proactive insights.
  • Stay informed on the latest trends, best practices, and standards in online advertising across demand-gen channels.
  • Perform media mix modeling. Design, develop, and monitor other digital media buying campaigns.

What are we looking for?

  • 5-10 years of pure PPC experience, preferably in a SaaS company managing annual budgets of more than 2 mn USD.
  • Highly comfortable with Google Ads Editor, LinkedIn Ads, Facebook Business Manager & such.
  • Strong working knowledge of PPC automations/rules/scripts and best practices, with the ability to analyze campaign metrics in Excel/Data Studio and optimize campaigns with insights.
  • Experience working with ad channel APIs and other data APIs to deep-dive into metrics & make data-informed optimisations.
  • [Good to have] Working knowledge of SQL, data warehouses (BigQuery), data connectors/pipelines, blends/joins (for blending multiple data sources), etc.
  • In-depth experience with GA4. Clear understanding of web analytics.
  • Experience running campaigns for US, European, and global markets.

What can you look for?

A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts yet maintain the quality of content, interact and share your ideas, and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at this company.

We are

It is a rapidly growing fintech SaaS firm that propels business growth while focusing on human motivation. Backed by Giift and Apis Partners Growth Fund II, the company offers a suite of three products - Plum, Empuls, and Compass. The company works with more than 2,000 clients across 10+ countries and over 2.5 million users. Headquartered in Bengaluru, the company is a 300+ strong team with global offices in Dubai, San Francisco, Dublin, Singapore, and New Delhi.

Way forward

We look forward to connecting with you. As you may take time to review this opportunity, we will wait around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We assure you, however, that we will try to maintain a reasonable time window for closing this requirement. Candidates will be kept informed and updated on their feedback and application status.

Posted by Newali Hazarika
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹35L / yr
ETL
Informatica
Data Warehouse (DWH)
Data engineering
Oracle
+7 more

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

Technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc. 

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of rows into the data warehouse (a minimal model sketch follows this list).

  • Implement Data warehouse entities with common re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
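
A minimal sketch of what such a DBT model can look like, assuming a hypothetical raw.payments source; this is illustrative, not the company's actual code:

    -- models/fct_daily_payments.sql (hypothetical model name)
    -- Incremental materialization: only rows newer than the last run are processed.
    {{ config(materialized='incremental', unique_key='payment_date') }}

    select
        cast(created_at as date) as payment_date,
        count(*)                 as payment_count,
        sum(amount)              as total_amount
    from {{ source('raw', 'payments') }}
    {% if is_incremental() %}
    where created_at > (select max(payment_date) from {{ this }})
    {% endif %}
    group by 1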

 

What To Bring

  • 5+ years of software development experience; startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 5+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced in formulating ideas, building proofs-of-concept (POC) and converting them into production-ready projects

  • Experience building and deploying applications on on-premise and cloud-based (AWS or Google Cloud) infrastructure

  • Basic understanding of Kubernetes & Docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency in verbal and written English.

 

Agency job
via Impetus by Gangadhar TM
Bengaluru (Bangalore), Pune, Hyderabad, Indore, Noida, Gurugram
10 - 16 yrs
₹30L - ₹50L / yr
Big Data
Data Warehouse (DWH)
Product Management

Job Title: Product Manager

 

Job Description

Bachelor's or master's degree in computer science, or equivalent experience.
Worked as Product Owner before and took responsibility for a product or project delivery.
Well-versed with data warehouse modernization to Big Data and Cloud environments.
Good knowledge* of any of the Cloud (AWS/Azure/GCP) – Must Have
Practical experience with continuous integration and continuous delivery workflows.
Self-motivated with strong organizational/prioritization skills and ability to multi-task with close attention to detail.
Good communication skills
Experience in working within a distributed agile team
Experience in handling migration projects – Good to Have
 

*Data Ingestion, Processing, and Orchestration knowledge

 

Roles & Responsibilities


Responsible for coming up with innovative and novel ideas for the product.
Define product releases, features, and roadmap.
Collaborate with product teams on defining product objectives, including creating a product roadmap, delivery, market research, customer feedback, and stakeholder inputs.
Work with the Engineering teams to communicate release goals and be a part of the product lifecycle. Work closely with the UX and UI team to create the best user experience for the end customer.
Work with the Marketing team to define GTM activities.
Interface with Sales & Customer teams to identify customer needs and product gaps
Perform market and competition analysis.
Participate in the Agile ceremonies with the team, define epics, user stories, acceptance criteria
Ensure product usability from the end-user perspective

 

Mandatory Skills

Product Management, DWH, Big Data

Posted by Anjali Mohandas
Remote, Bengaluru (Bangalore), Pune, Hyderabad, Mumbai
4 - 8 yrs
₹15L - ₹28L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+5 more

4-6 years of total experience in data warehousing and business intelligence

3+ years of solid Power BI experience (Power Query, M-Query, DAX, Aggregates)

2 years’ experience building Power BI using cloud data (Snowflake, Azure Synapse, SQL DB, data lake)

Strong experience building visually appealing UI/UX in Power BI

Understand how to design Power BI solutions for performance (composite models, incremental refresh, analysis services)

Experience building Power BI using large data in direct query mode

Expert SQL background (query building, stored procedures, optimizing performance)


A fast-growing SaaS commerce company (permanent WFH & office)

Agency job
via Jobdost by Mamatha A
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹25L / yr
SQL
Relational Database (RDBMS)
Amazon Redshift
PostgreSQL
Data Analytics
+3 more

What is the role?

You will be responsible for building and maintaining highly scalable data infrastructure for our cloud-hosted SAAS product. You will work closely with the Product Managers and Technical team to define and implement data pipelines for customer-facing and internal reports.

Key Responsibilities

  • Understand the business processes and requirements thoroughly and convert them into reports.
  • Suggest the right approach to the users of the reports.
  • Develop, maintain, and manage advanced reporting, analytics, dashboards and other BI solutions.

What are we looking for?

An enthusiastic individual with the following skills. Please do not hesitate to apply if you do not match all of it. We are open to promising candidates who are passionate about their work and are team players.

  • Education - BE/MCA or equivalent
  • Good experience in working on the performance side of reports.
  • Expert-level knowledge of querying in any RDBMS, preferably Redshift or Postgres (see the short example after this list)
  • Expert-level knowledge of data warehousing concepts
  • Advanced-level scripting to create calculated fields, sets, parameters, etc.
  • Degree in mathematics, computer science, information systems, or related field.
  • 5-7 years of exclusive experience with Tableau and data warehouses.
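
To make the "performance side of reports" concrete, a small hedged example of the kind of tuning work involved, using a hypothetical orders table:

    -- Inspect the plan before optimizing (works in both Postgres and Redshift).
    EXPLAIN
    SELECT customer_id,
           SUM(order_amount) AS total_spend
    FROM orders
    WHERE order_date >= DATE '2022-01-01'
    GROUP BY customer_id;

    -- In Redshift, distribution and sort keys on the fact table are a typical
    -- lever for report performance (hypothetical columns):
    -- CREATE TABLE orders (...) DISTKEY (customer_id) SORTKEY (order_date);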

Whom will you work with?

You will work with a top-notch tech team, working closely with the CTO and product team.  

What can you look for?

A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts yet maintain the quality of content, interact and share your ideas, and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits.

We are

A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore and Dublin. We have three products in our portfolio: Plum, Empuls and Compass. We work with over 1,000 global clients. We help our clients in engaging and motivating their employees, sales teams, channel partners or consumers for better business results.

 


A fast-growing SaaS commerce company (permanent WFH & office)

Agency job
via Jobdost by Mamatha A
Remote, Bengaluru (Bangalore)
6 - 9 yrs
₹25L - ₹40L / yr
Amazon Web Services (AWS)
Data Warehouse (DWH)
MySQL
NOSQL Databases
PostgreSQL
+4 more

Job Description :

We are looking for a candidate with a strong background in the design and implementation of scalable architecture and a good understanding of algorithms, data structures, and design patterns. The candidate must be ready to learn new tools, languages, and technologies.

Skills :

Microservices, MySQL/Postgres, Kafka/Message Queues, Elasticsearch, Data pipelines, AWS Cloud, Clickhouse/Redshift

What you need to succeed in this role

  • Minimum 6 years of experience
  • Good understanding of various database types: RDBMS, NoSQL, GraphDB, etc.
  • Ability to build highly stable, reliable APIs and backend services.
  • Should be familiar with distributed, high-availability database systems
  • Experience with queuing systems like Kafka
  • Hands-on experience with cloud infrastructure: AWS/GCP/Azure
  • A big plus if you know one or more of the following: Confluent ksqlDB, Kafka Connect, Kafka Streams
  • Hands-on experience with data warehouse/OLAP systems such as Redshift or ClickHouse is an added plus. 
  • Good communication and interpersonal skills

Benefits of joining us

  • Ability to join a small and growing team, and work with some of the coolest people you've ever met
  • Opportunity to make an impact, and leave your mark on this organization.
  • Competitive compensation, with the ability to shape your own career trajectory
  • Go Extra Mile with Learning and Development

What can you look for?

A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts yet maintain the quality of content, interact and share your ideas, and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at Xoxoday.

We are

A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore and Dublin. We have three products in our portfolio: Plum, Empuls and Compass. Works with over 1000 global clients. We help our clients in engaging and motivating their employees, sales teams, channel partners or consumers for better business results.

Posted by Jayasimha Kulkarni
Remote, Bengaluru (Bangalore)
4 - 8 yrs
₹10L - ₹28L / yr
Data engineering
Data Structures
Programming
Python
C#
+3 more

 

Job Description - Sr Azure Data Engineer

 

 

Roles & Responsibilities:

  1. Hands-on programming in C#/.Net.
  2. Develop serverless applications using Azure Function Apps.
  3. Writing complex SQL Queries, Stored procedures, and Views. 
  4. Creating Data processing pipeline(s).
  5. Develop / Manage large-scale Data Warehousing and Data processing solutions.
  6. Provide clean, usable data and recommend data efficiency, quality, and data integrity.

 

Skills

  1. Should have working experience in C#/.Net.
  2. Proficient in writing SQL queries, stored procedures, and views (a short T-SQL sketch follows this list)
  3. Should have worked on the Azure cloud stack.
  4. Should have working experience in developing serverless code.
  5. Must have worked on Azure Data Factory (mandatory).
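
A short T-SQL sketch of the views/stored-procedure work listed above; the table, view and procedure names are hypothetical:

    -- Hypothetical reporting view over an orders table.
    CREATE VIEW dbo.vw_MonthlySales AS
    SELECT DATEFROMPARTS(YEAR(OrderDate), MONTH(OrderDate), 1) AS MonthStart,
           SUM(Amount) AS TotalAmount,
           COUNT(*)    AS OrderCount
    FROM dbo.Orders
    GROUP BY DATEFROMPARTS(YEAR(OrderDate), MONTH(OrderDate), 1);
    GO

    -- Stored procedure wrapping the view with a parameterized date filter.
    CREATE PROCEDURE dbo.usp_GetMonthlySales @FromDate date
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT MonthStart, TotalAmount, OrderCount
        FROM dbo.vw_MonthlySales
        WHERE MonthStart >= @FromDate
        ORDER BY MonthStart;
    END;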

 

Experience 

  1. 4+ years of relevant experience

 

Bengaluru (Bangalore), Hyderabad, Pune, Indore, Gurugram, Noida
10 - 17 yrs
₹25L - ₹50L / yr
Product Management
Big Data
Data Warehouse (DWH)
ETL
Hi all, 
Greetings! We are looking for a Product Manager for our data modernization product. We need a resource with good knowledge of Big Data/DWH, strong stakeholder management, and presentation skills.
Posted by Vaishnavi Shingane
Bengaluru (Bangalore), Delhi
3 - 5 yrs
₹20L - ₹40L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+12 more

About Rupifi

Rupifi is a B2B payments solution that allows marketplaces to extend risk-free payment & credit terms to their SME retailers. By making it easier to get paid, we help manufacturers, online/offline marketplaces, distributors, and wholesalers increase sales, while enabling SME retailers better manage their cash flow & purchase inventory. These merchants use Rupifi in all their sales channels, including ecommerce checkouts. 


Rupifi is integrated into 20+ Indian B2B marketplaces today, including some of the largest ones. Rupifi is backed by top global investors - Tiger Global, Bessemer Venture Partners, and Quona Capital.


Over the next five years, B2B payments present an opportunity that is almost without parallel in the past three decades. We are looking for engineering leads who are passionate about building a world-class B2B payments & credit platform.

Core Infrastructure Team

The team is responsible for taking a platform approach to core infrastructure engineering problems ranging from databases, log management, data platform, CI/CD infrastructure, monitoring, and reliability.

 

What you will do (or learn):

  1. Build our application stack on AWS using infrastructure as code (read: Terraform).
  2. Build state-of-the-art CI/CD pipelines.
  3. Manage data warehouses and data pipelines.
  4. Work on infrastructure and data security.
  5. Build a state-of-the-art log management system and the tooling around it.
  6. Build monitoring and alerting systems.

 

What do we expect from you?

  1. 3 to 10 years of experience with DevOps or SRE principles.
  2. Good fundamentals of database management and other distributed-systems management.
  3. Experience in infrastructure as code or other configuration management systems.
  4. Experience in scripting languages (like Bash, Python, Golang, etc.)
  5. Good understanding of Linux systems
  6. Strong debugging and troubleshooting skills
  7. Experience in tooling around monitoring, CI/CD, and log management systems.

A Pre-series A funded FinTech Company

Agency job
via GoHyre by Avik Majumder
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹30L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+6 more

Responsibilities:

  • Ensure and own Data integrity across distributed systems.
  • Extract, Transform and Load data from multiple systems for reporting into BI platform.
  • Create Data Sets and Data models to build intelligence upon.
  • Develop and own various integration tools and data points.
  • Hands-on development and/or design within the project in order to maintain timelines.
  • Work closely with the Project manager to deliver on business requirements OTIF (on time in full)
  • Understand the cross-functional business data points thoroughly and be SPOC for all data-related queries.
  • Work with both Web Analytics and Backend Data analytics.
  • Support the rest of the BI team in generating reports and analysis
  • Quickly learn and use bespoke & third-party SaaS reporting tools with little documentation.
  • Assist in presenting demos and preparing materials for Leadership.

 Requirements:

  • Strong experience in data warehouse modeling techniques and SQL queries
  • A good understanding of designing, developing, deploying, and maintaining Power BI report solutions
  • Ability to create KPIs, visualizations, reports, and dashboards based on business requirements
  • Knowledge and experience in prototyping, designing, and requirement analysis
  • Ability to implement row-level security on data and understand application security layer models in Power BI
  • Proficiency in writing DAX queries in Power BI Desktop.
  • Expertise in using advanced-level calculations on data sets
  • Experience in the fintech domain and stakeholder management.

Fintech Company

Agency job
via Jobdost by Sathish Kumar
Bengaluru (Bangalore)
2 - 4 yrs
₹7L - ₹12L / yr
Python
SQL
Data Warehouse (DWH)
Hadoop
Amazon Web Services (AWS)
+7 more

Purpose of Job:

Responsible for drawing insights from many sources of data to answer important business questions and help the organization make better use of data in their daily activities.


Job Responsibilities:

We are looking for a smart and experienced Data Engineer 1 who can work with a senior manager to:
⮚ Build DevOps solutions and CI/CD pipelines for code deployment
⮚ Build unit test cases for APIs and code in Python
⮚ Manage AWS resources including EC2, RDS, CloudWatch, Amazon Aurora, etc.
⮚ Build and deliver high-quality data architecture and pipelines to support business and reporting needs
⮚ Deliver on data architecture projects and implementation of next-generation BI solutions
⮚ Interface with other teams to extract, transform, and load data from a wide variety of data sources

Qualifications:
Education: MS/MTech/BTech graduates or equivalent with a focus on data science and quantitative fields (CS, Eng, Math, Eco)
Work Experience: Proven 1+ years of experience in data mining (SQL, ETL, data warehouse, etc.) and using SQL databases

 

Skills

Technical Skills
⮚ Proficient in Python and SQL. Familiarity with statistics or analytical techniques
⮚ Data warehousing experience with Big Data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.)
⮚ Working knowledge of tools and utilities - AWS, DevOps with Git, Selenium, Postman, Airflow, PySpark

Soft Skills
⮚ Deep curiosity and humility
⮚ Excellent storyteller and communicator
⮚ Design thinking

Bengaluru (Bangalore)
0 - 1 yrs
₹3L - ₹6L / yr
Go Programming (Golang)
Ruby on Rails (ROR)
Ruby
Python
Java
+7 more

Job Description

Job Description SQL DBA - Trainee Analyst/Engineer

Experience : 0 to 1 Years

No.of Positions: 2

Job Location: Bangalore

Notice Period: Immediate / Max 15 Days

The candidate should have strong SQL knowledge. Here are a few points:

    • Implement and maintain the database design
    • Create database objects (tables, indexes, etc.)
    • Write database procedures, functions, and triggers (a small example follows this list)
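
A small illustrative example of the object-creation tasks above (PostgreSQL syntax; the table and trigger names are hypothetical):

    -- Table plus index.
    CREATE TABLE accounts (
        account_id INT PRIMARY KEY,
        balance    NUMERIC(12, 2) NOT NULL,
        updated_at TIMESTAMP
    );
    CREATE INDEX idx_accounts_updated ON accounts (updated_at);

    -- Trigger function that stamps a row on every update.
    CREATE FUNCTION touch_updated_at() RETURNS trigger AS $$
    BEGIN
        NEW.updated_at := now();
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER trg_accounts_touch
    BEFORE UPDATE ON accounts
    FOR EACH ROW EXECUTE FUNCTION touch_updated_at();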

Good soft skills are a must (written and verbal communication)

Good team player

Ability to work in a 24x7 support model (rotation basis)

Strong fundamentals in Algorithms, OOPs and Data Structure

Should be flexible to support multiple IT platforms

Analytical Thinking

 

Additional Information :

Functional Area: IT Software - DBA, Datawarehousing

Role Category: Admin/ Maintenance/ Security/ Datawarehousing

Role: DBA

 

Education :

B.Tech/ B.E

Skills

SQL DBA, IMPLEMENTATION, SQL, DBMS, DATA WAREHOUSING

Posted by Rajesh C
Bengaluru (Bangalore), Chennai
12 - 15 yrs
₹50L - ₹60L / yr
Data Science
Machine Learning (ML)
ETL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+5 more
Job Title: Data Architect
Job Location: Chennai

Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive the Data Architecture strategy across various Data Lake platforms. You will help develop reference architectures and roadmaps to build highly available, scalable and distributed data platforms using cloud-based solutions to process high-volume, high-velocity and wide-variety structured and unstructured data. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast's industry-leading journalism.
Primary Responsibilities
The Data Architect is responsible for:
• Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
• Enterprise-scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration frameworks and highly scalable distributed systems using open-source and emerging data architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies is desirable.
• This role requires 15+ years of data solution architecture, design and development delivery experience.
• Solid experience in Agile methodologies (Kanban and SCRUM).
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases in the cloud.
• Proven leadership skills; demonstrated ability to mentor, influence and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, DataStage etc.)/ELT and data integration technologies.
• Experience in any one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and data lakes is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS etc.
• Understanding of Hadoop architecture and Hive SQL.
• Knowledge of at least one workflow orchestration tool.
• Understanding of Agile framework and delivery.

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure to workflow orchestration tools like Airflow is a plus
● Exposure to any one of the NoSQL databases would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of digital web events, ad streams, context models

About Condé Nast

CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms - in other words, a staggering amount of user data. Condé Nast made the right move to invest heavily in understanding this data and formed a whole new Data team entirely dedicated to data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel process innovation, further content enrichment, and increase market revenue. The Data team aimed to create a company culture where data was the common language and facilitate an environment where insights shared in real-time could improve performance.

The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost online revenue. We are broadly divided into four groups - Data Intelligence, Data Engineering, Data Science, and Operations (including Product and Marketing Ops, Client Services) - along with Data Strategy and monetization. The teams build capabilities and products to create data-driven solutions for better audience engagement.

What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
TekSystems

Agency job
via NDSSG DataSync Pvt Ltd by Hamza Bootwala
Bengaluru (Bangalore), Hyderabad
4 - 8 yrs
₹10L - ₹20L / yr
WebFOCUS
Snowflake schema
Sybase
Data Warehouse (DWH)
Oracle
+2 more

Required:
1) WebFOCUS BI Reporting
2) WebFOCUS Administration
3) Sybase or Oracle or SQL Server or Snowflake
4) DWH Skills

Nice to have:
1) Experience in SAP BO / Crystal report / SSRS / Power BI
2) Experience in Informix
3) Experience in ETL

Responsibilities:

• Technical knowledge of best practices for BI development/integration.
• The candidate must understand business processes, be a detail-oriented person and quickly grasp new concepts.
• Additionally, the candidate will have strong presentation, interpersonal, software development and work management skills.
• Strong advanced SQL programming skills are required.
• Proficient in MS Word, Excel, Access, and PowerPoint.
• Experience working with one or more BI reporting tools as an Analyst/Developer.
• Knowledge of data mining techniques and procedures, and knowing when their use is appropriate.
• Ability to present complex information in an understandable and compelling manner.
• Experience converting reports from one reporting tool to another.


A global provider of business process management services

Agency job
via Jobdost by Saida Jabbar
Bengaluru (Bangalore), UK
5 - 10 yrs
₹15L - ₹25L / yr
Data Visualization
PowerBI
ADF
Business Intelligence (BI)
PySpark
+11 more

Power BI Developer

We are looking for a senior visualization engineer with 5 years' experience in Power BI to develop and deliver solutions that enable delivery of information to audiences in support of key business processes. In addition, hands-on experience with Azure data services like ADF and Databricks is a must.

Ensure code and design quality through execution of test plans and assist in development of standards & guidelines working closely with internal and external design, business, and technical counterparts.

Candidates should have worked in agile development environments.

Desired Competencies:

  • Should have a minimum of 3 years of project experience using Power BI on the Azure stack.
  • Should have a good understanding and working knowledge of data warehousing and data modelling.
  • Good hands-on experience with Power BI
  • Hands-on experience with T-SQL/DAX/MDX/SSIS
  • Data warehousing on SQL Server (preferably 2016)
  • Experience in Azure Data Services – ADF, Databricks & PySpark
  • Manage own workload with minimum supervision.
  • Take responsibility for projects or issues assigned to them
  • Be personable, flexible and a team player
  • Good written and verbal communication skills
  • Be able to operate directly with users
at Koo

Posted by Neha Gandhi
Bengaluru (Bangalore)
4 - 8 yrs
₹30L - ₹70L / yr
Data engineering
Data Engineer
Big Data
Big Data Engineer
Data Warehouse (DWH)
+2 more

Problem Statement-Solution 

Only 10% of India speaks English and 90% speak over 25 languages and 1000s of dialects. The internet has largely been in English. A good part of India is now getting internet connectivity thanks to cheap smartphones and Jio. The non-English speaking internet users will balloon to about 600 million users out of the total 750 million internet users in India by 2020. This will make the vernacular segment one of the largest segments in the world - almost 2x the size of the US population. The vernacular segment has very few products that they can use on the internet.

One large human need is that of sharing thoughts and connecting with people of the same community on the basis of language and common interests. Twitter serves this need globally but the experience is mostly in English. There's a large unaddressed need for these vernacular users to express themselves in their mother tongue and connect with others from their community. Koo is a solution to this problem.


About Koo

Koo was founded in March 2020, as a micro-blogging platform in both Indian languages and English, which gives a voice to the millions of Indians who communicate in Indian languages.

Currently available in Assamese, Bengali, English, Hindi, Kannada, Marathi, Tamil and Telugu, Koo enables people from across India to express themselves online in their mother tongues. In a country where under 10% of the population speaks English as a native language, Koo meets the need for a social media platform that can deliver an immersive language experience to an Indian user, thereby enabling them to connect and interact with each other. The recently introduced ‘Talk to Type’ enables users to leverage the voice assistant to share their thoughts without having to type. In August 2021, Koo crossed 10 million downloads, in just 16 months of launch.

Since June 2021, Koo is available in Nigeria.  

Founding Team

Koo was founded by veteran internet entrepreneurs - Aprameya Radhakrishna (CEO, TaxiForSure) and Mayank Bidawatka (Co-founder, Goodbox & core team, redBus).

 

 

Technology Team & Culture

The technology team comprises sharp coders, technology geeks and folks who have been entrepreneurs or are entrepreneurial and extremely passionate about technology. Talent comes from the likes of Google, Walmart, Redbus, Dailyhunt. Anyone who is part of the technology team will have a lot to learn from their peers and mentors. Download our Android app and take a look at what we've built. The technology stack comprises a wide variety of cutting-edge technologies like Kotlin, Java 15, Reactive Programming, MongoDB, Cassandra, Kubernetes, AWS, NodeJS, Python, ReactJS, Redis, Aerospike, ML, deep learning etc. We believe in giving a lot of independence and autonomy to ownership-driven individuals.


Technology skill sets required for a matching profile

  1. Work experience of 4 to 8 years in building large-scale, high-user-traffic, consumer-facing applications, with a desire to work in a fast-paced startup.
  2. Development experience with real-time data analytics backend infrastructure on AWS
  3. Responsible for building data and analytical engineering solutions with standard e2e design & ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality.
  4. Responsible for enabling access to data in the AWS S3 storage layer and transformations in the Data Warehouse 
  5. Implement Data Warehouse entities with common re-usable data model designs with automation and data quality capabilities.
  6. Integrate domain data knowledge into development of data requirements.
  7. Identify downstream implications of data loads/migration (e.g., data quality, regulatory)

Company operates on a software-as-a-service (SaaS) model

Agency job
via Jobdost by Saida Jabbar
Bengaluru (Bangalore)
8 - 15 yrs
₹12L - ₹16L / yr
MariaDB
Relational Database (RDBMS)
Databases
MySQL
Microsoft Windows Azure
+6 more
As a Senior Database Developer, you will design stable and reliable databases according to our company's needs. You will be responsible for planning, developing, testing, improving and maintaining new and existing databases to help users retrieve data effectively. As part of our IT team, you will work closely with developers to ensure system consistency. You will also collaborate with administrators and clients to provide technical support and identify new requirements. Communication and organization skills are key for this position, along with a problem-solution attitude. You get to work with some of the best minds in the industry at a place where opportunity lurks everywhere and in everything.
Responsibilities
Your responsibilities are as follows.
• Working with cross-functional teams to develop robust solutions aligned with the business needs
• Maintaining communication and providing regular updates to the development team, ensuring solutions provided are fit for purpose
• Training other developers in the team on best practices and technologies
• Troubleshooting issues in the production environment, understanding the root cause and developing robust solutions
• Developing, implementing and maintaining solutions that are both reliable and scalable
• Capturing data analysis requirements effectively and representing them formally and visually through our data models
• Maintaining effective database performance by identifying and resolving production and application development problems
• Optimising the integration and installation of new releases
• Monitoring system performance; testing, troubleshooting and integrating new features
• Proactively recommending solutions to improve new and existing database systems
• Providing database support by coding utilities, resolving user questions and problems
• Ensuring compliance with database implementation procedures
• Performing code and design reviews as per the code review process
• Installing and organising information systems to guarantee company functionality
• Preparing accurate documentation and reports
• Migrating data from legacy systems to new solutions
• Analysing stakeholders among our current clients, company operations and applications, and programming requirements
• Collaborating with functional teams across the business to deliver end-to-end products and features enabling enhanced performance
Required Qualifications
We are looking for individuals who are curious, excited about learning, and able to navigate through the uncertainties and complexities that are associated with a growing company. Some qualifications that we think would help you thrive in this role are:
• Minimum 8 years of experience as a Database Administrator
• Strong knowledge of data structures and database systems
• In-depth expertise and hands-on experience with the MySQL/MariaDB Database Management System
• In-depth expertise and hands-on experience in database design, data maintenance, database security, data analysis and mining
• Hands-on experience with at least one web-hosting platform such as Microsoft Azure, AWS (Amazon Web Services) etc.
• Strong understanding of security principles and how they apply to web applications
• Basic knowledge of networking; desirable knowledge of business intelligence
• Desirable knowledge of data architectures related to data warehouse implementations
• Strong interpersonal skills and a desire to work collaboratively to achieve objectives
• Understanding of Agile methodologies
• Bachelor/Masters of CS/IT Engineering, BCA/MCA, B Sc/M Sc in CS/IT

Preferred Qualifications
• Sense of ownership and pride in your performance and its impact on the company's success
• Critical thinker, team player
• Excellent trouble-shooting and problem-solving skills
• Excellent analytical and strong organisational skills
• Good time-management skills
• Great interpersonal and communication skills
Posted by Alex P
Hyderabad, Bengaluru (Bangalore)
8 - 15 yrs
₹6L - ₹23L / yr
Tableau
Data Visualization
Data Warehouse (DWH)
SQL
PL/SQL
Short Description

Work at the client location as a Tableau Squad BA/Tech Lead, gathering visualization requirements and building business insights and dashboards.

Roles And Responsibilities
  • Work with Business stakeholders to gather Dashboarding/reporting requirements
  • Document user stories
  • High level and detail design for Tableau application
  • Suggest visualization options and collaborate on prototyping of Dashboards
  • Actively collaborate with customers and colleagues to ensure delivery excellence
  • Display a growth mindset by proactively seeking feedback
  • Act as technical lead for developers, guiding them on complex set-analysis expressions and dashboard development
Essential

Skills /Competencies:
  • Experience in reporting requirement analysis
  • Strong hands-on experience in Tableau design and development
  • Experience in set analysis, storytelling, data load scripting and security setup of Tableau reports
  • Experience in working with large-scale RDBMS (Oracle/Teradata preferred)
  • Working knowledge of QMC
  • Good SQL knowledge
  • Ability to present analysis and technical information to a non-technical audience
  • Ability to work independently and collaboratively, as part of a team
  • Excellent communication skills
  • Ability to create documented standards and procedures for others to follow
Bengaluru (Bangalore), Chennai, Pune, Gurugram
4 - 8 yrs
₹9L - ₹14L / yr
ETL
Data Warehouse (DWH)
Data engineering
Data modeling
BRIEF JOB RESPONSIBILITIES:

• Responsible for designing, deploying, and maintaining an analytics environment that processes data at scale
• Contribute design, configuration, deployment, and documentation for components that manage data ingestion, real time streaming, batch processing, data extraction, transformation, enrichment, and loading of data into a variety of cloud data platforms, including AWS and Microsoft Azure
• Identify gaps and improve the existing platform to improve quality, robustness, maintainability, and speed
• Evaluate new and upcoming big data solutions and make recommendations for adoption to extend our platform to meet advanced analytics use cases, such as predictive modeling and recommendation engines
• Data modelling and data warehousing at cloud scale using cloud-native solutions.
• Perform development, QA, and DevOps roles as needed to ensure total end-to-end responsibility for solutions

COMPETENCIES
• Experience building, maintaining, and improving Data Models / Processing Pipeline / routing in large scale environments
• Fluency in common query languages, API development, data transformation, and integration of data streams
• Strong experience with large-dataset platforms (e.g. Amazon EMR, Amazon Redshift, AWS Lambda & Fargate, Amazon Athena, Azure SQL Database, Azure Database for PostgreSQL, Azure Cosmos DB, Databricks)
• Fluency in multiple programming languages, such as Python, Shell Scripting, SQL, Java, or similar languages and tools appropriate for large scale data processing.
• Experience with acquiring data from varied sources such as: API, data queues, flat-file, remote databases
• Understanding of traditional Data Warehouse components (e.g. ETL, Business Intelligence Tools)
• Creativity to go beyond current tools to deliver the best solution to the problem
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹35L / yr
Data engineering
Data Engineer
Big Data
Big Data Engineer
Python
+10 more

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

Technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc. 

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of rows into the data warehouse.

  • Implement Data warehouse entities with common re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)

 

What To Bring

  • 3+ years of software development experience; startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 2+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced in formulating ideas, building proofs-of-concept (POC) and converting them into production-ready projects

  • Experience building and deploying applications on on-premise and cloud-based (AWS or Google Cloud) infrastructure

  • Basic understanding of Kubernetes & Docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency in verbal and written English.

 

 

 


A global business process management company

Agency job
via Jobdost by Saida Jabbar
Gurugram, Pune, Mumbai, Bengaluru (Bangalore), Chennai, Nashik
4 - 12 yrs
₹12L - ₹15L / yr
Data engineering
Data modeling
data pipeline
Data integration
Data Warehouse (DWH)
+12 more

 

 

Designation – Deputy Manager - TS


Job Description

  1. Total of 8/9 years of development experience in Data Engineering. B1/BII role.
  2. Minimum of 4/5 years in AWS data integrations, with very good data modelling skills.
  3. Should be very proficient in end-to-end AWS data solution design, which includes not only strong data ingestion and integration skills (both data at rest and data in motion) but also complete DevOps knowledge.
  4. Should have experience in delivering at least 4 Data Warehouse or Data Lake solutions on AWS.
  5. Should have very strong experience with Glue, Lambda, Data Pipeline, Step Functions, RDS, CloudFormation, etc.
  6. Strong Python skills.
  7. Should be an expert in cloud design principles, performance tuning and cost modelling. AWS certifications will be an added advantage.
  8. Should be a team player with excellent communication skills, able to manage their work independently with minimal or no supervision.
  9. Life Science & Healthcare domain background will be a plus.

Qualifications

BE/BTech/ME/MTech

 

Agency job
via Nu-Pie by Sanjay Biswakarma
Bengaluru (Bangalore)
4 - 8 yrs
₹7L - ₹15L / yr
Data engineering
Data Engineer
Data modeling
Scheme
Data Analytics
+18 more

Role description

  • Work with the application development team to implement database schemas, data strategies and data flows
  • Should have expertise across databases; responsible for gathering and analyzing data requirements and data flows to other systems
  • Perform database modeling and SQL scripting

 

Background/ Experience

  • 8+ years of experience in database modeling and SQL scripting
  • Experience in OLTP and OLAP database modeling
  • Experience in data modeling for relational (PostgreSQL) and non-relational databases
  • Strong scripting experience in P-SQL (views, stored procedures), Python, Spark.
  • Develop notebooks & jobs using the Python language
  • Big Data stack: Spark, Hadoop, Sqoop, Pig, Hive, HBase, Flume, Kafka, Storm
  • Should have a good understanding of Azure Cosmos DB and Azure DW

Education

  • Bachelor's/Master's degree in Information Technology/Computer Science or Engineering (or equivalent)

 

A Product Company

Agency job
via wrackle by Lokesh M
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹26L / yr
Looker
Big Data
Hadoop
Spark
Apache Hive
+4 more
Job Title: Senior Data Engineer/Analyst
Location: Bengaluru
Department: - Engineering 

Bidgely is looking for an extraordinary and dynamic Senior Data Analyst to be part of its core team in Bangalore. You must have delivered exceptionally high-quality, robust products dealing with large data. Be part of a highly energetic and innovative team that believes nothing is impossible with some creativity and hard work. 

Responsibilities 
● Design and implement a high-volume data analytics pipeline in Looker for Bidgely's flagship product.
● Implement data pipelines in the Bidgely Data Lake.
● Collaborate with product management and engineering teams to elicit & understand their requirements & challenges and develop potential solutions. 
● Stay current with the latest tools, technology ideas and methodologies; share knowledge by clearly articulating results and ideas to key decision makers. 

Requirements 
● 3-5 years of strong experience in data analytics and in developing data pipelines.
● Very good expertise in Looker.
● Strong in data modeling, developing SQL queries, and optimizing queries.
● Good knowledge of data warehouses (Amazon Redshift, BigQuery, Snowflake, Hive); a BigQuery sketch follows this list.
● Good understanding of Big Data applications (Hadoop, Spark, Hive, Airflow, S3, Cloudera).
● Attention to detail. Strong communication and collaboration skills.
● BS/MS in Computer Science or equivalent from premier institutes.
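
Looker itself is modelled in LookML, but the warehouse SQL underneath is where query development and optimization happen. Below is a hypothetical BigQuery pre-aggregation of the kind a Looker explore might sit on; the project, dataset, and column names are made up, and google-cloud-bigquery with configured credentials is assumed.

```python
# Minimal sketch: run a pre-aggregating warehouse query on BigQuery.
# Project, dataset, and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT home_id,
           DATE(event_ts) AS day,
           SUM(energy_wh) AS energy_wh
    FROM analytics.disaggregated_events
    GROUP BY home_id, day
"""

# Execute the query and iterate the result rows.
for row in client.query(sql).result():
    print(row["home_id"], row["day"], row["energy_wh"])
```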
Read more

India's best Short Video App

Agency job
via wrackle by Naveen Taalanki
icon
Bengaluru (Bangalore)
icon
4 - 12 yrs
icon
₹25L - ₹50L / yr
Data engineering
Big Data
Spark
Apache Kafka
Apache Hive
+26 more
What Makes You a Great Fit for The Role?

You’re awesome at and will be responsible for
 
Extensive programming experience with cross-platform development in one of the following: Java/Spring Boot, JavaScript/Node.js, Express.js, or Python
3-4 years of experience in big data analytics technologies like Storm, Spark/Spark Streaming, Flink, AWS Kinesis, Kafka streaming, Hive, Druid, Presto, Elasticsearch, Airflow, etc. (see the Kafka consumer sketch after this list)
3-4 years of experience in building high-performance RPC services using different high-performance paradigms: multi-threading, multi-processing, asynchronous programming (non-blocking IO), reactive programming
3-4 years of experience working with high-throughput, low-latency databases and cache layers like MongoDB, HBase, Cassandra, DynamoDB, ElastiCache (Redis + Memcached)
Experience with designing and building high-scale app backends and microservices leveraging cloud-native services on AWS like proxies, caches, CDNs, messaging systems, serverless compute (e.g. Lambda), monitoring and telemetry.
Strong understanding of distributed systems fundamentals around scalability, elasticity, availability, and fault-tolerance.
Experience in analysing and improving the efficiency, scalability, and stability of distributed systems and backend microservices.
5-7 years of strong design/development experience in building massively large-scale, high-throughput, low-latency distributed internet systems and products.
Good experience working with Hadoop and Big Data technologies like HDFS, Pig, Hive, Storm, HBase, Scribe, Zookeeper, NoSQL systems, etc.
Agile methodologies, sprint management, roadmap, mentoring, documenting, software architecture.
Liaison with Product Management, DevOps, QA, Client and other teams
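
As a minimal illustration of the Kafka-streaming skill in the list above, here is a sketch of a JSON event consumer using the kafka-python client; the topic, broker addresses, and field names are hypothetical.

```python
# Minimal sketch of a high-throughput stream consumer with kafka-python.
# Topic, brokers, and event fields are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "user-events",
    bootstrap_servers=["broker1:9092", "broker2:9092"],
    group_id="analytics-pipeline",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Downstream: update counters, push to a cache layer, or write to a sink.
    print(event.get("user_id"), event.get("action"))
```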
 
Your Experience Across The Years in the Roles You’ve Played
 
Have 5-7 years or more of total experience, with 2-3 years in a startup.
Have a B.Tech or M.Tech or equivalent academic qualification from a premier institute.
Experience in product companies working on internet-scale applications is preferred
Thoroughly aware of cloud computing infrastructure on AWS, leveraging cloud-native services and infrastructure services to design solutions.
Follow the Cloud Native Computing Foundation ecosystem, leveraging mature open-source projects, including an understanding of containerisation/Kubernetes.
 
You are passionate about learning or growing your expertise in some or all of the following
Data Pipelines
Data Warehousing
Statistics
Metrics Development
 
We Value Engineers Who Are
 
Customer-focused: We believe that doing what’s right for the creator is ultimately what will drive our business forward.
Obsessed with Quality: Your Production code just works & scales linearly
Team players. You believe that more can be achieved together. You listen to feedback and also provide supportive feedback to help others grow/improve.
Pragmatic: We do things quickly to learn what our creators desire. You know when it’s appropriate to take shortcuts that don’t sacrifice quality or maintainability.
Owners: Engineers at Chingari know how to positively impact the business.
Read more
icon
Remote, Bengaluru (Bangalore)
icon
3.5 - 8 yrs
icon
₹5L - ₹18L / yr
PySpark
Data engineering
Data Warehouse (DWH)
SQL
Spark
+1 more
Must-Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL (see the sketch below)
• Good experience with SQL databases; able to write queries of fair complexity.
• Should have excellent experience in Big Data programming for data transformations and aggregations
• Good at ELT architecture: business-rules processing and data extraction from the data lake into data streams for business consumption.
• Good customer communication skills.
• Good analytical skills
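
For the PySpark item above, a minimal sketch pairing DataFrame core functions with Spark SQL; the input path and column names are hypothetical.

```python
# Minimal sketch: DataFrame API and Spark SQL over the same data.
# Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("elt-demo").getOrCreate()

sales = spark.read.parquet("/data/lake/sales")

# DataFrame API: derive a column and filter.
recent = sales.withColumn("year", F.year("order_date")).filter(F.col("year") >= 2021)

# Spark SQL: expose the same data to SQL-first consumers.
recent.createOrReplaceTempView("recent_sales")
spark.sql("""
    SELECT region, SUM(amount) AS total_amount
    FROM recent_sales
    GROUP BY region
    ORDER BY total_amount DESC
""").show()
```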
 
 
Technology Skills (Good to Have):
  • Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
  • Experience in migrating on-premise data warehouses to data platforms on the Azure cloud.
  • Designing and implementing data engineering, ingestion, and transformation functions
  • Azure Synapse or Azure SQL Data Warehouse
  • Spark on Azure (available in HDInsight and Databricks)
 
Good to Have: 
  • Experience with Azure Analysis Services
  • Experience in Power BI
  • Experience with third-party solutions like Attunity/Stream sets, Informatica
  • Experience with PreSales activities (Responding to RFPs, Executing Quick POCs)
  • Capacity Planning and Performance Tuning on Azure Stack and Spark.
Read more
DP
Posted by sakshi nigam
icon
Bengaluru (Bangalore)
icon
2 - 8 yrs
icon
₹10L - ₹25L / yr
Data engineering
ETL
Data Warehouse (DWH)
Powershell
DA
+7 more
Who we are
 
We are a consultant-led organisation. We invest heavily in our consultants to ensure they have the technical skills and commercial acumen to be successful in their work.
 
Our consultants have a passion for data and solving complex problems. They are curious, ambitious and experts in their fields. We have developed a first-rate team, so you will be supported and learn from the best.

About the role

  • Collaborating with a team of like-minded and experienced engineers for Tier 1 customers, you will focus on data engineering on large complex data projects. Your work will have an impact on platforms that handle crores of customers and millions of transactions daily.

  • As an engineer, you will use the latest cloud services to design and develop reusable core components and frameworks to modernise data integrations in a cloud first world and own those integrations end to end working closely with business units. You will design and build for efficiency, reliability, security and scalability. As a consultant, you will help drive a data engineering culture and advocate best practices.

Mandatory experience

    • 1-6 years of relevant experience
    • Strong SQL skills and data literacy
    • Hands-on experience designing and developing data integrations, either in ETL tools, cloud native tools or in custom software
    • Proficiency in scripting and automation (e.g. PowerShell, Bash, Python)
    • Experience in an enterprise data environment
    • Strong communication skills

Desirable experience

    • Ability to work on data architecture, data models, data migration, integration and pipelines
    • Ability to work on data platform modernisation from on-premise to cloud-native
    • Proficiency in data security best practices
    • Stakeholder management experience
    • Positive attitude with the flexibility and ability to adapt to an ever-changing technology landscape
    • Desire to gain breadth and depth of technologies to support customer's vision and project objectives

What to expect if you join Servian?

    • Learning & Development: We invest heavily in our consultants and offer internal training weekly (both technical and non-technical alike!) and abide by a 'You Pass, We Pay' policy.
    • Career progression: We take a longer-term view of every hire. We have a flat org structure and promote from within. Every hire is developed as a future leader and client adviser.
    • Variety of projects: As a consultant, you will have the opportunity to work across multiple projects across our client base, significantly increasing your skills and exposure in the industry.
    • Great culture: Working on the latest Apple MacBook Pro in our custom-designed offices in the heart of leafy Jayanagar, we provide a peaceful and productive work environment close to shops, parks and the metro station.
    • Professional development: We invest heavily in professional development, both technically, through training and guided certification pathways, and in consulting, through workshops in client engagement and communication. Growth in our organisation happens from the growth of our people.
Read more
DP
Posted by Jaya Harjai
icon
Bengaluru (Bangalore)
icon
3 - 4 yrs
icon
₹18L - ₹35L / yr
Python
SQL
Data engineering
Big Data
Data Warehouse (DWH)
+3 more

Responsibilities:

  • Design, construct, install, test and maintain data pipelines and data management systems.
  • Ensure that all systems meet the business/company requirements as well as industry practices.
  • Integrate up-and-coming data management and software engineering technologies into existing data structures.
  • Develop processes for data mining, data modeling, and data production.
  • Create custom software components and analytics applications.
  • Collaborate with members of your team (e.g., Data Architects, the Software team, Data Scientists) on the project's goals.
  • Recommend different ways to constantly improve data reliability and quality.

 

Requirements:

  • Experience in a related field, with real-world skills and testimonials from former employers.
  • Familiar with data warehouses like Redshift, BigQuery and Athena (see the Athena sketch after this list).
  • Familiar with data processing systems like Flink, Spark and Storm.
  • Proficiency in Python and SQL, with work experience and proof of technical expertise.
  • A Master's degree in computer engineering or science can help fine-tune your skills on the job (a Master's isn't required, but it is always appreciated).
  • Intellectual curiosity to find new and unusual ways of solving data management issues.
  • Ability to approach data organization challenges while keeping an eye on what's important.
  • Basic data science knowledge is a must; should understand a bit of analytics.
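
To illustrate the Athena familiarity mentioned above, a minimal boto3 sketch that runs a query against a data lake table and polls for the result; the database, table, and results bucket are hypothetical.

```python
# Minimal sketch: query a lake table via Athena and print the result rows.
# Database, table, and output-bucket names are hypothetical.
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

run = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) AS events FROM clickstream GROUP BY event_type",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

# Athena is asynchronous: poll until the query reaches a terminal state.
query_id = run["QueryExecutionId"]
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```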
Read more
DP
Posted by Rajendra Dasigari
icon
Bengaluru (Bangalore)
icon
2 - 7 yrs
icon
₹6L - ₹12L / yr
ETL
Data Warehouse (DWH)
Apache Hive
Informatica
Data engineering
+5 more
1. Create and maintain optimal data pipeline architecture
2. Assemble large, complex data sets that meet business requirements
3. Identify, design, and implement internal process improvements
4. Optimize data delivery and re-design infrastructure for greater scalability
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies
6. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
7. Work with internal and external stakeholders to assist with data-related technical issues and support data infrastructure needs
8. Create data tools for analytics and data scientist team members
 
Skills Required:
 
1. Working knowledge of ETL on any cloud (Azure / AWS / GCP); a short sketch follows this list
2. Proficient in Python (programming / scripting)
3. Good understanding of any of the data warehousing platforms (Snowflake / AWS Redshift / Azure Synapse Analytics / Google BigQuery / Hive)
4. In-depth understanding of the principles of database structure
5. Good understanding of any of the ETL technologies (Informatica PowerCenter / AWS Glue / Data Factory / SSIS / Spark / Matillion / Talend / Azure)
6. Proficient in SQL (query solving)
7. Knowledge of change management / version control (VSS / Azure DevOps / TFS / GitHub / Bitbucket; CI/CD with Jenkins)
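
As a sketch of the ETL and SQL items above: a watermark-driven incremental load in plain Python with psycopg2, extracting only rows changed since the last run. Hosts, credentials, and table names are hypothetical, and first-run and bulk-load concerns (COPY, executemany) are left out for brevity.

```python
# Minimal incremental-ETL sketch: watermark read, delta extract, staged load.
# Connection details and table/column names are hypothetical.
import psycopg2

src = psycopg2.connect(host="source-host", dbname="appdb", user="etl", password="***")
tgt = psycopg2.connect(host="dwh-host", dbname="warehouse", user="etl", password="***")

# Watermark: the most recent change already loaded into the warehouse.
with tgt.cursor() as cur:
    cur.execute("SELECT COALESCE(MAX(updated_at), '1970-01-01') FROM stg_orders")
    high_water = cur.fetchone()[0]

# Extract the delta, apply a light transform, and load row by row
# (a real job would batch with executemany or COPY).
with src.cursor() as s_cur, tgt.cursor() as t_cur:
    s_cur.execute(
        "SELECT order_id, amount, updated_at FROM orders WHERE updated_at > %s",
        (high_water,),
    )
    for order_id, amount, updated_at in s_cur:
        t_cur.execute(
            "INSERT INTO stg_orders (order_id, amount, updated_at) VALUES (%s, %s, %s)",
            (order_id, round(amount, 2), updated_at),
        )
tgt.commit()
```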
Read more
DP
Posted by Anand Pandey
icon
Bengaluru (Bangalore)
icon
1 - 2 yrs
icon
₹5L - ₹7L / yr
Business Analysis
Windows Azure
PySpark
SQL
Data Warehouse (DWH)
+4 more
RESPONSIBILITIES & OWNERSHIP: THINGS THE ROLE CAN'T MISS
  • Setting KPIs, monitoring key trends, and helping stakeholders by generating insights from the data delivered.
  • Understanding user behaviour and performing root-cause analysis of changes in data trends across different verticals.
  • Get answers to business questions, identify areas of improvement, and identify opportunities for growth.
  • Work on ad-hoc requests for data and analysis.
  • Work with cross-functional teams as and when required to automate reports and create informative dashboards based on problem statements.

WHO COULD BE A GREAT FIT:
Functional Experience
  • 1-2 years of experience working in analytics as a Business or Data Analyst.
  • Analytical mind with a problem-solving aptitude.
  • Familiarity with Microsoft Azure & AWS, PySpark, Python, Databricks, and Metabase; understanding of APIs, data warehouses, ETL, etc.
  • Proficient in writing complex queries in SQL.
  • Experience performing hands-on analysis on data across multiple datasets and databases, primarily using Excel, Google Sheets and R.
  • Ability to work across cross-functional teams proactively.
Read more

Gulf client

Agency job
via Fragma Data Systems by Priyanka U
icon
Remote, Bengaluru (Bangalore)
icon
5 - 9 yrs
icon
₹10L - ₹20L / yr
PowerBI
Data Warehouse (DWH)
SQL
DAX
Power query
Key Skills:
  • Strong knowledge of Power BI (DAX + Power Query + Power BI Service + Power BI Desktop visualisations) and Azure data storage.
  • Should have experience with Power BI mobile dashboards.
  • Strong knowledge of SQL.
  • Good knowledge of DWH concepts.
  • Work as an independent contributor at the client location.
  • Implement access control and impose the required security.
  • Candidate must have very good communication skills.
Read more

MNC

Agency job
via Fragma Data Systems by Priyanka U
icon
Remote, Bengaluru (Bangalore)
icon
5 - 8 yrs
icon
₹12L - ₹20L / yr
PySpark
SQL
Data Warehouse (DWH)
ETL
SQL Developer with 7 years of relevant experience and strong communication skills.
 
Key responsibilities:
 
  • Creating, designing and developing data models
  • Prepare plans for all ETL (Extract/Transformation/Load) procedures and architectures
  • Validating results and creating business reports
  • Monitoring and tuning data loads and queries
  • Develop and prepare a schedule for a new data warehouse
  • Analyze large databases and recommend appropriate optimization for the same
  • Administer all requirements and design various functional specifications for data
  • Provide support to the Software Development Life cycle
  • Prepare various code designs and ensure efficient implementation of the same
  • Evaluate all codes and ensure the quality of all project deliverables
  • Monitor data warehouse work and provide subject matter expertise
  • Hands-on BI practices, data structures, data modeling, SQL skills
  • Minimum 1 year of experience in PySpark (see the sketch below)
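
To make the PySpark and load-tuning items concrete, a minimal sketch that inspects a query plan and aligns in-memory partitioning with the on-disk partition column before a heavy write; paths and columns are hypothetical.

```python
# Minimal sketch: inspect the plan, then write a partition-aligned load.
# Paths and columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuned-load").getOrCreate()

facts = spark.read.parquet("/staging/fact_sales")

# Inspect the physical plan before committing to an expensive write.
facts.explain()

# Repartition by the load date so each output partition is written by a
# small number of tasks, avoiding many tiny files per partition.
(facts.repartition("load_date")
      .write.mode("overwrite")
      .partitionBy("load_date")
      .parquet("/warehouse/fact_sales"))
```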
Read more

Our client is a leading IT service providers in India

Agency job
via GSN Consulting by Mahendrand Deepak
icon
Bengaluru (Bangalore)
icon
5 - 8 yrs
icon
₹15L - ₹20L / yr
TIBCO Spotfire
TIBCO
DXP
Spotfire
Dashboard
+2 more
  • 4+ years of extensive experience in TIBCO Spotfire dashboard development is a must
  • Design and create data visualizations in TIBCO Spotfire
  • Proven experience in delivering Spotfire solutions that advance business goals and needs.
  • Detailed knowledge of TIBCO Spotfire report-developer configuration
  • Experience creating all chart types that exist in Spotfire (scatter, line, bar, combo, pie, etc.) and manipulating every property associated with a visualization (trellis, color, shape, size, etc.)
  • Experience in writing efficient SQL queries and views in relational databases such as Oracle, SQL Server, Postgres and BigQuery (optional).
  • Ability to incorporate multiple data sources into one Spotfire DXP and link that information via data table relations.
  • Experience with Spotfire administrative tasks, load balancing, installation/configuration of servers and clients, and upgrades and patches would be a plus.
  • Strong background in analytical visualizations and building executive dashboards.
  • In-depth knowledge and understanding of BI and data warehouse concepts
Read more
Agency job
via Nu-Pie by Sanjay Biswakarma
icon
Remote, Mumbai, Bengaluru (Bangalore)
icon
8 - 14 yrs
icon
₹9L - ₹19L / yr
Teradata
Teradata DBA
Data Warehouse (DWH)
Teradata SQL Assistant
Teradata Warehouse Miner
+11 more

Role: Teradata Lead

Band: C2

Experience level: Minimum 10 years

Job Description:

This role will lead DBA teams, with DBAs at multiple experience levels, across a mix of Teradata, Oracle and SQL.

Skill Set:

Minimum 10 years of relevant Database and Datawarehouse experience.

Hands on experience of administrating Teradata.

Leading performance analysis and capacity planning, and supporting batch operations and users with their jobs.

Drive implementation of standards and best practices to optimize database utilization and availability.

Hands on with AWS Cloud infrastructure services such as EC2, S3 and network services.

Proficient in Linux system administration relevant to Teradata management.

 

Teradata Specific (Mandatory)

Manage and Operate 24x7 production as well as development databases to ensure maximum availability of system resources.

Responsible for operational activities of a Database Administrator such as System monitoring, User Management, Space Management, Troubleshooting, and Batch/user support.

Perform DBA related tasks in key areas of Performance Management & Reporting, workload management using TASM.

Manage Production/Development databases in areas like Capacity Planning, Performance Monitoring & Tuning, Strategies Defined for Backup/Recovery Techniques, Space/ User/ Security management along With Problem determination and resolution.

Experience with Teradata Workload management & monitoring and query optimization.

Expertise with system monitoring using viewpoint and logs.

Proficient in analysing performance and optimizing at different levels (a DBQL sketch follows this section).

Ability to create advanced system-level capacity reports as well as root-cause analyses.
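
As one hedged illustration of the performance-analysis duties above: a sketch using the teradatasql driver to pull the day's most CPU-expensive queries from the DBQL log. The host and credentials are hypothetical, and read access to DBC.DBQLogTbl is assumed.

```python
# Minimal sketch, assuming the teradatasql driver and DBQL read access:
# list today's top CPU consumers for performance analysis.
import teradatasql

con = teradatasql.connect(host="tdprod", user="dba_user", password="***")
cur = con.cursor()
cur.execute(
    """
    SELECT TOP 10 UserName, QueryText, AMPCPUTime
    FROM DBC.DBQLogTbl
    WHERE CAST(StartTime AS DATE) = CURRENT_DATE
    ORDER BY AMPCPUTime DESC
    """
)
for user_name, query_text, cpu_time in cur.fetchall():
    print(user_name, cpu_time)
con.close()
```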

 

Oracle Specific (Optional)

Database administration: installation of Oracle software on Unix/Linux platforms.

Database lifecycle management - database creation, setup, and decommissioning.

Database event alert monitoring, space management, and user management.

Database upgrades, migrations, and cloning.

Database backup, restore, and recovery using RMAN.

Setup and maintain high-availability and disaster recovery solutions.

Proficient in Standby and Data Guard technology.

Hands-on with OEM Cloud Control (OEM CC).

 

Mandatory Certification:

  • Teradata Vantage Certified Administrator
  • ITIL Foundation
Read more
icon
Remote, Bengaluru (Bangalore)
icon
3 - 10 yrs
icon
₹5L - ₹15L / yr
Big Data
ETL
PySpark
SSIS
Microsoft Windows Azure
+4 more

Must Have Skills:

- Solid knowledge of DWH, ETL and Big Data concepts

- Excellent SQL skills (with knowledge of SQL analytics functions; see the sketch after this list)

- Working experience with an ETL tool, e.g. SSIS / Informatica

- Working experience with Azure or AWS Big Data tools.

- Experience implementing data jobs (batch / real-time streaming)

- Excellent written and verbal communication skills in English; self-motivated with a strong sense of ownership; ready to learn new tools and technologies
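
To make the SQL analytics-functions requirement concrete, a small self-contained example using the standard-library sqlite3 module (window functions need SQLite 3.25+); the data is made up.

```python
# Self-contained illustration of an analytic (window) function:
# a running total per region. Data is made up.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, month TEXT, amount REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("South", "2022-01", 120.0), ("South", "2022-02", 150.0),
     ("North", "2022-01", 90.0), ("North", "2022-02", 110.0)],
)

rows = con.execute(
    """
    SELECT region, month, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY month) AS running_total
    FROM sales
    ORDER BY region, month
    """
).fetchall()

for row in rows:
    print(row)
```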

Preferred Skills:

- Experience with PySpark / Spark SQL

- AWS data tools (AWS Glue, AWS Athena)

- Azure data tools (Azure Databricks, Azure Data Factory)

Other Skills:

- Knowledge of Azure Blob, Azure File Storage, AWS S3, Elasticsearch / Redis Search

- Knowledge of the domain/function (across pricing, promotions and assortment).

- Implementation experience with a schema and data validator framework (Python / Java / SQL).

- Knowledge of DQS and MDM.

Key Responsibilities:

- Independently work on ETL / DWH / Big data Projects

- Gather and process raw data at scale.

- Design and develop data applications using selected tools and frameworks as required and requested.

- Read, extract, transform, stage and load data to selected tools and frameworks as required and requested.

- Perform tasks such as writing scripts, web scraping, calling APIs, write SQL queries, etc.

- Work closely with the engineering team to integrate your work into our production systems.

- Process unstructured data into a form suitable for analysis.

- Analyse processed data.

- Support business decisions with ad hoc analysis as needed.

- Monitoring data performance and modifying infrastructure as needed.

Responsibility: a smart resource with excellent communication skills

 

 
Read more
DP
Posted by Vishal Sharma
icon
Remote, Bengaluru (Bangalore)
icon
3 - 7 yrs
icon
₹5L - ₹10L / yr
Data Warehouse (DWH)
Spark
Data engineering
Python
PySpark
+5 more

Basic Qualifications

- Need to have a working knowledge of AWS Redshift.

- Minimum 1 year of designing and implementing a fully operational, production-grade, large-scale data solution on Snowflake Data Warehouse (a connector sketch follows this list).

- 3 years of hands-on experience building productized data ingestion and processing pipelines using Spark, Scala and Python

- 2 years of hands-on experience designing and implementing production-grade data warehousing solutions

- Expertise in and an excellent understanding of Snowflake internals and the integration of Snowflake with other data processing and reporting technologies

- Excellent presentation and communication skills, both written and verbal

- Ability to problem-solve and architect in an environment with unclear requirements
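
A minimal sketch of the Snowflake side of such a pipeline using the official Python connector: bulk-load staged files with COPY INTO, then verify. Account, warehouse, stage, and table names are hypothetical.

```python
# Minimal sketch with the Snowflake Python connector; all names hypothetical.
import snowflake.connector

con = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="PROD_DWH",
    schema="PUBLIC",
)

cur = con.cursor()
try:
    # COPY INTO is the productized-ingestion workhorse on Snowflake:
    # bulk-load staged files into a target table.
    cur.execute("COPY INTO raw_events FROM @events_stage FILE_FORMAT = (TYPE = 'JSON')")
    cur.execute("SELECT COUNT(*) FROM raw_events")
    print(cur.fetchone()[0])
finally:
    cur.close()
    con.close()
```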

Read more
DP
Posted by Gowshini Maheswaran
icon
Bengaluru (Bangalore)
icon
10 - 15 yrs
icon
₹15L - ₹20L / yr
Big Data
Data Warehouse (DWH)
Apache Kafka
Spark
Hadoop
+23 more
Hypersonix.ai is disrupting the Business Intelligence and Analytics space with AI, ML and NLP capabilities to drive specific business insights with a conversational user experience. Hypersonix.ai has been built ground-up with new-age technology to simplify the consumption of data for our customers in restaurants, hospitality and other industry verticals.

Hypersonix.ai is seeking a Data Evangelist who can work closely with customers to understand the data sources, acquire data and drive product success by delivering insights based on customer needs.

Primary Responsibilities :

- Lead and deliver complete application lifecycle design, development, deployment, and support for actionable BI and Advanced Analytics solutions

- Design and develop data models and ETL process for structured and unstructured data that is distributed across multiple Cloud platforms

- Develop and deliver solutions with data streaming capabilities for a large volume of data

- Design, code and maintain parts of the product and drive customer adoption

- Build data acquisition strategy to onboard customer data with speed and accuracy

- Working both independently and with team members to develop, refine, implement, and scale ETL processes

- On-going support and maintenance of live-clients for their data and analytics needs

- Defining the data automation architecture to drive self-service data load capabilities

Required Qualifications :

- Bachelors/Masters/Ph.D. in Computer Science, Information Systems, Data Science, Artificial Intelligence, Machine Learning or related disciplines

- 10+ years of experience guiding the development and implementation of Data architecture in structured, unstructured, and semi-structured data environments.

- Highly proficient in Big Data, data architecture, data modeling, data warehousing, data wrangling, data integration, data testing and application performance tuning

- Experience with data engineering tools and platforms such as Kafka, Spark, Databricks, Flink, Storm, Druid and Hadoop

- Strong with hands-on programming and scripting for Big Data ecosystem (Python, Scala, Spark, etc)

- Experience building batch and streaming ETL data pipelines using workflow management tools like Airflow, Luigi, NiFi, Talend, etc. (a DAG sketch follows this list)

- Familiarity with cloud-based platforms like AWS, Azure or GCP

- Experience with cloud data warehouses like Redshift and Snowflake

- Proficient in writing complex SQL queries.

- Excellent communication skills and prior experience of working closely with customers

- Data-savvy; loves to understand large data trends and is obsessed with data analysis

- Desire to learn about, explore, and invent new tools for solving real-world problems using data
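
Since Airflow is named above, here is a minimal sketch of a batch ETL pipeline expressed as an Airflow 2.x DAG; the task bodies are stubs, and the DAG and task names are hypothetical.

```python
# Minimal Airflow DAG sketch: a daily extract -> transform -> load chain.
# Task bodies are stubs; names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from the customer source")

def transform():
    print("clean and model the extracted data")

def load():
    print("load curated tables into the warehouse")

with DAG(
    dag_id="customer_onboarding_etl",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Declare the dependency chain.
    t_extract >> t_transform >> t_load
```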

Desired Qualifications :

- Cloud computing experience, Amazon Web Services (AWS)

- Prior experience in Data Warehousing concepts, multi-dimensional data models

- Full command of Analytics concepts including Dimension, KPI, Reports & Dashboards

- Prior experience in managing client implementation of Analytics projects

- Knowledge and prior experience of using machine learning tools
Read more
Agency job
via Nu-Pie by Sanjay Biswakarma
icon
Remote, Bengaluru (Bangalore)
icon
4 - 8 yrs
icon
₹4L - ₹16L / yr
Big Data
Hadoop
Data engineering
data engineer
Google Cloud Platform (GCP)
+14 more
Job Description
Job Title: Data Engineer
Tech Job Family: DACI
• Bachelor's Degree in Engineering, Computer Science, CIS, or related field (or equivalent work experience in a related field)
• 2 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering
• 1 year of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC)
Preferred Qualifications:
• Master's Degree in Computer Science, CIS, or related field
• 2 years of IT experience developing and implementing business systems within an organization
• 4 years of experience working with defect or incident tracking software
• 4 years of experience with technical documentation in a software development environment
• 2 years of experience working with an IT Infrastructure Library (ITIL) framework
• 2 years of experience leading teams, with or without direct reports
• Experience with application and integration middleware
• Experience with database technologies
Data Engineering
• 2 years of experience in Hadoop or any Cloud Big Data components (specific to the Data Engineering role)
• Expertise in Java/Scala/Python, SQL, scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka or equivalent Cloud Big Data components (specific to the Data Engineering role); a streaming sketch follows this section
BI Engineering
• Expertise in MicroStrategy/Power BI/SQL, scripting, Teradata or an equivalent RDBMS, Hadoop (OLAP on Hadoop), dashboard development, mobile development (specific to the BI Engineering role)
Platform Engineering
• 2 years of experience in Hadoop, NoSQL, RDBMS or any Cloud Big Data components, Teradata, MicroStrategy (specific to the Platform Engineering role)
• Expertise in Python, SQL, scripting, Teradata, Hadoop utilities like Sqoop, Hive, Pig, MapReduce, Spark, Ambari, Ranger, Kafka or equivalent Cloud Big Data components (specific to the Platform Engineering role)
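
As a sketch of the Spark Streaming + Kafka combination referenced above: a Structured Streaming job that counts events per minute from a topic. Broker and topic names are hypothetical, and the spark-sql-kafka connector package is assumed to be on the classpath.

```python
# Minimal Structured Streaming sketch: Kafka source -> windowed count -> console.
# Broker and topic names are hypothetical; requires the spark-sql-kafka package.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "store-events")
    .load()
)

# Kafka delivers bytes; cast the payload and count events per minute.
counts = (
    events.selectExpr("CAST(value AS STRING) AS value", "timestamp")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```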
Lowe’s is an equal opportunity employer and administers all personnel practices without regard to race, color, religion, sex, age, national origin, disability, sexual orientation, gender identity or expression, marital status, veteran status, genetics or any other category protected under applicable law.
Read more
Agency job
via Nu-Pie by Jerrin Thomas
icon
Bengaluru (Bangalore)
icon
5 - 8 yrs
icon
₹8L - ₹13L / yr
ETL
Data Warehouse (DWH)
ETL Developer
Relational Database (RDBMS)
Spark
+8 more

Minimum of 4 years' experience working on DW/ETL projects and expert hands-on working knowledge of ETL tools.

Experience with Data Management & data warehouse development

Star schemas, Data Vaults, RDBMS, and ODS

Change Data capture

Slowly changing dimensions (a Type 2 sketch follows this section)

Data governance

Data quality

Partitioning and tuning

Data Stewardship

Survivorship

Fuzzy Matching

Concurrency

Vertical and horizontal scaling

ELT, ETL

Spark, Hadoop, MPP, RDBMS

Experience with DevOps architecture, implementation and operation

Hands-on working knowledge of Unix/Linux

Building complex SQL queries. Expert SQL and data analysis skills, with the ability to debug and fix data issues.

Complex ETL program design coding

Experience in Shell Scripting, Batch Scripting.

Good communication (oral & written) and inter-personal skills

Work closely with business teams to understand their business needs, and participate in requirements gathering while creating artifacts and seeking business approval.

Help the business define new requirements: participate in end-user meetings to derive and define business requirements, propose cost-effective solutions for data analytics, and familiarize the team with customer needs, specifications, design targets and techniques to support task performance and delivery.

Propose good designs and solutions, and adhere to the best design and standard practices.

Review and propose industry-best tools and technology for ever-changing business rules and data sets. Conduct proofs of concept (POCs) with new tools and technologies to derive convincing benchmarks.

Prepare the plan, design and document the architecture, high-level topology design and functional design; review the same with customer IT managers and provide detailed knowledge to the development team to familiarize them with customer requirements, specifications, design standards and techniques.

Review code developed by other programmers; mentor, guide and monitor their work, ensuring adherence to programming and documentation policies.

Work with functional business analysts to ensure that application programs are functioning as defined.

Capture user feedback/comments on the delivered systems and document them for the client and project manager's review. Review all deliverables before final delivery to the client for quality adherence.
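
To make the slowly-changing-dimensions item concrete, a minimal Type 2 sketch: expire the current row when a tracked attribute changes, then insert the new version. The statements use PostgreSQL-style UPDATE ... FROM, `con` is any DB-API connection to the warehouse, and all table and column names are hypothetical.

```python
# Minimal SCD Type 2 sketch over a DB-API connection; names hypothetical.

EXPIRE_CHANGED = """
UPDATE dim_customer AS d
SET valid_to = CURRENT_DATE, is_current = 0
FROM stg_customer s
WHERE d.customer_id = s.customer_id
  AND d.is_current = 1
  AND d.address <> s.address
"""

INSERT_NEW_VERSIONS = """
INSERT INTO dim_customer (customer_id, address, valid_from, valid_to, is_current)
SELECT s.customer_id, s.address, CURRENT_DATE, NULL, 1
FROM stg_customer s
LEFT JOIN dim_customer d
  ON d.customer_id = s.customer_id AND d.is_current = 1
WHERE d.customer_id IS NULL OR d.address <> s.address
"""

def load_scd2(con):
    cur = con.cursor()
    cur.execute(EXPIRE_CHANGED)       # close out changed current rows
    cur.execute(INSERT_NEW_VERSIONS)  # open new versions and brand-new keys
    con.commit()
```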

Technologies (Select based on requirement)

Databases - Oracle, Teradata, Postgres, SQL Server, Big Data, Snowflake, or Redshift

Tools – Talend, Informatica, SSIS, Matillion, Glue, or Azure Data Factory

Utilities for bulk loading and extracting

Languages – SQL, PL/SQL, T-SQL, Python, Java, or Scala

JDBC/ODBC, JSON

Data Virtualization Data services development

Service Delivery - REST, Web Services

Data Virtualization Delivery – Denodo

 

ELT, ETL

Cloud certification Azure

Complex SQL Queries

 

Data Ingestion, Data Modeling (domain), Consumption (RDBMS)
Read more
icon
Pune, Bengaluru (Bangalore)
icon
5 - 10 yrs
icon
₹10L - ₹20L / yr
DevOps
CI/CD
Software deployment
Automation
Python
+16 more
What you will do
• Develop and maintain CI/CD tools to build and deploy scalable web and responsive applications in production environment
• Design and implement monitoring solutions that identify both system bottlenecks and production issues (a CloudWatch sketch follows this section)
• Design and implement workflows for continuous integration, including provisioning, deployment, testing, and version control of the software.
• Develop self-service solutions for the engineering team in order to deliver sites/software with great speed and quality
o Automating Infra creation
o Provide easy to use solutions to engineering team
• Conduct research on, test, and implement new metrics-collection systems that can be reused and applied as engineering best practices
o Update our processes and design new processes as needed.
o Establish DevOps Engineer team best practices.
o Stay current with industry trends and source new ways for our business to improve.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Manage timely resolution of all critical and/or complex problems
• Maintain, monitor, and establish best practices for containerized environments.
• Mentor new DevOps engineers
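
As a small illustration of the monitoring bullet above: a sketch that publishes a custom deployment metric to CloudWatch with boto3, which dashboards and alarms can then track. The namespace and dimension values are hypothetical.

```python
# Minimal sketch: publish a custom metric to CloudWatch for monitoring.
# Namespace, dimension, and value are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_data(
    Namespace="Platform/Deployments",
    MetricData=[
        {
            "MetricName": "DeploymentDurationSeconds",
            "Dimensions": [{"Name": "Service", "Value": "checkout-api"}],
            "Value": 142.0,
            "Unit": "Seconds",
        }
    ],
)
```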
What you will bring
• The desire to work in fast-paced environment.
• 5+ years’ experience building, maintaining, and deploying production infrastructures in AWS or other cloud providers
• Containerization experience with applications deployed on Docker and Kubernetes
• Understanding of NoSQL and Relational Database with respect to deployment and horizontal scalability
• Demonstrated knowledge of Distributed and Scalable systems Experience with maintaining and deployment of critical infrastructure components through Infrastructure-as-Code and configuration management tooling across multiple environments (Ansible, Terraform etc)
• Strong knowledge of DevOps and CI/CD pipeline (GitHub, BitBucket, Artifactory etc)
• Strong understanding of cloud and infrastructure components (server, storage, network, data, and applications) to deliver end-to-end cloud Infrastructure architectures and designs and recommendations
o AWS services like S3, CloudFront, Kubernetes, RDS, Data Warehouses to come up with architecture/suggestions for new use cases.
• Test our system integrity, implemented designs, application developments and other processes related to infrastructure, making improvements as needed
Good to have
• Experience with code quality tools, static or dynamic code analysis and compliance and undertaking and resolving issues identified from vulnerability and compliance scans of our infrastructure
• Good knowledge of REST/SOAP/JSON web service API implementation
Read more

MNC

Agency job
via Fragma Data Systems by Priyanka U
icon
Bengaluru (Bangalore)
icon
5 - 9 yrs
icon
₹15L - ₹25L / yr
Data Warehouse (DWH)
dwh
warehousing
Datawarehousing
SQL
+1 more
Work Days: Sunday through Thursday
Week off: Friday & Saturday
Day Shift.


Key responsibilities:

  • Creating, designing and developing data models
  • Prepare plans for all ETL (Extract/Transformation/Load) procedures and architectures
  • Validating results and creating business reports
  • Monitoring and tuning data loads and queries
  • Develop and prepare a schedule for a new data warehouse
  • Analyze large databases and recommend appropriate optimization for the same
  • Administer all requirements and design various functional specifications for data
  • Provide support to the Software Development Life cycle
  • Prepare various code designs and ensure efficient implementation of the same
  • Evaluate all codes and ensure the quality of all project deliverables
  • Monitor data warehouse work and provide subject matter expertise
  • Hands-on BI practices, data structures, data modeling, SQL skills


Hard Skills for a Data Warehouse Developer:

  • Hands-on experience with ETL tools e.g., DataStage, Informatica, Pentaho, Talend
  • Sound knowledge of SQL
  • Experience with SQL databases such as Oracle, DB2, and SQL
  • Experience using Data Warehouse platforms e.g., SAP, Birst
  • Experience designing, developing, and implementing Data Warehouse solutions
  • Project management and system development methodology
  • Ability to proactively research solutions and best practices

Soft Skills for Data Warehouse Developers:

  • Excellent Analytical skills
  • Excellent verbal and written communications
  • Strong organization skills
  • Ability to work on a team, as well as independently
Read more
DP
Posted by Jennifer Jocelyn
icon
Bengaluru (Bangalore)
icon
9 - 15 yrs
icon
₹50L - ₹70L / yr
Technical Architecture
Team Management
Web Development
Data engineering
Team building
+15 more
Main responsibilities:
+ Management of a growing technical team
+ Continued technical architecture design based on the product roadmap
+ Annual performance reviews
+ Work with DevOps to design and implement the product infrastructure

Strategic:
+ Testing strategy
+ Security policy
+ Performance and performance-testing policy
+ Logging policy

Experience:
+ 9-15 years of experience, including managing teams of developers
+ Technical and architectural expertise, having evolved a growing code base, technology stack and architecture over many years
+ Have delivered distributed cloud applications
+ Understand the value of high-quality code and can effectively manage technical debt
+ Stakeholder management
+ Work experience in consumer-focused early-stage (Series A, B) startups is a big plus

Other innate skills:
+ Great motivator of people and able to lead by example
+ Understand how to get the most out of people
+ Delivery of products to tight deadlines, but with a focus on high-quality code
+ Up-to-date knowledge of technical applications
Read more
DP
Posted by Rahul Malani
icon
Bengaluru (Bangalore)
icon
5 - 10 yrs
icon
₹10L - ₹20L / yr
Data Warehouse (DWH)
Apache Hive
ETL
DWH Cloud
Hadoop
+3 more
The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables will include the development of Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key Responsibilities:
- Create a GRAND Data Lake and Warehouse which pools all the data from different regions and stores of GRAND in GCC
- Ensure source data quality measurement, enrichment and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure

Skills Needed:
- Very strong in SQL. Demonstrated experience with RDBMS; Unix shell scripting preferred (e.g., SQL, Postgres, MongoDB etc.)
- Experience with UNIX and comfortable working with the shell (bash or KRON preferred)
- Good understanding of data warehousing concepts. Big Data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments.
- Working with data delivery teams to set up new Hadoop users. This includes setting up Linux users and setting up and testing HDFS, Hive, Pig and MapReduce access for the new users.
- Cluster maintenance, as well as creation and removal of nodes, using tools like Ganglia, Nagios, Cloudera Manager Enterprise, and others.
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
- Screen Hadoop cluster job performance and do capacity planning
- Monitor Hadoop cluster connectivity and security
- File system management and monitoring.
- HDFS support and maintenance.
- Collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
- Define, develop, document and maintain Hive-based ETL mappings and scripts
Read more