
50+ Data Warehouse (DWH) Jobs in India

Apply to 50+ Data Warehouse (DWH) Jobs on CutShort.io. Find your next job, effortlessly. Browse Data Warehouse (DWH) Jobs and apply today!

Branch International


Posted by Reshika Mendiratta
Remote only
7yrs+
₹50L - ₹70L / yr
Data Structures
Algorithms
Object Oriented Programming (OOP)
ETL
ETL architecture
+5 more

Branch Overview


Imagine a world where every person has improved access to financial services. People could start new businesses, pay for their children’s education, cover their emergency medical bills – the possibilities to improve life are endless. 


Branch is a global technology company revolutionizing financial access for millions of underserved banking customers today across Africa and India. By leveraging the rapid adoption of smartphones, machine learning and other technology, Branch is pioneering new ways to improve access and value for those overlooked by banks. From instant loans to market-leading investment yields, Branch offers a variety of products that help our customers be financially empowered.


Branch’s mission-driven team is led by the co-founders of Kiva.org and one of the earliest product leaders of PayPal. Branch has raised over $100 million from leading Silicon Valley investors, including Andreessen Horowitz (a16z) and Visa. 

 

With over 32 million downloads, Branch is one of the most popular finance apps in the world.

 

Job Overview

Branch launched in India in early 2019 and has seen rapid adoption and growth. In 2020 we started building out a full Engineering team in India to accelerate our success here. This team is working closely with our engineering team (based in the United States, Nigeria, and Kenya) to strengthen the capabilities of our existing product and build out new product lines for the company.


You will work closely with our Product and Data Science teams to design and maintain multiple technologies, including our API backend, credit scoring and underwriting systems, payments integrations, and operations tools. We face numerous interesting technical challenges ranging from maintaining complex financial systems to accessing and processing creative data sources for our algorithmic credit model. 


As a company, we are passionate about our customers, fearless in the face of barriers, and driven by data. As an engineering team, we value bottom-up innovation and decentralized decision-making: We believe the best ideas can come from anyone in the company, and we are working hard to create an environment where everyone feels empowered to propose solutions to the challenges we face. We are looking for individuals who thrive in a fast-moving, innovative, and customer-focused setting.


Responsibilities

  • Make significant contributions to Branch’s data platform including data models, transformations, warehousing, and BI systems by bringing in best practices.
  • Build customer facing and internal products and APIs with industry best practices around security and performance in mind.
  • Influence and shape the company’s technical and product roadmap by providing timely and accurate inputs and owning various outcomes.
  • Collaborate with peers in other functional areas (Machine Learning, DevOps, etc.) to identify potential growth areas and systems needed.
  • Guide and mentor junior engineers around you.
  • Scale our systems to ever-growing levels of traffic and handle complexity.


Qualifications

  • You have strong experience (8+ years) in designing, coding, and shipping data and backend software for web-based or mobile products.
  • Experience coordinating and collaborating with various business stakeholders and company leadership on critical functional decisions and technical roadmap.
  • You have strong knowledge of software development fundamentals, including relevant background in computer science fundamentals, distributed systems, data storage and processing, and agile development methodologies.
  • Have experience designing maintainable and scalable data architecture for ETL and BI purposes.
  • You are able to utilize your knowledge and expertise to code and ship quality products in a timely manner.
  • You are pragmatic and combine a strong understanding of technology and product needs to arrive at the best solution for a given problem.
  • You are highly entrepreneurial and thrive in taking ownership of your own impact. You take the initiative to solve problems before they arise.
  • You are an excellent collaborator & communicator. You know that startups are a team sport. You listen to others, aren’t afraid to speak your mind and always try to ask the right questions. 
  • You are excited by the prospect of working in a distributed team and company, working with teammates from all over the world.

Benefits of Joining

  • Mission-driven, fast-paced and entrepreneurial environment
  • Competitive salary and equity package
  • A collaborative and flat company culture
  • Remote first, with the option to work in-person occasionally
  • Fully-paid Group Medical Insurance and Personal Accidental Insurance
  • Unlimited paid time off including personal leave, bereavement leave, sick leave
  • Fully paid parental leave - 6 months maternity leave and 3 months paternity leave
  • Monthly WFH stipend alongside a one-time home office set-up budget
  • $500 annual professional development budget 
  • Discretionary trips to our offices across the globe, with global travel medical insurance 
  • Team meals and social events, virtual and in-person

Branch International is an Equal Opportunity Employer. The company does not and will not discriminate in employment on any basis prohibited by applicable law. We’re looking for more than just qualifications, so if you’re unsure that you meet the criteria, please do not hesitate to apply!

 

DataGrokr


Posted by Reshika Mendiratta
Bengaluru (Bangalore)
5yrs+
Up to ₹30L / yr (varies)
Data engineering
Python
SQL
ETL
Data Warehouse (DWH)
+12 more

Lightning Job By Cutshort⚡

 

As part of this feature, you can expect status updates about your application and replies within 72 hours (once the screening questions are answered).


About DataGrokr

DataGrokr (https://www.datagrokr.com) is a cloud native technology consulting organization providing the next generation of big data analytics, cloud and enterprise solutions. We solve complex technology problems for our global clients who rely on us for our deep technical knowledge and delivery excellence.

If you are unafraid of technology, believe in your learning ability and are looking to work amongst smart, driven colleagues whom you can look up to and learn from, you might want to check us out. 


About the role

We are looking for a Senior Data Engineer to join our growing engineering team. As a member of the team,

• You will work on enterprise data platforms, architect and implement data lakes both on-prem and in the cloud.

• You will be responsible for evolving technical architecture, design and implementation of data solutions using a variety of big data technologies. You will work extensively on all major public cloud platforms - AWS, Azure and GCP.

• You will work with senior technical architects on our client side to evolve an effective technology architecture and development strategy to implement the solution.

• You will work with extremely talented peers and follow modern engineering practices using agile methodologies.

• You will coach, mentor and lead other engineers and provide guidance to ensure the quality and consistency of the solution.


Must-have skills and attitudes:

• Passion for data engineering, and in-depth knowledge of some of the following technologies – SQL (expert level), Python (expert level), Spark (intermediate level), and the big data stack of AWS or GCP.

• Hands-on experience in data wrangling, data munging and ETL. Should be able to source data from anywhere and transform data to any shape using SQL, Python or Spark (see the sketch after this list).

• Hands-on experience working with varied data formats such as XML, JSON and Avro.

• Ability to create data models and architect data warehouse components

• Experience with version control (Git, Bitbucket, etc.)

• Strong understanding of Agile, experience with CI/CD pipelines and processes

• Ability to communicate with technical as well as non-technical audiences

• Experience collaborating with various stakeholders

• Experience leading scrum teams and participating in sprint grooming and planning sessions, work/effort sizing and estimation
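To make the data wrangling expectation above concrete, here is a minimal PySpark sketch of the kind of source-anything, reshape-anything transformation this role involves. The bucket paths, column names and aggregation are hypothetical illustrations, not details from the posting.

    # Minimal PySpark ETL sketch: read raw JSON, reshape, write curated Parquet.
    # All paths and column names are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("wrangling_sketch").getOrCreate()

    # Source: semi-structured JSON events (could equally be CSV, Parquet or a JDBC table)
    raw = spark.read.json("s3://example-bucket/raw/orders/")

    # Transform: filter, derive columns and aggregate to the shape analysts need
    daily_revenue = (
        raw.filter(F.col("status") == "COMPLETED")
           .withColumn("order_date", F.to_date("created_at"))
           .groupBy("order_date", "country")
           .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
    )

    # Load: write partitioned Parquet for a warehouse external table to sit on
    daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-bucket/curated/daily_revenue/"
    )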


Desired Skills & Experience:

• At least 5 years of industry experience

• Working knowledge of any of the following - AWS Big Data Stack (S3, Redshift, Athena, Glue, etc.), GCP Big Data Stack (Cloud Storage, Workflow, Dataflow, Cloud Functions, BigQuery, Pub/Sub, etc.).

• Working knowledge of traditional enterprise data warehouse architectures and migrating them to the Cloud.

• Experience with a data visualization tool (Tableau, Power BI, etc.)

• Experience with JIRA, Azure DevOps, etc.


How will DataGrokr support you in your growth:

• You will be groomed and mentored by senior leaders to take on leadership positions in the company

• You will be actively encouraged to attain certifications, lead technical workshops and conduct meetups to grow your own technology acumen and personal brand

• You will work in an open culture that promotes commitment over compliance, individual responsibility over rules and bringing out the best in everyone.

Vola Finance


Posted by Reshika Mendiratta
Bengaluru (Bangalore)
3yrs+
Up to ₹20L / yr (varies)
Amazon Web Services (AWS)
Data engineering
Spark
SQL
Data Warehouse (DWH)
+4 more

Lightning Job By Cutshort⚡

 

As part of this feature, you can expect status updates about your application and replies within 72 hours (once the screening questions are answered).


Roles & Responsibilities


Basic Qualifications:

● The position requires a four-year degree from an accredited college or university.

● Three years of data engineering, AWS architecture, and security experience.


Top candidates will also have:

Proven, strong understanding of and/or experience in many of the following:

● Experience designing scalable AWS architectures.

● Ability to create modern data pipelines and data processing using AWS PaaS components (Glue, etc.) or open-source tools (Spark, HBase, Hive, etc.); see the sketch at the end of this list.

● Ability to develop SQL structures that support high volumes and scalability using RDBMS such as SQL Server, MySQL, Aurora, etc.

● Ability to model and design modern data structures, SQL/NoSQL databases, data lakes, and cloud data warehouses.

● Experience creating network architectures for secure, scalable solutions.

● Experience with message brokers such as Kinesis, Kafka, RabbitMQ, AWS SQS, AWS SNS, and Apache ActiveMQ. Hands-on experience with AWS serverless architectures such as Glue, Lambda, Redshift, etc.

● Working knowledge of load balancers, AWS Shield, AWS GuardDuty, VPCs, subnets, network gateways, Route 53, etc.

● Knowledge of building disaster management systems and security log notification systems.

● Knowledge of building scalable microservice architectures with AWS.

● Ability to create a framework for monthly security checks, and broad knowledge of AWS services.

● Deploying software using CI/CD tools such as CircleCI, Jenkins, etc.

● ML/AI model deployment and production maintenance experience is mandatory.

● Experience with API standards and tools such as REST, Swagger, Postman, and Assertible.

● Version management tools such as GitHub, Bitbucket, and GitLab.

● Debugging and maintaining software in Linux or Unix platforms.

● Test-driven development.

● Experience building transactional databases.

● Python and PySpark programming experience.

● Must have experience engineering solutions in AWS.

● Working AWS experience; AWS certification is required prior to hiring.

● Experience working in Agile/Kanban frameworks.

● Must demonstrate solid knowledge of computer science fundamentals like data structures & algorithms.

● Passion for technology and an eagerness to contribute to a team-oriented environment.

● Demonstrated leadership on medium to large-scale projects impacting strategic priorities.

● Bachelor’s degree in computer science, electrical engineering, or a related field is required.
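As a hedged illustration of the Glue-based pipelines mentioned in this list, the skeleton below shows the usual shape of a PySpark job written against the AWS Glue libraries (which are only available inside a Glue job run). The catalog database, table and bucket names are hypothetical.

    # Sketch of an AWS Glue PySpark job: catalog table in, curated Parquet out.
    import sys
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read from the Glue Data Catalog (hypothetical database/table)
    orders = glue_context.create_dynamic_frame.from_catalog(
        database="example_db", table_name="raw_orders"
    )

    # Light cleanup with DynamicFrame transforms
    cleaned = orders.drop_fields(["_corrupt_record"]).rename_field("amt", "amount")

    # Write Parquet to S3 for downstream Athena / Redshift Spectrum queries
    glue_context.write_dynamic_frame.from_options(
        frame=cleaned,
        connection_type="s3",
        connection_options={"path": "s3://example-bucket/curated/orders/"},
        format="parquet",
    )
    job.commit()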

Technogen India PvtLtd


Posted by Mounika G
Hyderabad
11 - 16 yrs
₹24L - ₹27L / yr
Data Warehouse (DWH)
Informatica
ETL
Amazon Web Services (AWS)
SQL
+1 more

Daily and monthly responsibilities

  • Review and coordinate with business application teams on data delivery requirements.
  • Develop estimation and proposed delivery schedules in coordination with development team.
  • Develop sourcing and data delivery designs.
  • Review data model, metadata and delivery criteria for solution.
  • Review and coordinate with team on test criteria and performance of testing.
  • Contribute to the design, development and completion of project deliverables.
  • Complete in-depth data analysis and contribute to strategic efforts.
  • Develop a complete understanding of how we manage data, with a focus on improving how data is sourced and managed across multiple business areas.

 

Basic Qualifications

  • Bachelor’s degree.
  • 5+ years of data analysis working with business data initiatives.
  • Knowledge of Structured Query Language (SQL) and use in data access and analysis.
  • Proficient in data management including data analytical capability.
  • Excellent verbal and written communication skills, and high attention to detail.
  • Experience with Python.
  • Presentation skills in demonstrating system design and data analysis solutions.


Fatakpay


Posted by Ajit Kumar
Mumbai
2 - 4 yrs
₹8L - ₹12L / yr
SQL
Python
Problem solving
Data Warehouse (DWH)
Excel VBA

1. Bridging the gap between IT and the business using data analytics to assess processes, determine requirements and deliver data-driven recommendations and reports to executives and stakeholders.


2. Ability to search, extract, transform and load data from various databases, cleanse and refine data until it is fit-for-purpose


3. Work within various time constraints to meet critical business needs, while measuring and identifying activities performed and ensuring service requirements are met


4. Prioritization of issues to meet deadlines while ensuring high-quality delivery


5. Ability to pull data and to perform ad hoc reporting and analysis as needed


6. Ability to adapt quickly to new and changing technical environments as well as strong analytical and problem-solving abilities


7. Strong interpersonal and presentation skills


SKILLS:


1. Advanced skills in designing reporting interfaces and interactive dashboards in Google Sheets and Excel


2. Experience working with senior decision-makers


3. Strong advanced SQL/MySQL and Python skills, with the ability to fetch data from the data warehouse as per the stakeholder's requirements (see the sketch after this list)


4. Good knowledge of and experience in Excel VBA and advanced Excel


5. Good experience building analytical Tableau dashboards as per the stakeholder's reporting requirements


6. Strong communication/interpersonal skills
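As a hedged sketch of skill 3 above, fetching a stakeholder-requested cut of warehouse data into Python for ad hoc reporting typically looks like the snippet below. The connection string, table and column names are hypothetical placeholders.

    # Pull an ad hoc report from a MySQL warehouse into pandas, then export it.
    # Connection string, table and columns are hypothetical placeholders.
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("mysql+pymysql://user:password@warehouse-host:3306/analytics")

    query = """
        SELECT loan_product, DATE(disbursed_at) AS day, SUM(amount) AS disbursed
        FROM loans
        WHERE disbursed_at >= %(start)s
        GROUP BY loan_product, day
    """
    df = pd.read_sql(query, engine, params={"start": "2024-01-01"})

    # Hand off to Google Sheets / Excel reporting, e.g. as a CSV export
    df.to_csv("daily_disbursals.csv", index=False)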


PERSONA:


1. Experience in working on ad hoc requirements


2. Ability to adapt to shifting priorities


3. Experience working in the fintech or e-commerce industry is preferable


4. Engineering background with 2+ years of experience as a Business Analyst for finance processes

AxionConnect Infosolutions Pvt Ltd
Posted by Shweta Sharma
Pune, Bengaluru (Bangalore), Hyderabad, Nagpur, Chennai
5.5 - 7 yrs
₹20L - ₹25L / yr
Django
Flask
Snowflake
Snow flake schema
SQL
+4 more

Job Location: Hyderabad / Bangalore / Chennai / Pune / Nagpur

Notice period: Immediate - 15 days

 

Python Developer with Snowflake

 

Job Description :


  1. 5.5+ years of strong Python development experience with Snowflake.
  2. Strong hands-on experience with SQL and the ability to write complex queries.
  3. Strong understanding of how to connect to Snowflake using Python and ability to handle any type of file (see the sketch after this list).
  4. Development of data analysis and data processing engines using Python.
  5. Good experience in data transformation using Python.
  6. Experience in Snowflake data loads using Python.
  7. Experience in creating user-defined functions in Snowflake.
  8. SnowSQL implementation.
  9. Knowledge of query performance tuning will be an added advantage.
  10. Good understanding of data warehouse (DWH) concepts.
  11. Ability to interpret/analyze business requirements and functional specifications.
  12. Good to have: dbt, Fivetran, and AWS knowledge.
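For item 3 in the list above, a minimal sketch of connecting to Snowflake from Python and loading a file looks like the following. The account, credentials and object names are hypothetical placeholders, not details from the posting.

    # Connect to Snowflake from Python, stage a file, and load it into a table.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="xy12345.ap-south-1",   # hypothetical account locator
        user="ETL_USER",
        password="***",
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="STAGING",
    )
    cur = conn.cursor()
    try:
        # Stage a local CSV on the table's internal stage, then COPY it in
        cur.execute("PUT file:///tmp/orders.csv @%ORDERS")
        cur.execute(
            "COPY INTO ORDERS FROM @%ORDERS "
            "FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)"
        )
        # Quick verification query
        cur.execute("SELECT COUNT(*) FROM ORDERS")
        print(cur.fetchone())
    finally:
        cur.close()
        conn.close()
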
Personal Care Product Manufacturing
Mumbai
3 - 8 yrs
₹12L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+9 more

DATA ENGINEER


Overview

They started with a singular belief - what is beautiful cannot and should not be defined in marketing meetings. It's defined by the regular people like us, our sisters, our next-door neighbours, and the friends we make on the playground and in lecture halls. That's why we stand for people-proving everything we do. From the inception of a product idea to testing the final formulations before launch, our consumers are a part of each and every process. They guide and inspire us by sharing their stories with us. They tell us not only about the product they need and the skincare issues they face but also the tales of their struggles, dreams and triumphs. Skincare goes deeper than skin. It's a form of self-care for many. Wherever someone is on this journey, we want to cheer them on through the products we make, the content we create and the conversations we have. What we wish to build is more than a brand. We want to build a community that grows and glows together - cheering each other on, sharing knowledge, and ensuring people always have access to skincare that really works.

 

Job Description:

We are seeking a skilled and motivated Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, developing, and maintaining the data infrastructure and systems that enable efficient data collection, storage, processing, and analysis. You will collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to implement data pipelines and ensure the availability, reliability, and scalability of our data platform.


Responsibilities:

Design and implement scalable and robust data pipelines to collect, process, and store data from various sources.

Develop and maintain data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.

Optimize and tune the performance of data systems to ensure efficient data processing and analysis.

Collaborate with data scientists and analysts to understand data requirements and implement solutions for data modeling and analysis.

Identify and resolve data quality issues, ensuring data accuracy, consistency, and completeness.

Implement and maintain data governance and security measures to protect sensitive data.

Monitor and troubleshoot data infrastructure, perform root cause analysis, and implement necessary fixes.

Stay up-to-date with emerging technologies and industry trends in data engineering and recommend their adoption when appropriate.


Qualifications:

Bachelor’s or higher degree in Computer Science, Information Systems, or a related field.

Proven experience as a Data Engineer or similar role, working with large-scale data processing and storage systems.

Strong programming skills in languages such as Python, Java, or Scala.

Experience with big data technologies and frameworks like Hadoop, Spark, or Kafka.

Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).

Familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery).

Solid understanding of data modeling, data warehousing, and ETL principles.

Knowledge of data integration techniques and tools (e.g., Apache NiFi, Talend, or Informatica).

Strong problem-solving and analytical skills, with the ability to handle complex data challenges.

Excellent communication and collaboration skills to work effectively in a team environment.


Preferred Qualifications:

Advanced knowledge of distributed computing and parallel processing.

Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink).

Familiarity with machine learning concepts and frameworks (e.g., TensorFlow, PyTorch).

Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).

Experience with data visualization and reporting tools (e.g., Tableau, Power BI).

Certification in relevant technologies or data engineering disciplines.



Data Semantics
Posted by Deepu Vijayan
Remote, Hyderabad, Bengaluru (Bangalore)
4 - 15 yrs
₹3L - ₹30L / yr
ETL
Informatica
Data Warehouse (DWH)
SQL Server Analysis Services (SSAS)
SQL Server Reporting Services (SSRS)
+4 more

It's regarding a permanent opening with Data Semantics

Data Semantics 


We are a product-based company and a Microsoft Gold Partner.

Data Semantics is an award-winning Data Science company with a vision to empower every organization to harness the full potential of its data assets. In order to achieve this, we provide Artificial Intelligence, Big Data and Data Warehousing solutions to enterprises across the globe. Data Semantics was listed as one of the Top 20 Analytics companies by Silicon India in 2018 and as one of the Top 20 BI companies by CIO Review India in 2014. We are headquartered in Bangalore, India, with offices in 6 global locations including the USA, United Kingdom, Canada, United Arab Emirates (Dubai and Abu Dhabi), and Mumbai. Our mission is to enable our people to learn the art of data management and visualization to help our customers make quick and smart decisions.

 

Our Services include: 

Business Intelligence & Visualization

App and Data Modernization

Low Code Application Development

Artificial Intelligence

Internet of Things

Data Warehouse Modernization

Robotic Process Automation

Advanced Analytics

 

Our Products:

Sirius – World’s most agile conversational AI platform

Serina

Conversational Analytics

Contactless Attendance Management System

 

 

Company URL:   https://datasemantics.co 


JD:

MSBI

SSAS

SSRS

SSIS

Data Warehousing

SQL

Gipfel & Schnell Consultings Pvt Ltd
Posted by TanmayaKumar Pattanaik
Bengaluru (Bangalore)
3 - 9 yrs
₹9L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+10 more

Qualifications & Experience:


▪ 2 - 4 years overall experience in ETL, data pipelines, data warehouse development and database design

▪ Software solution development using Hadoop technologies such as MapReduce, Hive, Spark, Kafka, YARN/Mesos, etc.

▪ Expert in SQL, worked on advanced SQL for at least 2+ years

▪ Good development skills in Java, Python or other languages

▪ Experience with EMR, S3

▪ Knowledge of and exposure to BI applications, e.g. Tableau, QlikView

▪ Comfortable working in an agile environment

InfoCepts
Posted by Lalsaheb Bepari
Chennai, Pune, Nagpur
7 - 10 yrs
₹5L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more

Responsibilities:

 

• Designing Hive/HCatalog data models, including creating table definitions, file formats and compression techniques for structured and semi-structured data processing (see the sketch after this list)

• Implementing Spark-based ETL processing frameworks

• Implementing big data pipelines for data ingestion, storage, processing and consumption

• Modifying the Informatica-Teradata and Unix-based data pipelines

• Enhancing the Talend-Hive/Spark and Unix-based data pipelines

• Developing and deploying Scala/Python-based Spark jobs for ETL processing

• Strong SQL and DWH concepts.
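As a hedged sketch of the Hive data modelling responsibility above, the snippet below defines a partitioned, Snappy-compressed Parquet table through Spark SQL and loads it from a staging source. The database, table and column names are hypothetical.

    # Define a Hive table (format + compression) and load it from Spark.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder.appName("hive_model_sketch")
             .enableHiveSupport().getOrCreate())

    # Table definition: partitioned, stored as Parquet with Snappy compression
    spark.sql("""
        CREATE TABLE IF NOT EXISTS edw.fact_payments (
            payment_id  BIGINT,
            customer_id BIGINT,
            amount      DECIMAL(18, 2)
        )
        PARTITIONED BY (ingest_date STRING)
        STORED AS PARQUET
        TBLPROPERTIES ('parquet.compression' = 'SNAPPY')
    """)

    # ETL step: transform a staging table and append into the modelled table
    spark.sql("""
        INSERT INTO edw.fact_payments PARTITION (ingest_date = '2024-01-01')
        SELECT payment_id, customer_id, CAST(amount AS DECIMAL(18, 2))
        FROM staging.raw_payments
        WHERE status = 'SETTLED'
    """)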

 

Preferred Background:

 

• Function as an integrator between business needs and technology solutions, helping to create technology solutions to meet clients’ business needs

• Lead project efforts in defining scope, planning, executing, and reporting to stakeholders on strategic initiatives

• Understanding of the business's EDW system and creating high-level design documents and low-level implementation documents

• Understanding of the business's big data lake system and creating high-level design documents and low-level implementation documents

• Designing big data pipelines for data ingestion, storage, processing and consumption

Exponentia.ai


Posted by Vipul Tiwari
Mumbai
7 - 10 yrs
₹13L - ₹19L / yr
Project Management
IT project management
Software project management
Business Intelligence (BI)
Data Warehouse (DWH)
+8 more

Role: Project Manager

Experience: 8-10 Years

Location: Mumbai


Company Profile:



Exponentia.ai is an AI tech organization with a presence across India, Singapore, the Middle East, and the UK. We are an innovative and disruptive organization, working on cutting-edge technology to help our clients transform into the enterprises of the future. We provide artificial intelligence-based products/platforms capable of automated cognitive decision-making to improve productivity, quality, and economics of the underlying business processes. Currently, we are rapidly expanding across machine learning, Data Engineering and Analytics functions. Exponentia.ai has developed long-term relationships with world-class clients such as PayPal, PayU, SBI Group, HDFC Life, Kotak Securities, Wockhardt and Adani Group amongst others.

One of the top partners of Databricks, Azure, Cloudera (a leading analytics player) and Qlik (a leader in BI technologies), Exponentia.ai has recently been awarded the ‘Innovation Partner Award’ by Qlik and the "Excellence in Business Process Automation Award" (IMEA) by Automation Anywhere.

Get to know more about us at http://www.exponentia.ai and https://in.linkedin.com/company/exponentiaai 


Role Overview:


·        The project manager shall be responsible for overseeing the successful delivery of a range of projects in Business Intelligence, Data Warehousing, and Analytics/AI-ML.

·        The project manager is expected to manage the project and lead teams of BI engineers, data engineers, data scientists and application developers.


Job Responsibilities:


·        Effort estimation, creating a project plan, planning milestones and activities, and tracking progress.

·        Identify risks and issues. Come up with a mitigation plan.

·        Status reporting to both internal and external stakeholders.

·        Communicate with all stakeholders.

·        Manage end-to-end project lifecycle - requirements gathering, design, development, testing and go-live.

·        Manage end-to-end BI or data warehouse projects.

·        Must have experience in running Agile-based project development.


Technical skills


·        Experience in Business Intelligence Data warehousing or Analytics projects.

·        Understand data lake and data warehouse solutions including ETL pipelines.

·        Good to have - Knowledge of Azure Blob Storage, Azure Data Factory and Synapse Analytics.

·        Good to have - Knowledge of Qlik Sense or Power BI

·        Good to have - Certified in PMP / PRINCE2 / Agile project management.

·        Excellent written and verbal communication skills.


 Education:

MBA, B.E. or B. Tech. or MCA degree

Hyderabad
6 - 9 yrs
₹10L - ₹15L / yr
SQL
Databases
SQL Server Reporting Services (SSRS)
SQL Server Integration Services (SSIS)
SQL Server Analysis Services (SSAS)
+11 more

Designation: Senior - DBA

Experience: 6-9 years

CTC: INR 17-20 LPA

Night Allowance: INR 800/Night

Location: Hyderabad,Hybrid

Notice Period: NA

Shift Timing : 6:30 pm to 3:30 am

Openings: 3

Roles and Responsibilities:

As a Senior Database Administrator, you will be responsible for the physical design, development, administration and optimization of properly engineered database systems to meet agreed business and technical requirements. The candidate will work as part of (but not limited to) the onsite/offsite DBA group.

  • Administration and management of databases in Dev, Stage and Production environments.
  • Performance tuning of database schemas, stored procedures, etc.
  • Providing technical input on the setup and configuration of database servers and the SAN disk subsystem on all database servers.
  • Troubleshooting and handling all database-related issues and tracking them through to resolution.
  • Proactive monitoring of databases, from both a performance and a capacity management perspective.
  • Performing database maintenance activities such as backup/recovery and rebuilding and reorganizing indexes.
  • Ensuring that all database releases are properly assessed and measured from a functionality and performance perspective.
  • Ensuring that all databases are up to date with the latest service packs, patches & security fixes.
  • Taking ownership of and ensuring high-quality, timely delivery of projects on hand.
  • Collaborating with application/database developers, quality assurance and operations/support staff.
  • Helping manage large, high-transaction-rate SQL Server production environments.

Eligibility:

  • Bachelor's/Master's degree (BE/BTech/MCA/MTech/MS).
  • 6 - 8 years of solid experience in SQL Server 2016/2019 database administration and maintenance on Azure and AWS cloud.
  • Experience handling and managing large SQL Server databases (greater than 200+ GB) in a real-time production environment.
  • Experience in troubleshooting and resolving database integrity issues, performance issues, blocking/deadlocking issues, connectivity issues, data replication issues, etc.
  • Experience in configuring and troubleshooting SQL Server HA.
  • Ability to detect and troubleshoot database-related CPU, memory, I/O, disk space and other resource contention issues.
  • Experience with database maintenance activities such as backup/recovery and capacity monitoring/management, and Azure Backup Services.
  • Experience with HA/failover technologies such as clustering, SAN replication, log shipping & mirroring.
  • Experience collaborating with development teams on physical database design activities and performance tuning.
  • Experience in managing and making software deployments/changes in real-time production environments.
  • Ability to work on multiple projects at one time with minimal supervision and ensure high-quality, timely delivery.
  • Knowledge of tools like SQL LiteSpeed, SQL Diagnostic Manager and AppDynamics.
  • Strong understanding of data warehousing concepts and SQL Server architecture.
  • Certified DBA, proficient in T-SQL and in various storage technologies such as ASM, SAN, NAS, RAID and multipathing.
  • Strong analytical and problem-solving skills; proactive, independent, with a proven ability to work under tight targets and pressure.
  • Experience working in a highly regulated environment such as financial services institutions.
  • Expertise in SSIS and SSRS.

Skills:

SSIS

SSRS


AArete Technosoft Pvt Ltd
Pune
7 - 12 yrs
₹25L - ₹30L / yr
Snowflake
Snow flake schema
ETL
Data Warehouse (DWH)
Python
+8 more
Help us modernize our data platforms, with a specific focus on Snowflake:

• Work with various stakeholders, understand requirements, and build solutions/data pipelines that address the needs at scale
• Bring key workloads to the clients’ Snowflake environment using scalable, reusable data ingestion and processing frameworks to transform a variety of datasets
• Apply best practices for Snowflake architecture, ELT and data models

Skills - 50% of below:

• A passion for all things data; understanding how to work with it at scale and, more importantly, knowing how to get the most out of it
• Good understanding of native Snowflake capabilities like data ingestion, data sharing, zero-copy cloning, tasks, Snowpipe, etc. (see the sketch after this list)
• Expertise in data modeling, with a good understanding of modeling approaches like Star schema and/or Data Vault
• Experience in automating deployments
• Experience writing code in Python, Scala, Java or PHP
• Experience in ETL/ELT, either via a code-first approach or using low-code tools like AWS Glue, AppFlow, Informatica, Talend, Matillion, Fivetran, etc.
• Experience with one or more AWS services, especially in relation to integration with Snowflake
• Familiarity with data visualization tools like Tableau, Power BI, Domo or any similar tool
• Experience with data virtualization tools like Trino, Starburst, Denodo, Data Virtuality, Dremio, etc.
• Certified SnowPro Advanced: Data Engineer is a must.
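To ground the Snowflake capabilities named above, here is a minimal sketch driving zero-copy cloning and a scheduled task from Python. The connection details and object names are hypothetical, and the SQL follows standard Snowflake syntax.

    # Exercise native Snowflake features (cloning, tasks) from Python.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="xy12345", user="DE_USER", password="***"  # hypothetical
    )
    cur = conn.cursor()

    # Zero-copy cloning: an instant, storage-free copy for dev/testing
    cur.execute("CREATE DATABASE ANALYTICS_DEV CLONE ANALYTICS")

    # A task: scheduled ELT that rebuilds a reporting table inside Snowflake
    cur.execute("""
        CREATE OR REPLACE TASK refresh_daily_revenue
          WAREHOUSE = TRANSFORM_WH
          SCHEDULE = 'USING CRON 0 2 * * * UTC'
        AS
          INSERT OVERWRITE INTO REPORTING.DAILY_REVENUE
          SELECT order_date, SUM(amount) AS revenue
          FROM RAW.ORDERS
          GROUP BY order_date
    """)
    cur.execute("ALTER TASK refresh_daily_revenue RESUME")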
ZF India
Posted by Sagar Sthawarmath
Hyderabad, Chennai
4 - 9 yrs
₹3L - ₹15L / yr
SAS
Data Analytics
Data Visualization
Data integration
Data Warehouse (DWH)

In this role, you will: 

As part of a team focused on preserving the customer experience across the organization, this Analytic Consultant will be responsible for: 

    • Understand business objectives and provide credible challenge to analysis requirements. 
    • Verify sound analysis practices and data decisions were leveraged throughout planning and data sourcing phases. 
    • Conduct in-depth research within complex data environments to identify data integrity issues and propose solutions to improve analysis accuracy. 
    • Apply critical evaluation to challenge assumptions, formulate a defensible hypothesis, and ensure high-quality analysis results. 
    • Ensure adherence to data management/data governance regulations and policies. 
    • Perform and test highly complex data analytics for customer remediation. 
    • Design analysis project flows and documentation that are structured for consistency, easy to understand, and able to be offered to multiple levels of reviewers, partners and regulatory agents, demonstrating the research and analysis completed. 
    • Investigate and ensure data integrity from multiple sources. 
    • Ensure data recommended and used is the best “source of truth”. 
    • Apply knowledge of business, customers, and products to synthesize data to 'form a story' and align information to contrast/compare to industry perspective. The data involved is typically very large, structured or unstructured, and from multiple sources. 
    • Must have a strong attention to detail and be able to meet high quality standards consistently. 
    • Other duties as assigned by manager. 
    • Willing to assist on high priority work outside of regular business hours or weekend as needed. 


Essential Qualifications: 
 

    • Around 5+ years in similar analytics roles 
    • Bachelor's, M.A./M.Sc. or higher degree in applied mathematics, statistics, engineering, physics, accounting, finance, economics, econometrics, computer sciences, or business/social and behavioral sciences with a quantitative emphasis. 
    • Preferred programming knowledge: SQL/SAS. 
    • Knowledge of PVSI, Non-Lending, Student Loans, Small Business and Personal Lines and Loans is a plus. 
    • Strong experience with data integration, database structures and data warehouses. 
    • Persuasive written and verbal communication skills. 


Desired Qualifications: 

    • Certifications in Data Science, or BI Reporting tools. 
    • Ability to prioritize work, meet deadlines, achieve goals and work under pressure in a dynamic and complex environment – Soft Skills. 
    • Detail oriented, results driven, and has the ability to navigate in a quickly changing and high demand environment while balancing multiple priorities. 
    • Ability to research and report on a variety of issues using problem solving skills. 
    • Ability to act with integrity and a high level of professionalism with all levels of team members and management. 
    • Ability to make timely and independent judgment decisions while working in a fast-paced and results-driven environment. 
    • Ability to learn the business aspects quickly, multitask and prioritize between projects. 
    • Exhibits appropriate sense of urgency in managing responsibilities. 
    • Ability to accurately process high volumes of work within established deadlines. 
    • Available to flex schedule periodically based on business need. 
    • Demonstrate strong negotiation, communication & presentation skills. 
    • Demonstrates a high degree of reliability, integrity and trustworthiness. 
    • Takes ownership of assignments and helps drive assignments of the team. 
    • Dedicated, enthusiastic, driven and performance-oriented; possesses a strong work ethic and good team player. 
    • Be proactive and get engaged in organizational initiatives.
Softobiz Technologies Private limited


Posted by Swati Sharma
Hyderabad
5 - 13 yrs
₹10L - ₹25L / yr
Azure Data Factory
SQL Server
SSIS
SQL Server Integration Services (SSIS)
Data Warehouse (DWH)
+7 more

Responsibilities


  • Design and implement Azure BI infrastructure, ensure overall quality of delivered solution 
  • Develop analytical & reporting tools, promote and drive adoption of developed BI solutions 
  • Actively participate in BI community 
  • Establish and enforce technical standards and documentation 
  • Participate in daily scrums  
  • Record progress daily in assigned DevOps items 


Ideal Candidates should have


  • 5+ years of experience in a similar senior business intelligence development position 
  • To be successful in the role you will require a high level of expertise across all facets of the Microsoft BI stack and prior experience in designing and developing well-performing data warehouse solutions 
  • Demonstrated experience using development tools such as Azure SQL Database, Azure Data Factory, Azure Data Lake, Azure Synapse, and Azure DevOps 
  • Experience with development methodologies including Agile, DevOps, and CI/CD patterns 
  • Strong oral and written communication skills in English 
  • Ability and willingness to learn quickly and continuously 
  • Bachelor's Degree in computer science 


People Impact
Agency job
via People Impact by Pruthvi K
Remote only
4 - 10 yrs
₹10L - ₹20L / yr
Amazon Redshift
Data Warehousing
Amazon Web Services (AWS)
Snow flake schema
Data Warehouse (DWH)

Job Title: Data Warehouse/Redshift Admin

Location: Remote

Job Description

AWS Redshift Cluster Planning

AWS Redshift Cluster Maintenance

AWS Redshift Cluster Security

AWS Redshift Cluster Monitoring (see the sketch at the end of this list).

Experience managing day-to-day operations of provisioning, maintaining backups, DR and monitoring of AWS Redshift/RDS clusters

Hands-on experience with query tuning in high-concurrency environments

Expertise setting up and managing AWS Redshift

AWS certifications preferred (AWS Certified SysOps Administrator)
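As a hedged sketch of the day-to-day monitoring listed above, boto3 exposes the cluster, snapshot and CloudWatch views an administrator would routinely check. The cluster identifier and region below are hypothetical.

    # Day-to-day Redshift cluster monitoring with boto3.
    from datetime import datetime, timedelta, timezone
    import boto3

    redshift = boto3.client("redshift", region_name="us-east-1")
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Cluster health / maintenance view
    cluster = redshift.describe_clusters(ClusterIdentifier="prod-dwh")["Clusters"][0]
    print(cluster["ClusterStatus"], cluster["NodeType"], cluster["NumberOfNodes"])

    # Snapshot inventory for backup / DR checks
    snaps = redshift.describe_cluster_snapshots(ClusterIdentifier="prod-dwh")
    print(len(snaps["Snapshots"]), "snapshots available")

    # CPU utilisation over the last hour from CloudWatch
    now = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Redshift",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "ClusterIdentifier", "Value": "prod-dwh"}],
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    print(stats["Datapoints"])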

Tredence
Posted by Rohit S
Chennai, Pune, Bengaluru (Bangalore), Gurugram
11 - 16 yrs
₹20L - ₹32L / yr
Data Warehouse (DWH)
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Data engineering
Data migration
+1 more
• Engages with leadership of Tredence’s clients to identify critical business problems, define the need for data engineering solutions, and build strategy and roadmap.
• S/he possesses wide exposure to the complete lifecycle of data, from creation to consumption.
• S/he has in the past built repeatable tools/data models to solve specific business problems.
• S/he should have hands-on experience of having worked on projects (either as a consultant or within a company) that needed them to:
  o Provide consultation to senior client personnel
  o Implement and enhance data warehouses or data lakes
  o Work with business teams, or as part of the team that implemented process re-engineering driven by data analytics/insights
• Should have a deep appreciation of how data can be used in decision-making.
• Should have perspective on newer ways of solving business problems, e.g. external data, innovative techniques, newer technology.
• S/he must have a solution-creation mindset, with the ability to design and enhance scalable data platforms to address business needs.
• Working experience with data engineering tools for one or more cloud platforms - Snowflake, AWS/Azure/GCP.
• Engage with technology teams from Tredence and clients to create last-mile connectivity of the solutions:
  o Should have experience of working with technology teams
• Demonstrated ability in thought leadership – articles/white papers/interviews.

Mandatory skills: Program Management, Data Warehouse, Data Lake, Analytics, Cloud Platform
Talent500
Agency job
via Talent500 by ANSR by Raghu R
Bengaluru (Bangalore)
1 - 10 yrs
₹5L - ₹30L / yr
Python
ETL
SQL
SQL Server Reporting Services (SSRS)
Data Warehouse (DWH)
+6 more

A proficient, independent contributor that assists in technical design, development, implementation, and support of data pipelines; beginning to invest in less-experienced engineers.

Responsibilities:

- Design, create and maintain on-premise and cloud-based data integration pipelines. 
- Assemble large, complex data sets that meet functional/non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data pipelines to enable BI, Analytics and Data Science teams and assist them in building and optimizing their systems.
- Assist in the onboarding, training and development of team members.
- Review code changes and pull requests for standardization and best practices.
- Evolve existing development into automated, scalable, resilient, self-serve platforms.
- Assist the team in design and requirements gathering for technical and non-technical work to drive the direction of projects.

 

Technical & Business Expertise:

- Hands-on integration experience in SSIS/MuleSoft
- Hands-on experience with Azure Synapse
- Proven advanced-level database development experience in SQL Server
- Proven advanced-level understanding of data lakes
- Proven intermediate-level proficiency in Python or a similar programming language
- Intermediate understanding of cloud platforms (GCP) 
- Intermediate understanding of data warehousing
- Advanced understanding of source control (GitHub)

Number Theory


Posted by Nidhi Mishra
Gurugram
10 - 12 yrs
₹10L - ₹40L / yr
Artificial Intelligence (AI)
Data Science
Windows Azure
Cloud Computing
Java
+2 more
Project Role Description:
  • Manages the delivery of large, complex Data Science projects using appropriate frameworks, collaborating with stakeholders to manage scope and risk.
  • Helps the AI/ML Solution Analyst build solutions as per customer needs on our platform, Newgen AI Cloud.
  • Drives profitability and continued success by managing service quality and cost and leading delivery. Proactively supports sales through innovative solutions and delivery excellence.
Work Experience: 12+ years
Work location: Gurugram

Key Responsibilities:
1. Collaborate on and contribute to all project phases; technical know-how to design, develop solutions and deploy at the customer end.
2. End-to-end implementations, i.e. gathering requirements, analysing, designing, coding and deployment to Production.
3. Client-facing role, talking to clients on a regular basis to get requirement clarifications.
4. Lead the team.

Core Tech Skills: Azure, Cloud Computing, Java/Scala, Python, Design Patterns and fair knowledge of Data Science. Fair knowledge of Data Lake/DWH.
Educational Qualification: Engineering graduate, preferably a Computer Science graduate.
Sixt R&D
Bengaluru (Bangalore)
5 - 8 yrs
₹11L - ₹14L / yr
SQL
Python
RESTful APIs
Business Intelligence (BI)
QuickSight
+6 more

Technical-Requirements: 

  • Bachelor's Degree in Computer Science or a related technical field, and solid years of relevant experience. 
  • A strong grasp of SQL/Presto and at least one scripting or programming language (Python preferred). 
  • Experience with enterprise-class BI tools and their auditing, along with automation using REST APIs (see the sketch after this list). 
  • Experience with reporting tools – QuickSight (preferred, at least 2 years hands-on). 
  • Tableau/Looker (either or both would suffice, with at least 5+ years hands-on). 
  • 5+ years of experience with and detailed knowledge of data warehouse technical architectures, data modelling, infrastructure components, ETL/ELT and reporting/analytic tools and environments, data structures and hands-on SQL coding. 
  • 5+ years of demonstrated quantitative and qualitative business intelligence. 
  • Experience with significant product analysis based business impact. 
  • 4+ years of large IT project delivery for BI oriented projects using agile framework. 
  • 2+ years of working with very large data warehousing environments. 
  • Experience in designing and delivering cross functional custom reporting solutions. 
  • Excellent oral and written communication skills including the ability to communicate effectively with both technical and non-technical stakeholders. 
  • Proven ability to meet tight deadlines, multi-task, and prioritize workload. 
  • A work ethic based on a strong desire to exceed expectations. 
  • Strong analytical and challenge process skills.
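As a hedged sketch of the BI auditing and REST automation mentioned in this list, QuickSight's API (here via boto3) can inventory dashboards and users for audits. The account ID and region are hypothetical placeholders.

    # Audit QuickSight assets over its REST API via boto3.
    import boto3

    qs = boto3.client("quicksight", region_name="us-east-1")
    ACCOUNT_ID = "123456789012"  # hypothetical AWS account

    # Inventory dashboards (paginated) - raw material for usage/licence audits
    paginator = qs.get_paginator("list_dashboards")
    for page in paginator.paginate(AwsAccountId=ACCOUNT_ID):
        for dash in page["DashboardSummaryList"]:
            print(dash["DashboardId"], dash["Name"])

    # Review users and their roles for access auditing
    users = qs.list_users(AwsAccountId=ACCOUNT_ID, Namespace="default")
    for user in users["UserList"]:
        print(user["UserName"], user["Role"])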
XpressBees
Posted by Alfiya Khan
Pune, Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹25L / yr
Big Data
Data Warehouse (DWH)
Data modeling
Apache Spark
Data integration
+10 more
Company Profile

XpressBees – a logistics company started in 2015 – is amongst the fastest growing companies of its sector. While we started off rather humbly in the space of ecommerce B2C logistics, the last 5 years have seen us steadily progress towards expanding our presence. Our vision to evolve into a strong full-service logistics organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross border operations. Our strong domain expertise and constant focus on meaningful innovation have helped us rapidly evolve as the most trusted logistics partner of India. We have progressively carved our way towards best-in-class technology platforms, an extensive network reach, and a seamless last mile management system. While on this aggressive growth path, we seek to become the one-stop-shop for end-to-end logistics solutions. Our big focus areas for the very near future include strengthening our presence as service providers of choice and leveraging the power of technology to improve efficiencies for our clients.

Job Profile

As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform and infrastructure to support high quality and agile decision-making in our supply chain and logistics workflows. You will define the way we collect and operationalize data (structured/unstructured), and build production pipelines for our machine learning models and (RT, NRT, batch) reporting & dashboarding requirements. As a Senior Data Engineer in the XB Data Platform Team, you will use your experience with modern cloud and data frameworks to build products (with storage and serving systems) that drive optimisation and resilience in the supply chain via data visibility, intelligent decision making, insights, anomaly detection and prediction.

What You Will Do

• Design and develop the data platform and data pipelines for reporting, dashboarding and machine learning models. These pipelines would productionize machine learning models and integrate with agent review tools.
• Meet data completeness, correctness and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support business needs. Come up with logical and physical database designs across platforms (MPP, MR, Hive/Pig) which are optimal physical designs for different use cases (structured/semi-structured). Envision and implement the optimal data modelling, physical design and performance optimization technique/approach required for the problem.
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines and envision and build their successors.

Qualifications & Experience relevant for the role

• A bachelor's degree in Computer Science or a related field, with 6 to 9 years of technology experience.
• Knowledge of relational and NoSQL data stores, stream processing and micro-batching to make technology and design choices.
• Strong experience in system integration, application development, ETL and data-platform projects. Talented across technologies used in the enterprise space.
• Software development experience, with expertise in relational and dimensional modelling and exposure across the entire SDLC process.
• Experience in cloud architecture (AWS).
• Proven track record in keeping existing technical skills up to date and developing new ones, so that you can make strong contributions to deep architecture discussions around systems and applications in the cloud (AWS).
• Characteristics of a forward thinker and self-starter who flourishes with new challenges and adapts quickly to learning new knowledge.
• Ability to work with cross-functional teams of consulting professionals across multiple projects.
• Knack for helping an organization to understand application architectures and integration approaches, to architect advanced cloud-based solutions, and to help launch the build-out of those systems.
• Passion for educating, training, designing, and building end-to-end systems.
Bengaluru (Bangalore)
5 - 10 yrs
₹30L - ₹45L / yr
Google Adwords
PPC
Data Warehouse (DWH)
SQL
Web Analytics
+7 more

What is the role?

We are looking for a Senior Performance Marketing manager (PPC/SEM) who will be responsible for paid advertising for this company across Google Ads, Social ads and other demand-gen channels.

Our ideal candidate has a blend of analytical and creative mindset, passionate about driving metrics while also being very sensitive to brand and user experience, and distinctive at operating in highly collaborative and cross-functional settings. This role partners closely with our Sales, product, design, and broader marketing teams.

Key responsibilities

  • Strategise, execute, monitor, and manage campaigns across multiple platforms such as Google Ads (incl. Search, Display & YouTube), Facebook Ads & LinkedIn Ads.
  • Oversee growth in performance campaigns to meet the brand's business goals and strategies.
  • Should have hands-on experience on managing landing pages, keyword plans, ad copies, display ads etc.
  • Should have extremely good analytical skills to figure out the signals in the campaigns and optimize the campaigns using these insights. 
  • Implement ongoing A/B and user experience testing for ads, quality score, placements, dynamic landing pages and measure their effectiveness.
  • Monitor campaign performance & budget pacing on a day-to-day basis. 
  • Measure the Campaign performance parameters methodically, analyze campaign performance, compile and present detailed reports with proactive insights.
  • Be informed on the latest trends, best practices, and standards in online advertising across demand-gen channels.
  • Perform Media Mix modeling. Design, develop, and monitor other digital media buying campaigns.

What are we looking for?

  • 5-10 years of pure PPC experience, preferably in a SaaS company managing annual budgets of more than 2 mn USD.
  • Highly comfortable with Google Ads Editor, LinkedIn Ads, Facebook Business Manager & such.
  • Strong working knowledge of PPC Automations/Rules/Scripts and best practices with the ability to analyze Campaign metrics on Excel/Data Studio and optimize campaigns with insights.
  • Experience working with Ad channel APIs and other data APIs to deep-dive into metrics & make data-informed optimisations.
  • [Good to have] Working knowledge of SQL, data warehouses (BigQuery), data connectors/pipelines, blends/joins (for blending multiple data sources) etc.
  • In-depth experience with GA4. Clear understanding of Web Analytics.
  • Comfortable writing and editing content for ad copies, landing page snippets, descriptions, etc.
  • Experience with running Campaigns for US, European, and the global markets.

What can you look for?

A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts, yet maintain the quality of content, interact, and share your ideas and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at this company.

We are

It is a rapidly growing fintech SaaS firm that propels business growth while focusing on human motivation. Backed by Giift and Apis Partners Growth Fund II, the company offers a suite of three products - Plum, Empuls, and Compass. The company works with more than 2000 clients across 10+ countries and over 2.5 million users. Headquartered in Bengaluru, the company is a 300+ strong team with four global offices in Dubai, San Francisco, Dublin, Singapore and New Delhi.

Way forward

We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We however assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.

GradMener Technology Pvt. Ltd.
Pune, Chennai
5 - 9 yrs
₹15L - ₹20L / yr
Scala
PySpark
Spark
SQL Azure
Hadoop
+4 more
  • 5+ years of experience in a Data Engineering role in a cloud environment
  • Must have good experience in Scala/PySpark (preferably in a Databricks environment)
  • Extensive experience with Transact-SQL.
  • Experience in Databricks/Spark.
  • Strong experience in data warehouse projects
  • Expertise in database development projects with ETL processes.
  • Manage and maintain data engineering pipelines
  • Develop batch processing, streaming and integration solutions
  • Experienced in building and operationalizing large-scale enterprise data solutions and applications
  • Using one or more of Azure data and analytics services in combination with custom solutions
  • Azure Data Lake, Azure SQL DW (Synapse), and SQL Database products or equivalent products from other cloud services providers
  • In-depth understanding of data management (e.g. permissions, security, and monitoring).
  • Cloud repositories, e.g. Azure GitHub, Git
  • Experience in an agile environment (Prefer Azure DevOps).

Good to have

  • Manage source data access security
  • Automate Azure Data Factory pipelines
  • Continuous Integration/Continuous Deployment (CI/CD) pipelines, source repositories
  • Experience in implementing and maintaining CI/CD pipelines
  • Power BI understanding, Delta Lakehouse architecture
  • Knowledge of software development best practices.
  • Excellent analytical and organization skills.
  • Effective working in a team as well as working independently.
  • Strong written and verbal communication skills.
  • Expertise in database development projects and ETL processes.
Velocity Services


Posted by Newali Hazarika
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹35L / yr
ETL
Informatica
Data Warehouse (DWH)
Data engineering
Oracle
+7 more

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

Technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, reactive programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of records into the data warehouse (see the orchestration sketch after this list)

  • Implement Data warehouse entities with common re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
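Since the responsibilities above pair DBT transformations with pipeline orchestration (Airflow appears under What To Bring below), here is a minimal, hedged sketch of how the two commonly fit together in an ELT flow. The DAG id, paths, and extract script are hypothetical.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Extract/load raw data first, then let dbt build warehouse models on top
# of it (the standard ELT pattern), then run dbt's tests as a quality gate.
with DAG(
    dag_id="elt_dbt_pipeline",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_sources",
        bash_command="python /opt/pipelines/extract_sources.py",  # hypothetical script
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt_project && dbt run --profiles-dir .",
    )
    test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt_project && dbt test --profiles-dir .",
    )

    extract >> transform >> test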

 

What To Bring

  • 5+ years of software development experience, a startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 5+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced with the formulation of ideas; building proof-of-concept (POC) and converting them to production-ready projects

  • Experience building and deploying applications on on-premise and cloud-based (AWS or Google Cloud) infrastructure

  • Basic understanding of Kubernetes & Docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

Read more
Mumbai
2 - 5 yrs
₹2L - ₹8L / yr
Data Warehouse (DWH)
Informatica
ETL
Microsoft Windows Azure
Big Data
+1 more
1. Responsible for the evaluation of cloud strategy and program architecture
2. Responsible for gathering system requirements, working together with application architects and owners
3. Responsible for generating scripts and templates required for the automatic provisioning of resources (see the sketch below)
4. Discover standard cloud services offerings; install and execute processes and standards for optimal use of cloud service provider offerings
5. Incident management on IaaS, PaaS, SaaS
6. Responsible for debugging technical issues inside a complex stack involving virtualization, containers, microservices, etc.
7. Collaborate with the engineering teams to enable their applications to run on cloud infrastructure
8. Experience with OpenStack, Linux, Amazon Web Services, Microsoft Azure, DevOps, NoSQL, etc. will be a plus
9. Design, implement, configure, and maintain various Azure IaaS, PaaS, SaaS services
10. Deploy and maintain Azure IaaS virtual machines and Azure application and networking services
11. Optimize Azure billing for cost/performance (VM optimization, reserved instances, etc.)
12. Implement and fully document IT projects
13. Identify improvements to IT documentation, network architecture, processes/procedures, and tickets
14. Research products and new technologies to increase efficiency of business and operations
15. Keep all tickets and projects updated and track time in a detailed format
16. Should be able to multi-task and work across a range of projects and issues with various timelines and priorities
Technical:
• Minimum 1 year of experience with Azure; knowledge of Office365 services preferred
• Formal education in IT preferred
• Experience with the Managed Service business model a major plus
• Bachelor's degree preferred
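As a hedged illustration of item 3 above (scripted provisioning), the following sketch uses the Azure SDK for Python to create a resource group idempotently; the subscription id, names, and tags are placeholders, and credentials are resolved from the environment.

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholder subscription; DefaultAzureCredential picks up az login,
# a managed identity, or service-principal environment variables.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, SUBSCRIPTION_ID)

# create_or_update is idempotent: rerunning converges to the same state.
rg = client.resource_groups.create_or_update(
    "rg-data-platform",
    {"location": "eastus", "tags": {"env": "dev", "owner": "data-team"}},
)
print(rg.name, rg.location)

The same pattern extends to VMs and networking via the corresponding azure-mgmt-* clients, or can be replaced with ARM/Bicep templates for declarative provisioning.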
Read more
Impetus

at Impetus

3 recruiters
Agency job
via Impetus by Gangadhar TM
Bengaluru (Bangalore), Pune, Hyderabad, Indore, Noida, Gurugram
10 - 16 yrs
₹30L - ₹50L / yr
Big Data
Data Warehouse (DWH)
Product Management

Job Title: Product Manager

 

Job Description

Bachelor's or master's degree in computer science, or equivalent experience.
Worked as Product Owner before and took responsibility for a product or project delivery.
Well-versed with data warehouse modernization to Big Data and Cloud environments.
Good knowledge* of any of the Cloud (AWS/Azure/GCP) – Must Have
Practical experience with continuous integration and continuous delivery workflows.
Self-motivated with strong organizational/prioritization skills and ability to multi-task with close attention to detail.
Good communication skills
Experience in working within a distributed agile team
Experience in handling migration projects – Good to Have
 

*Data Ingestion, Processing, and Orchestration knowledge

 

Roles & Responsibilities


Responsible for coming up with innovative and novel ideas for the product.
Define product releases, features, and roadmap.
Collaborate with product teams on defining product objectives, including creating a product roadmap, delivery, market research, customer feedback, and stakeholder inputs.
Work with the Engineering teams to communicate release goals and be a part of the product lifecycle. Work closely with the UX and UI team to create the best user experience for the end customer.
Work with the Marketing team to define GTM activities.
Interface with Sales & Customer teams to identify customer needs and product gaps
Market and competition analysis activities.
Participate in the Agile ceremonies with the team, define epics, user stories, acceptance criteria
Ensure product usability from the end-user perspective

 

Mandatory Skills

Product Management, DWH, Big Data

Read more
Remote only
5 - 8 yrs
₹20L - ₹25L / yr
Tableau
SQL
Relational Database (RDBMS)
Amazon Redshift
PostgreSQL
+3 more

What is the role?

You will be responsible for building and maintaining highly scalable data infrastructure for our cloud-hosted SaaS product. You will work closely with the Product Managers and Technical team to define and implement data pipelines for customer-facing and internal reports.

Key Responsibilities

  • Understand the business process and requirements thoroughly and convert them into reports.
  • Guide the users of the reports toward the right approach.
  • Develop, maintain, and manage advanced reporting, analytics, dashboards and other BI solutions.

What are we looking for?

An enthusiastic individual with the following skills. Please do not hesitate to apply if you do not match all of them. We are open to promising candidates who are passionate about their work and are team players.

  • Education - BE/MCA or equivalent
  • Good experience in working on the performance side of reports.
  • Expert-level knowledge of querying in any RDBMS, preferably Redshift or Postgres (a sample of this kind of query follows this list)
  • Expert-level knowledge of data warehousing concepts
  • Advanced-level scripting to create calculated fields, sets, parameters, etc.
  • Degree in mathematics, computer science, information systems, or related field.
  • 5-7 years of exclusive experience with Tableau and data warehouses.
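For a concrete sense of the querying skills listed above, here is a small sketch running a typical report query (month-over-month revenue via a window function) from Python with psycopg2; since Redshift speaks the Postgres wire protocol, the same client works against both. The host, table, and credentials are placeholders.

import psycopg2

# Placeholder connection details.
conn = psycopg2.connect(
    host="reports-cluster.example.com", port=5439,
    dbname="analytics", user="report_user", password="...",
)

# Month-over-month revenue using LAG over a monthly aggregate.
SQL = """
SELECT month,
       revenue,
       revenue - LAG(revenue) OVER (ORDER BY month) AS mom_change
FROM (
    SELECT date_trunc('month', order_ts) AS month,
           SUM(amount) AS revenue
    FROM orders
    GROUP BY 1
) m
ORDER BY month;
"""

with conn, conn.cursor() as cur:
    cur.execute(SQL)
    for month, revenue, mom_change in cur.fetchall():
        print(month, revenue, mom_change)

A query like this would typically back a Tableau extract or live connection rather than be printed, but the SQL is the part this role exercises.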

Whom will you work with?

You will work with a top-notch tech team, working closely with the CTO and product team.  

What can you look for?

A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts, yet maintain the quality on content, interact and share your ideas and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits.

We are

A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore and Dublin. We have three products in our portfolio: Plum, Empuls and Compass, and work with over 1000 global clients. We help our clients in engaging and motivating their employees, sales teams, channel partners or consumers for better business results.

 

Read more
Hanu

at Hanu

Agency job
Gurgaon/Gurugram, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
7 - 15 yrs
₹25L - ₹45L / yr
Data Warehouse (DWH)
ETL
ADF
Business Intelligence (BI)
Data architecture
+2 more
Responsibilities

* Formulates and recommends standards for achieving maximum performance and efficiency of the DW ecosystem.
* Participates in pre-sales activities for solutions to various customer problem statements/situations.
* Develops business cases and ROI for customers/clients.
* Interviews stakeholders and develops a BI roadmap for success, given project prioritization.
* Evangelizes self-service BI and visual discovery while helping to automate any manual process at the client site.
* Works closely with the Engineering Manager to ensure prioritization of customer deliverables.
* Champions data quality, integrity, and reliability throughout the organization by designing and promoting best practices.
* Implementation (20%):
  * Helps DW/DE team members with issues needing technical expertise or complex systems and/or programming knowledge.
  * Provides on-the-job training for new or less experienced team members.
  * Develops a technical excellence team.

Requirements

- Experience designing business intelligence solutions
- Experience with ETL processes and data warehouse architecture
- Experience with Azure Data services, i.e., ADF, ADLS Gen 2, Azure SQL DB, Synapse, Azure Databricks, and Power BI
- Good analytical and problem-solving skills
- Fluent in relational database concepts and flat-file processing concepts
- Must be knowledgeable in software development lifecycles/methodologies
Read more
Ganit Business Solutions

at Ganit Business Solutions

3 recruiters
Vijitha VS
Posted by Vijitha VS
Remote only
4 - 7 yrs
₹10L - ₹30L / yr
Scala
ETL
Informatica
Data Warehouse (DWH)
Big Data
+4 more

Job Description:

We are looking for a Big Data Engineer who has worked across the entire ETL stack: someone who has ingested data in batch and live-stream formats, transformed large volumes of data daily, built data warehouses to store the transformed data, and integrated different visualization dashboards and applications with the data stores. The primary focus will be on choosing optimal solutions for these purposes, then maintaining, implementing, and monitoring them.

Responsibilities:

  • Develop, test, and implement data solutions based on functional / non-functional business requirements.
  • You would be required to code in Scala and PySpark daily on Cloud as well as on-prem infrastructure
  • Build data models to store the data in the most optimized manner
  • Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Implementing the ETL process and optimal data pipeline architecture
  • Monitoring performance and advising any necessary infrastructure changes.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
  • Proactively identify potential production issues and recommend and implement solutions
  • Must be able to write quality code and build secure, highly available systems.
  • Create design documents that describe the functionality, capacity, architecture, and process.
  • Review peer-codes and pipelines before deploying to Production for optimization issues and code standards

Skill Sets:

  • Good understanding of optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and ‘big data’ technologies.
  • Proficient understanding of distributed computing principles
  • Experience in working with batch-processing/real-time systems using various open-source technologies like NoSQL, Spark, Pig, Hive, Apache Airflow.
  • Implemented complex projects dealing with considerable data sizes (PB scale).
  • Optimization techniques (performance, scalability, monitoring, etc.)
  • Experience with integration of data from multiple data sources
  • Experience with NoSQL databases, such as HBase, Cassandra, MongoDB, etc.
  • Knowledge of various ETL techniques and frameworks, such as Flume
  • Experience with various messaging systems, such as Kafka or RabbitMQ
  • Creation of DAGs for data engineering
  • Expert at Python/Scala programming, especially for data engineering/ETL purposes (a streaming-ingestion sketch follows this list)
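As a hedged sketch of the streaming side of this stack (Kafka ingestion into the lake with Spark Structured Streaming), the topic name, broker, schema, and paths below are hypothetical; running it also requires the spark-sql-kafka connector package on the classpath.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Hypothetical JSON click events published to a Kafka topic.
schema = (StructType()
          .add("user_id", StringType())
          .add("event", StringType())
          .add("value", DoubleType()))

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "click-events")
       .load())

# Kafka delivers bytes; decode and parse the JSON payload.
events = (raw
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Append parsed events to the lake for downstream batch ETL.
query = (events.writeStream
         .format("parquet")
         .option("path", "/data/lake/click_events/")
         .option("checkpointLocation", "/data/checkpoints/click_events/")
         .start())
query.awaitTermination()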

 

 

 

Read more
Hiring for a leading client
New Delhi
3 - 5 yrs
₹10L - ₹15L / yr
Big Data
Apache Kafka
Business Intelligence (BI)
Data Warehouse (DWH)
Coding
+15 more
Job Description:
Senior Software Engineer - Data Team

We are seeking a highly motivated Senior Software Engineer with hands-on experience building scalable, extensible data solutions, identifying and addressing performance bottlenecks, collaborating with other team members, and implementing best practices for data engineering. Our engineering process is fully agile with a really fast release cycle, which keeps our environment very energetic and fun.

What you'll do:

Design and development of scalable applications.
Work with Product Management teams to get maximum value out of existing data.
Contribute to continual improvement by suggesting improvements to the software system.
Ensure high scalability and performance
Advocate for good, clean, well-documented, performant code; follow standards and best practices.
We'd love for you to have:

Education: Bachelor/Master Degree in Computer Science.
Experience: 3-5 years of relevant experience in BI/DW with hands-on coding experience.

Mandatory Skills

Strong in problem-solving
Strong experience with Big Data technologies: Hive, Hadoop, Impala, HBase, Kafka, Spark
Strong experience with orchestration frameworks like Apache Oozie and Airflow
Strong experience in Data Engineering
Strong experience with database and data warehousing technologies and the ability to understand complex designs and system architecture
Experience with the full software development lifecycle: design, develop, review, debug, document, and deliver (especially in a multi-location organization)
Good knowledge of Java
Desired Skills

Experience with Python
Experience with reporting tools like Tableau, QlikView
Experience with Git and CI/CD pipelines
Awareness of cloud platforms, e.g. AWS
Excellent communication skills with team members, Business owners, across teams
Be able to work in a challenging, dynamic environment and meet tight deadlines
Read more
Accion Labs

at Accion Labs

14 recruiters
Anjali Mohandas
Posted by Anjali Mohandas
Remote, Bengaluru (Bangalore), Pune, Hyderabad, Mumbai
4 - 8 yrs
₹15L - ₹28L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+5 more

4-6 years of total experience in data warehousing and business intelligence

3+ years of solid Power BI experience (Power Query, M-Query, DAX, Aggregates)

2 years’ experience building Power BI using cloud data (Snowflake, Azure Synapse, SQL DB, data lake)

Strong experience building visually appealing UI/UX in Power BI

Understand how to design Power BI solutions for performance (composite models, incremental refresh, analysis services)

Experience building Power BI using large data in direct query mode

Expert SQL background (query building, stored procedures, optimizing performance)

Read more
Remote, Bengaluru (Bangalore)
6 - 9 yrs
₹25L - ₹40L / yr
Amazon Web Services (AWS)
Data Warehouse (DWH)
MySQL
NOSQL Databases
PostgreSQL
+4 more

Job Description:

A candidate with a strong background in the design and implementation of scalable architecture and a good understanding of algorithms, data structures, and design patterns. The candidate must be ready to learn new tools, languages, and technologies.

Skills:

Microservices, MySQL/Postgres, Kafka/message queues, Elasticsearch, data pipelines, AWS Cloud, ClickHouse/Redshift

What you need to succeed in this role

  • Minimum 6 years of experience
  • Good understanding of various database types: RDBMS, NoSQL, GraphDB, etc.
  • Ability to build highly stable, reliable APIs and backend services.
  • Should be familiar with distributed, high-availability database systems
  • Experience with queuing systems like Kafka (a short producer/consumer sketch follows this list)
  • Hands-on in cloud infrastructure: AWS/GCP/Azure
  • A big plus if you know one or more of the following: Confluent ksqlDB, Kafka Connect, Kafka Streams
  • Hands-on experience with data warehouse/OLAP systems such as Redshift or ClickHouse is an added plus.
  • Good communication and interpersonal skills
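To make the Kafka requirement concrete, here is a minimal producer/consumer sketch using the kafka-python client; the broker, topic, and group names are placeholders.

import json
from kafka import KafkaProducer, KafkaConsumer

BROKERS = ["broker:9092"]   # placeholder broker list
TOPIC = "payment-events"    # placeholder topic

# Producer: serialize dicts as JSON bytes.
producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"order_id": "o-123", "amount": 49.0})
producer.flush()

# Consumer: a warehouse-loader style consumer group reading from the start.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    group_id="warehouse-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for msg in consumer:
    # A real loader would batch these rows into Redshift/ClickHouse.
    print(msg.value)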

Benefits of joining us

  • Ability to join a small and growing team, and work with some of the coolest people you've ever met
  • Opportunity to make an impact, and leave your mark on this organization.
  • Competitive compensation, with the ability to shape your own career trajectory
  • Go Extra Mile with Learning and Development

What can you look for?

A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts, yet maintain the quality on content, interact and share your ideas and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at Xoxoday.

We are

A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore and Dublin. We have three products in our portfolio: Plum, Empuls and Compass, and work with over 1000 global clients. We help our clients in engaging and motivating their employees, sales teams, channel partners or consumers for better business results.

Read more
CoStrategix Technologies

at CoStrategix Technologies

1 video
1 recruiter
Jayasimha Kulkarni
Posted by Jayasimha Kulkarni
Remote, Bengaluru (Bangalore)
4 - 8 yrs
₹10L - ₹28L / yr
Data engineering
Data Structures
Programming
Python
C#
+3 more

 

Job Description - Sr Azure Data Engineer

 

 

Roles & Responsibilities:

  1. Hands-on programming in C# / .Net.
  2. Develop serverless applications using Azure Function Apps.
  3. Write complex SQL queries, stored procedures, and views.
  4. Create data processing pipeline(s).
  5. Develop and manage large-scale data warehousing and data processing solutions.
  6. Provide clean, usable data and recommend improvements to data efficiency, quality, and integrity.

 

Skills

  1. Should have working experience in C#/.Net.
  2. Proficient with writing SQL queries, stored procedures, and views
  3. Should have worked on the Azure cloud stack.
  4. Should have working experience in developing serverless code (see the sketch below).
  5. Must have worked on Azure Data Factory (mandatory).
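The posting asks for C#/.Net, but as a language-neutral illustration of the serverless pattern behind Azure Function Apps, here is a minimal HTTP-triggered function using Azure Functions' Python v2 programming model; the route and payload handling are hypothetical.

import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.route(route="orders", methods=[func.HttpMethod.POST])
def ingest_order(req: func.HttpRequest) -> func.HttpResponse:
    # Validate the JSON body; reject malformed payloads early.
    try:
        order = req.get_json()
    except ValueError:
        return func.HttpResponse("invalid JSON", status_code=400)
    # A real function would enqueue or persist the order here.
    return func.HttpResponse(f"received order {order.get('id')}", status_code=202)

The equivalent C# function uses an [HttpTrigger] attribute on a method; the trigger/binding model is the same across languages.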

 

Experience 

  1. 4+ years of relevant experience

 

Read more
Impetus Technologies

at Impetus Technologies

1 recruiter
Gangadhar T.M
Posted by Gangadhar T.M
Bengaluru (Bangalore), Hyderabad, Pune, Indore, Gurugram, Noida
10 - 17 yrs
₹25L - ₹50L / yr
Product Management
Big Data
Data Warehouse (DWH)
ETL
Hi All,
Greetings! We are looking for a Product Manager for our data modernization product. We need a resource with good knowledge of Big Data/DWH, strong stakeholder management, and strong presentation skills.
Read more
AES Technologies

at AES Technologies

3 recruiters
Archana KS
Posted by Archana KS
Remote only
3 - 5 yrs
₹5L - ₹7L / yr
IBM Cognos
IBM Cognos TM1
IBM Cognos administration
Cognos
SQL
+1 more

The role provides L2 and L3 support and IS services to the FP&A community using the Cognos Controller applications.

  1. Problem determination / troubleshooting and root cause analysis skills for our IBM Cognos Controller Cloud and On-Premises offerings 
  2. Create or enhance knowledge assets for our Knowledge Base (how-to guides, technical notes or similar).
  3. Contribute to key operational metrics: for example: NPS, initial response times, backlog management, Time to Resolution

Technical expertise:

  1. Previous experience with IBM Cognos Controller, Planning Analytics, or equivalent financial consolidation software (e.g. Anaplan, Tagetik, OneStream, Vena Financial Close Management, Oracle Financial Consolidation)
  2. Web-tier technologies
  3. Network administration
  4. Web applications
  5. Databases (SQL Server, Oracle, DB2)
  6. Strong troubleshooting and communication skills
  7. Demonstrated organization and time management skills
  8. Demonstrated verbal and written communication skills
  9. Must be self-motivated and disciplined
  10. Ability to recognize and prioritize critical tasks independently

 

1. Strong IBM Cognos Controller 10.x expertise
2. Strong SQL knowledge (Oracle, MS SQL Server)
3. Good grasp of data warehousing concepts with the Cognos schema
4. Preferred: knowledge of Cognos finance products - Planning/TM1
5. Excellent communication skills

Cognos Controller 10.4.2 on-premise experience is the key requirement for this Cognos Controller search.

 

Read more
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹30L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+6 more

Responsibilities:

  • Ensure and own Data integrity across distributed systems.
  • Extract, Transform and Load data from multiple systems for reporting into BI platform.
  • Create Data Sets and Data models to build intelligence upon.
  • Develop and own various integration tools and data points.
  • Hands-on development and/or design within the project in order to maintain timelines.
  • Work closely with the Project manager to deliver on business requirements OTIF (on time in full)
  • Understand the cross-functional business data points thoroughly and be SPOC for all data-related queries.
  • Work with both Web Analytics and Backend Data analytics.
  • Support the rest of the BI team in generating reports and analysis
  • Quickly learn and use Bespoke & third party SaaS reporting tools with little documentation.
  • Assist in presenting demos and preparing materials for Leadership.

 Requirements:

  • Strong experience in data warehouse modeling techniques and SQL queries
  • A good understanding of designing, developing, deploying, and maintaining Power BI report solutions
  • Ability to create KPIs, visualizations, reports, and dashboards based on business requirements
  • Knowledge and experience in prototyping, designing, and requirements analysis
  • Be able to implement row-level security on data and understand application security layer models in Power BI
  • Proficiency in writing DAX queries in Power BI Desktop.
  • Expertise in using advanced-level calculations on data sets
  • Experience in the fintech domain and stakeholder management.
Read more
Fintech Company
Agency job
via Jobdost by Sathish Kumar
Bengaluru (Bangalore)
2 - 4 yrs
₹7L - ₹12L / yr
Python
SQL
Data Warehouse (DWH)
Hadoop
Amazon Web Services (AWS)
+7 more

Purpose of Job:

Responsible for drawing insights from many sources of data to answer important business questions and help the organization make better use of data in their daily activities.

Job Responsibilities:

We are looking for a smart and experienced Data Engineer 1 who can work with a senior manager to
⮚ Build DevOps solutions and CI/CD pipelines for code deployment
⮚ Build unit test cases for APIs and code in Python (a minimal example follows this section)
⮚ Manage AWS resources including EC2, RDS, CloudWatch, Amazon Aurora, etc.
⮚ Build and deliver high-quality data architecture and pipelines to support business and reporting needs
⮚ Deliver on data architecture projects and implementation of next-generation BI solutions
⮚ Interface with other teams to extract, transform, and load data from a wide variety of data sources

Qualifications:
Education: MS/MTech/BTech graduates or equivalent with a focus on data science and quantitative fields (CS, Engineering, Math, Economics)
Work Experience: Proven 1+ years of experience in data mining (SQL, ETL, data warehouse, etc.) and using SQL databases

Skills
Technical Skills
⮚ Proficient in Python and SQL. Familiarity with statistics or analytical techniques
⮚ Data warehousing experience with Big Data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.)
⮚ Working knowledge of tools and utilities - AWS, DevOps with Git, Selenium, Postman, Airflow, PySpark
Soft Skills
⮚ Deep Curiosity and Humility
⮚ Excellent storyteller and communicator
⮚ Design Thinking
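As a minimal example of the Python unit-testing responsibility above, here is a pytest sketch; the function under test (normalize_amount) is hypothetical, standing in for real API/ETL code, and is defined inline to keep the sketch self-contained.

import pytest

def normalize_amount(raw: str) -> float:
    """Parse an amount like '1,234.50' into a float; reject negatives."""
    value = float(raw.replace(",", ""))
    if value < 0:
        raise ValueError("amount cannot be negative")
    return value

def test_normalize_plain():
    assert normalize_amount("42.10") == 42.10

def test_normalize_thousands_separator():
    assert normalize_amount("1,234.50") == 1234.50

def test_negative_rejected():
    with pytest.raises(ValueError):
        normalize_amount("-5")

Saved as test_transform.py, this runs with a bare pytest command and slots directly into a CI/CD pipeline stage.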

Read more
JetSynthesys Pvt. Ltd.

at JetSynthesys Pvt. Ltd.

1 recruiter
Agency job
via Jobdost by Mamatha A
Remote, Pune
5 - 7 yrs
₹12L - ₹16L / yr
Javascript
NodeJS (Node.js)
PHP
SQL
C#
+12 more
A leading marketing software company that provides mobile app developers a powerful set of solutions to grow their mobile apps. Its technology platform enables developers to market, monetize, analyze, and publish their apps. The company's first-party content includes over 200+ popular, engaging apps, and its technology brings that content to millions of users around the world. The company is headquartered in Palo Alto, California, with several offices globally.

The company is a Certified Great Place to Work, one of Inc.'s Best Workplaces and a recipient of the 2019 Glassdoor Top CEO employees' choice award. The San Francisco Business Times and Silicon Valley Business Journal awarded the company one of the Bay Area's Best Places to Work in 2019, 2020 and 2021, and the Workplace Wellness Award in 2019, which recognizes businesses that are leaders in improving worker well-being.

We are looking for a Senior Full Stack Engineer to join our team! As a Senior Full Stack Engineer, you will lead our dashboards and backend engineering team and help oversee our AWS Cloud infrastructure. You will help build marketing tools for our growth team, develop our web portal to interface with third-party developers, and support backend services and data pipelines for our games. If you are a driven, results-focused engineer looking to work with a fun, collaborative team - we'd love to talk to you!
 
Location / time zone preferences: Mostly overlap with West Coast / PST time zone
Language preferences: English B1-B2 or above
 

Responsibilities:

    • Take the lead in building tools to increase the productivity of our business and product teams
    • Build client facing portal to support the submission and integration of games from external developers
    • Collaborate with teams in a range of disciplines
    • Clearly communicate challenges and progress to stakeholders
    • Adopt and learn new technologies

Basic Qualifications:

    • 5+ years professional experience in software development and a BS or MS in Computer Science or related field
    • Solid understanding of Javascript, NodeJS, PHP, SQL, C#
    • Strong knowledge of AWS Cloud architecture, services, and DevOps
    • Adhere to software design patterns and have knowledge of algorithms
    • Experience with databases and database programming (MySQL, NoSQL, etc.)
    • Comfortable understanding and implementing REST APIs, knowledge of AJAX patterns and principles
    • In-depth knowledge of modern HTML/CSS
    • Strong understanding of web architecture, security, cookies, reverse-proxies
    • Have a solid knowledge of web debugging tools (Firebug or Chrome Developer Console)

Pluses:

    • Bonus points for data warehouse experience (Snowflake, Redshift)
    • Experience in game programming and Unity development
    • Knowledge of unit testing and test driven development
    • A passion for games (of any type) as well as a passion for code
    • Knowledge of mobile gaming metrics and the mobile gaming industry

Perks:

    • Free medical, dental, and vision insurance
    • Work from home stipend on each paycheck
    • Competitive Salary
    • Flexible Time Off - work hard and take time when you need it
Interested? Send us your resume and let's talk!

The company is an equal opportunity employer and considers qualified applicants without regard to race, gender, sexual orientation, gender identity or expression, genetic information, national origin, age, disability, medical condition, religion, marital status or veteran status, or any other basis protected by law.
Read more
Semperfi Solution

at Semperfi Solution

1 recruiter
Ambika Jituri
Posted by Ambika Jituri
Bengaluru (Bangalore)
0 - 1 yrs
₹3L - ₹6L / yr
Go Programming (Golang)
Ruby on Rails (ROR)
Ruby
Python
Java
+7 more

Job Description

Job Description SQL DBA - Trainee Analyst/Engineer

Experience : 0 to 1 Years

No. of Positions: 2

Job Location: Bangalore

Notice Period: Immediate / Max 15 Days

The candidate should have strong SQL knowledge. Here are a few points (a runnable illustration follows this list):

    • Implement and maintain the database design
    • Create database objects (tables, indexes, etc.)
    • Write database procedures, functions, and triggers
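A runnable, self-contained illustration of those three tasks, using Python's built-in sqlite3 module (trigger and procedure syntax differs slightly on SQL Server or Oracle, so treat this as a sketch of the concepts rather than vendor-specific DBA code):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (
    id     INTEGER PRIMARY KEY,
    name   TEXT NOT NULL,
    salary REAL NOT NULL
);
CREATE INDEX idx_employees_name ON employees(name);

CREATE TABLE salary_audit (
    emp_id     INTEGER,
    old_salary REAL,
    new_salary REAL,
    changed_at TEXT DEFAULT CURRENT_TIMESTAMP
);

-- Trigger: record every salary change in the audit table.
CREATE TRIGGER trg_salary_audit
AFTER UPDATE OF salary ON employees
BEGIN
    INSERT INTO salary_audit (emp_id, old_salary, new_salary)
    VALUES (OLD.id, OLD.salary, NEW.salary);
END;
""")

conn.execute("INSERT INTO employees (name, salary) VALUES ('Asha', 50000)")
conn.execute("UPDATE employees SET salary = 55000 WHERE name = 'Asha'")
print(conn.execute("SELECT * FROM salary_audit").fetchall())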

Good soft skills are a must (written and verbal communication)

Good team player

Ability to work in a 24x7 support model (rotation basis)

Strong fundamentals in algorithms, OOP, and data structures

Should be flexible to support multiple IT platforms

Analytical Thinking

 

Additional Information :

Functional Area: IT Software - DBA, Datawarehousing

Role Category: Admin/ Maintenance/ Security/ Datawarehousing

Role: DBA

 

Education :

B.Tech/ B.E

Skills

SQL DBA, IMPLEMENTATION, SQL, DBMS, DATA WAREHOUSING

Read more
Amagi Media Labs

at Amagi Media Labs

3 recruiters
Rajesh C
Posted by Rajesh C
Bengaluru (Bangalore), Chennai
12 - 15 yrs
₹50L - ₹60L / yr
Data Science
Machine Learning (ML)
ETL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+5 more
Job Title: Data Architect
Job Location: Chennai

Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a Data Architecture strategy across various Data Lake platforms. You will help develop reference architecture and roadmaps to build highly available, scalable and distributed data platforms using cloud-based solutions to process high-volume, high-velocity and wide-variety structured and unstructured data. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast’s industry-leading journalism.
Primary Responsibilities
The Data Architect is responsible for:
• Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
• Enterprise-scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration frameworks and highly scalable distributed systems using open-source and emerging data architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies is desirable.
• This role requires 15+ years of data solution architecture, design and development delivery experience.
• Solid experience in Agile methodologies (Kanban and SCRUM)

Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational databases and NoSQL databases on the cloud.
• Proven leadership skills; demonstrated ability to mentor, influence and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, Datastage etc.)/ELT and data integration technologies.
• Experience in any one of object-oriented programming (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data warehousing solutions; Data Lake knowledge is a mandate.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS etc.
• Understanding of Hadoop architecture and Hive SQL
• Knowledge of any one workflow orchestration tool
• Understanding of the Agile framework and delivery

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure to workflow orchestration like Airflow is a plus
● Exposure to any one of the NoSQL databases would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of digital web events, ad streams, context models

About Condé Nast

CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms - in other words, a staggering amount of user data. Condé Nast made the right move to invest heavily in understanding this data and formed a whole new Data team entirely dedicated to data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel process innovation, further content enrichment, and increase market revenue. The Data team aimed to create a company culture where data was the common language and facilitate an environment where insights shared in real-time could improve performance.

The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost online revenue. We are broadly divided into four groups - Data Intelligence, Data Engineering, Data Science, and Operations (including Product and Marketing Ops, Client Services) - along with Data Strategy and monetization. The teams built capabilities and products to create data-driven solutions for better audience engagement.

What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
Read more
TekSystems
Agency job
via NDSSG DataSync Pvt Ltd by Hamza Bootwala
Bengaluru (Bangalore), Hyderabad
4 - 8 yrs
₹10L - ₹20L / yr
WebFOCUS
Snow flake schema
Sybase
Data Warehouse (DWH)
Oracle
+2 more

Required:
1) WebFOCUS BI Reporting
2) WebFOCUS Administration
3) Sybase or Oracle or SQL Server or Snowflake
4) DWH skills

Nice to have:
1) Experience in SAP BO / Crystal report / SSRS / Power BI
2) Experience in Informix
3) Experience in ETL

Responsibilities:

• Technical knowledge regarding best practices of BI development/integration.
• The candidate must understand business processes, be a detail-oriented person, and quickly grasp new concepts.
• Additionally, the candidate will have strong presentation, interpersonal, software development, and work management skills.
• Strong advanced SQL programming skills are required
• Proficient in MS Word, Excel, Access, and PowerPoint
• Experience working with one or more BI reporting tools as Analyst/Developer.
• Knowledge of data mining techniques and procedures, and knowing when their use is appropriate
• Ability to present complex information in an understandable and compelling manner.
• Experience converting reports from one reporting tool to another

Read more
Bengaluru (Bangalore), UK
5 - 10 yrs
₹15L - ₹25L / yr
Data Visualization
PowerBI
ADF
Business Intelligence (BI)
PySpark
+11 more

Power BI Developer

A senior visualization engineer with 5 years' experience in Power BI, to develop and deliver solutions that enable delivery of information to audiences in support of key business processes. In addition, hands-on experience with Azure data services like ADF and Databricks is a must.

Ensure code and design quality through execution of test plans and assist in development of standards & guidelines working closely with internal and external design, business, and technical counterparts.

Candidates should have worked in agile development environments.

Desired Competencies:

  • Should have a minimum of 3 years of project experience using Power BI on the Azure stack.
  • Should have a good understanding and working knowledge of data warehousing and data modelling.
  • Good hands-on experience with Power BI
  • Hands-on experience with T-SQL/DAX/MDX/SSIS
  • Data warehousing on SQL Server (preferably 2016)
  • Experience in Azure Data Services - ADF, Databricks & PySpark
  • Manage own workload with minimum supervision.
  • Take responsibility of projects or issues assigned to them
  • Be personable, flexible and a team player
  • Good written and verbal communications
  • Have a strong personality who will be able to operate directly with users
Read more
Bengaluru (Bangalore)
8 - 15 yrs
₹12L - ₹16L / yr
MariaDB
Relational Database (RDBMS)
Databases
MySQL
Microsoft Windows Azure
+6 more
As a Senior Database Developer, you will design stable and reliable databases according to our company's needs. You will be responsible for planning, developing, testing, improving and maintaining new and existing databases to help users retrieve data effectively. As part of our IT team, you will work closely with developers to ensure system consistency. You will also collaborate with administrators and clients to provide technical support and identify new requirements. Communication and organization skills are key for this position, along with a problem-solution attitude. You get to work with some of the best minds in the industry at a place where opportunity lurks everywhere and in everything.
Responsibilities
Your responsibilities are as follows.
• Working with cross-functional teams to develop robust solutions aligned with the business needs
• Maintaining communication and providing regular updates to the development team, ensuring solutions provided are fit for purpose
• Training other developers in the team on best practices and technologies
• Troubleshooting issues in the production environment, understanding the root cause and developing robust solutions
• Developing, implementing, and maintaining solutions that are both reliable and scalable
• Capture data analysis requirements effectively and represent them formally and visually through our data models
• Maintaining effective database performance by identifying and resolving production and application development problems
• Optimise the integration and installation of new releases
• Monitoring system performance; test, troubleshoot and integrate new features
• Proactively recommending solutions to improve new and existing database systems
• Providing database support by coding utilities, resolving user questions and problems
• Ensuring compliance with database implementation procedures
• Performing code and design reviews as per the code review process
• Installing and organising information systems to guarantee company functionality
• Preparing accurate documentation and reports
• Migration of data from legacy systems to new solutions
• Stakeholder analysis of our current clients, company operations and applications, and programming requirements
• Collaborate with functional teams across the business to deliver end-to-end products and features enabling enhanced performance
Required Qualifications
We are looking for individuals who are curious, excited about learning, and navigating through the uncertainties and complexities that are associated with a growing company. Some qualifications that we think would help you thrive in this role are:
• Minimum 8 years of experience as a Database Administrator
• Strong knowledge of data structures and database systems
• In-depth expertise and hands-on experience with the MySQL/MariaDB database management system
• In-depth expertise and hands-on experience in database design, data maintenance, database security, data analysis and mining
• Hands-on experience with at least one web-hosting platform such as Microsoft Azure, AWS (Amazon Web Services), etc.
• Strong understanding of security principles and how they apply to web applications
• Basic knowledge of networking; desirable knowledge of business intelligence
• Desirable knowledge of data architectures related to data warehouse implementations
• Strong interpersonal skills and a desire to work collaboratively to achieve objectives
• Understanding of Agile methodologies
• Bachelor/Masters of CS/IT Engineering, BCA/MCA, B Sc/M Sc in CS/IT
Preferred Qualifications
• Sense of ownership and pride in your performance and its impact on the company's success
• Critical thinker and team player
• Excellent troubleshooting and problem-solving skills
• Excellent analytical and strong organisational skills
• Good time-management skills
• Great interpersonal and communication skills
Read more
Discite Analytics Private Limited
Uma Sravya B
Posted by Uma Sravya B
Ahmedabad
4 - 7 yrs
₹12L - ₹20L / yr
Hadoop
Big Data
Data engineering
Spark
Apache Beam
+13 more
Responsibilities:
1. Communicate with the clients and understand their business requirements.
2. Build, train, and manage your own team of junior data engineers.
3. Assemble large, complex data sets that meet the client’s business requirements.
4. Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, including the cloud.
6. Assist clients with data-related technical issues and support their data infrastructure requirements.
7. Work with data scientists and analytics experts to strive for greater functionality.

Skills required: (experience with at least most of these)
1. Experience with Big Data tools-Hadoop, Spark, Apache Beam, Kafka etc.
2. Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
3. Experience in ETL and Data Warehousing.
4. Experience and firm understanding of relational and non-relational databases like MySQL, MS SQL Server, Postgres, MongoDB, Cassandra etc.
5. Experience with cloud platforms like AWS, GCP and Azure.
6. Experience with workflow management using tools like Apache Airflow.
Read more
Mobile Programming India Pvt Ltd

at Mobile Programming India Pvt Ltd

1 video
17 recruiters
Inderjit Kaur
Posted by Inderjit Kaur
Bengaluru (Bangalore), Chennai, Pune, Gurugram
4 - 8 yrs
₹9L - ₹14L / yr
ETL
Data Warehouse (DWH)
Data engineering
Data modeling
BRIEF JOB RESPONSIBILITIES:

• Responsible for designing, deploying, and maintaining analytics environment that processes data at scale
• Contribute design, configuration, deployment, and documentation for components that manage data ingestion, real time streaming, batch processing, data extraction, transformation, enrichment, and loading of data into a variety of cloud data platforms, including AWS and Microsoft Azure
• Identify gaps and improve the existing platform to improve quality, robustness, maintainability, and speed
• Evaluate new and upcoming big data solutions and make recommendations for adoption to extend our platform to meet advanced analytics use cases, such as predictive modeling and recommendation engines
• Data modelling and data warehousing at cloud scale using cloud-native solutions
• Perform development, QA, and DevOps roles as needed to ensure total end-to-end responsibility of solutions

COMPETENCIES
• Experience building, maintaining, and improving Data Models / Processing Pipeline / routing in large scale environments
• Fluency in common query languages, API development, data transformation, and integration of data streams
• Strong experience with large dataset platforms (e.g. Amazon EMR, Amazon Redshift, AWS Lambda & Fargate, Amazon Athena, Azure SQL Database, Azure Database for PostgreSQL, Azure Cosmos DB, Databricks)
• Fluency in multiple programming languages, such as Python, Shell Scripting, SQL, Java, or similar languages and tools appropriate for large scale data processing.
• Experience with acquiring data from varied sources such as: API, data queues, flat-file, remote databases
• Understanding of traditional Data Warehouse components (e.g. ETL, Business Intelligence Tools)
• Creativity to go beyond current tools to deliver the best solution to the problem
Read more
Velocity Services
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹35L / yr
Data engineering
Data Engineer
Big Data
Big Data Engineer
Python
+10 more

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

Technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, reactive programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of records into the data warehouse

  • Implement Data warehouse entities with common re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)

 

What To Bring

  • 3+ years of software development experience, a startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 2+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced with the formulation of ideas; building proof-of-concept (POC) and converting them to production-ready projects

  • Experience building and deploying applications on on-premise and cloud-based (AWS or Google Cloud) infrastructure

  • Basic understanding of Kubernetes & Docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

 

 

Read more
Abu Dhabi, Dubai
6 - 12 yrs
₹18L - ₹25L / yr
PySpark
Big Data
Spark
Data Warehouse (DWH)
SQL
+2 more
Must-Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL (see the sketch after this list)
• Good experience in SQL DBs - able to write queries of fair complexity.
• Excellent experience in Big Data programming for data transformation and aggregations
• Good at ELT architecture: business-rules processing and data extraction from the Data Lake into data streams for business consumption.
• Good customer communication skills.
• Good analytical skills
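A hedged sketch of the ELT pattern named above: data already loaded to the lake is transformed in place with PySpark and published as a business-facing extract. The storage paths, columns, and the 10,000 threshold are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lake-elt-extract").getOrCreate()

# Hypothetical lake path (ADLS Gen2-style URI).
tx = spark.read.parquet(
    "abfss://lake@account.dfs.core.windows.net/raw/transactions/")

# Business rule: per-segment counts of settled and high-value transactions.
extract = (
    tx.filter(F.col("status") == "SETTLED")
      .withColumn("high_value", F.col("amount") > 10_000)
      .groupBy("segment")
      .agg(F.count("*").alias("tx_count"),
           F.sum(F.col("high_value").cast("int")).alias("high_value_count"))
)

# Publish the curated extract for business consumption.
extract.write.mode("overwrite").parquet(
    "abfss://lake@account.dfs.core.windows.net/curated/segment_summary/")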
 
 
Technology Skills (Good to Have):
  • Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
  • Experience in migrating on-premise data warehouses to data platforms on the Azure cloud.
  • Designing and implementing data engineering, ingestion, and transformation functions
  • Azure Synapse or Azure SQL Data Warehouse
  • Spark on Azure, as available in HDInsight and Databricks
Read more
Gurugram, Pune, Mumbai, Bengaluru (Bangalore), Chennai, Nashik
4 - 12 yrs
₹12L - ₹15L / yr
Data engineering
Data modeling
data pipeline
Data integration
Data Warehouse (DWH)
+12 more

 

 

Designation – Deputy Manager - TS


Job Description

  1. Total of 8-9 years of development experience in Data Engineering (B1/BII role)
  2. Minimum of 4-5 years in AWS data integrations, with very good data modelling skills
  3. Should be very proficient in end-to-end AWS data solution design, which includes not only strong data ingestion and integration skills (both data at rest and data in motion) but also complete DevOps knowledge
  4. Should have experience in delivering at least 4 Data Warehouse or Data Lake solutions on AWS
  5. Should have very strong experience with Glue, Lambda, Data Pipeline, Step Functions, RDS, CloudFormation, etc. (a small orchestration sketch follows this list)
  6. Strong Python skills
  7. Should be an expert in cloud design principles, performance tuning and cost modelling; AWS certifications will be an added advantage
  8. Should be a team player with excellent communication, able to manage their work independently with minimal or no supervision
  9. Life Science & Healthcare domain background will be a plus
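As a small, hedged illustration of item 5 (orchestrating AWS data services from Python), the sketch below starts a Glue ETL job with boto3 and checks its state; job, bucket, and region names are placeholders, and in production a Step Functions state machine would usually own the polling.

import boto3

glue = boto3.client("glue", region_name="ap-south-1")

# Kick off a Glue ETL job with runtime arguments.
run = glue.start_job_run(
    JobName="curate-claims-data",  # placeholder job name
    Arguments={"--source_path": "s3://raw-bucket/claims/",
               "--target_path": "s3://curated-bucket/claims/"},
)

# Check the run state once; real orchestration would poll or use events.
status = glue.get_job_run(JobName="curate-claims-data", RunId=run["JobRunId"])
print(status["JobRun"]["JobRunState"])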

Qualifications

BE/BTech/ME/MTech

 

Read more
DataMetica

at DataMetica

1 video
7 recruiters
Sayali Kachi
Posted by Sayali Kachi
Pune, Hyderabad
6 - 12 yrs
₹11L - ₹25L / yr
PL/SQL
MySQL
SQL server
SQL
Linux/Unix
+4 more

We at Datametica Solutions Private Limited are looking for an SQL Lead / Architect who has a passion for the cloud, with knowledge of different on-premises and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks and the like.

Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.



Job Description :

Experience: 6+ Years

Work Location: Pune / Hyderabad



Technical Skills :

  • Good programming experience as an Oracle PL/SQL, MySQL, and SQL Server developer
  • Knowledge of database performance tuning techniques
  • Rich experience in database development
  • Experience in designing and implementing business applications using the Oracle relational database management system
  • Experience in developing complex database objects like stored procedures, functions, packages, and triggers using SQL and PL/SQL

Required Candidate Profile :

  • Excellent communication, interpersonal, and analytical skills, and a strong ability to drive teams
  • Analyzes data requirements and data dictionaries for moderate to complex projects
  • Leads data-model-related analysis discussions while collaborating with Application Development teams, Business Analysts, and Data Analysts during joint requirements analysis sessions
  • Translate business requirements into technical specifications with an emphasis on highly available and scalable global solutions
  • Stakeholder management and client engagement skills
  • Strong communication skills (written and verbal)

About Us!

A global leader in Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud by leveraging Automation.

We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum platforms, along with ETLs like Informatica, Datastage, AbInitio and others, to cloud-based data warehousing, with other capabilities in data engineering, advanced analytics solutions, data management, data lakes and cloud optimization.

Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.

We have our own products!

Eagle Data warehouse Assessment & Migration Planning Product

Raven Automated Workload Conversion Product

Pelican Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.



Why join us!

Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over these years are the key factors in achieving our success.



Benefits we Provide!

Working with Highly Technical and Passionate, mission-driven people

Subsidized Meals & Snacks

Flexible Schedule

Approachable leadership

Access to various learning tools and programs

Pet Friendly

Certification Reimbursement Policy



Check out more about us on our website below!

www.datametica.com

Read more
DataMetica

at DataMetica

1 video
7 recruiters
Shivani Mahale
Posted by Shivani Mahale
Pune
4 - 7 yrs
₹5L - ₹15L / yr
ETL
Informatica PowerCenter
Teradata
Data Warehouse (DWH)
IBM InfoSphere DataStage
Requirement -
  • Must have 4 to 7 years of experience in ETL design and development using Informatica components.
  • Should have extensive knowledge of Unix shell scripting.
  • Understanding of DW principles (fact and dimension tables, dimensional modelling, and data warehousing concepts).
  • Research, develop, document, and modify ETL processes as per data architecture and modelling requirements.
  • Ensure appropriate documentation for all new development and modifications of ETL processes and jobs.
  • Should be good at writing complex SQL queries.
Opportunities -
  • Selected candidates will be provided training opportunities in one or more of the following: Google Cloud, AWS, DevOps tools, and Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume and Kafka, and would get the chance to be part of enterprise-grade implementations of Cloud and Big Data systems
  • Will play an active role in setting up the modern data platform based on Cloud and Big Data
  • Would be part of teams with rich experience in various aspects of distributed systems and computing.
Read more