29+ Data Warehousing Jobs in India
A leading Data & Analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage. We are the partner of choice for enterprises on their digital transformation journey. Our teams offer solutions and services at the intersection of Advanced Data, Analytics, and AI.
Role: MuleSoft Developer.
Skills: MuleSoft, Snowflake, Data Lineage experience using Collibra, Data Warehousing
Location: Bangalore/Mangalore (Hybrid)
Notice Period - Immediate to 15 days
Requirements:
• 5+ years of experience in MuleSoft
• Strong experience in Snowflake
• Data Lineage experience using Collibra
• Data Warehousing: Experience developing data warehouse, data mart, and data lake solutions
• Problem-Solving: Strong analytical skills and the ability to combine data from different sources
• Communication: Excellent communication skills to work effectively with cross-functional teams.
• Good to have – Experience with open-source data ingestion
Skills & Experience:
❖ 5+ years of experience as a Data Engineer
❖ Hands-on, in-depth experience with Star/Snowflake schema design, data modeling, data pipelining, and MLOps (a schema sketch follows this list).
❖ Experience with data warehouse technologies (e.g., Snowflake, AWS Redshift)
❖ Experience with AWS data pipelines (Lambda, AWS Glue, Step Functions, etc.)
❖ Proficient in SQL
❖ At least one major programming language (Python / Java)
❖ Experience with Data Analysis Tools such as Looker or Tableau
❖ Experience with Pandas, NumPy, scikit-learn, and Jupyter notebooks preferred
❖ Familiarity with Git, GitHub, and JIRA.
❖ Ability to locate & resolve data quality issues
❖ Ability to demonstrate end-to-end data platform support experience
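For candidates brushing up, here is a minimal, illustrative sketch of the star-schema modeling this role calls for. It uses sqlite3 only so it runs anywhere; all table and column names are hypothetical, and a real warehouse would be Snowflake or Redshift as listed above.

```python
# Minimal star-schema sketch (illustrative only; table and column names
# are hypothetical, not taken from the posting).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension tables describe the "who/what/when" of each fact.
cur.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, full_date TEXT, month TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);

-- The fact table holds measures plus foreign keys to the dimensions.
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    units_sold INTEGER,
    revenue REAL
);
""")

cur.execute("INSERT INTO dim_date VALUES (20240101, '2024-01-01', '2024-01')")
cur.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
cur.execute("INSERT INTO fact_sales VALUES (20240101, 1, 3, 29.97)")

# A typical star-schema query: join the fact to its dimensions and aggregate.
cur.execute("""
SELECT d.month, p.category, SUM(f.revenue) AS total_revenue
FROM fact_sales f
JOIN dim_date d    ON f.date_key = d.date_key
JOIN dim_product p ON f.product_key = p.product_key
GROUP BY d.month, p.category
""")
print(cur.fetchall())  # [('2024-01', 'Hardware', 29.97)]
```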
Other Skills:
❖ Individual contributor
❖ Hands-on with programming
❖ Strong analytical and problem-solving skills with meticulous attention to detail
❖ A positive mindset and can-do attitude
❖ A great team player
❖ An eye for detail
❖ Looks for opportunities to simplify and automate tasks and build reusable components.
❖ Ability to judge suitability of new technologies for solving business problems
❖ Build strong relationships with analysts, business, and engineering stakeholders
❖ Task Prioritization
❖ Familiar with agile methodologies.
❖ Fintech or Financial services industry experience
❖ Eagerness to learn about the Private Equity/Venture Capital ecosystem and the associated secondary market
Responsibilities:
o Design, develop and maintain a data platform that is accurate, secure, available, and fast.
o Engineer efficient, adaptable, and scalable data pipelines to process data.
o Integrate and maintain a variety of data sources: different databases, APIs, SaaS products, files, logs, events, etc.
o Create standardized datasets to service a wide variety of use cases.
o Develop subject-matter expertise in tables, systems, and processes.
o Partner with product and engineering to ensure product changes integrate well with the data platform.
o Partner with diverse stakeholder teams, understand their challenges, and empower them with data solutions to meet their goals.
o Perform data quality checks on data sources, and automate and maintain a quality control capability.
At Hopscotch
About the role:
Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science, and individual product teams to specify, validate, prototype, scale, and deploy data pipeline features and data architecture.
Here’s what will be expected out of you:
➢ Ability to work in a fast-paced startup mindset. Should be able to manage all aspects of data extraction, transfer, and load activities.
➢ Develop data pipelines that make data available across platforms.
➢ Should be comfortable executing ETL (Extract, Transform, and Load) processes, including data ingestion, data cleaning, and curation into a data warehouse, database, or data platform (a sketch follows this list).
➢ Work on various aspects of the AI/ML ecosystem – data modeling, data and ML pipelines.
➢ Work closely with DevOps and senior architects to come up with scalable system and model architectures for enabling real-time and batch services.
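As a rough illustration of the ETL expectation above, here is a minimal extract-clean-load sketch in Python with pandas. The file name, column names, and SQLite target are hypothetical stand-ins; a production pipeline would load into the team's actual warehouse.

```python
# Illustrative ETL sketch only; file, column, and table names are
# hypothetical, not taken from the posting.
import pandas as pd
from sqlalchemy import create_engine

def extract(path: str) -> pd.DataFrame:
    # Extract: ingest raw order data from a source file.
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Transform: basic cleaning and curation before warehousing.
    df = df.drop_duplicates(subset=["order_id"])
    df = df.dropna(subset=["order_id", "amount"])
    df["order_date"] = pd.to_datetime(df["order_date"])
    df["amount"] = df["amount"].astype(float)
    return df

def load(df: pd.DataFrame, engine) -> None:
    # Load: append curated rows into a warehouse table.
    df.to_sql("orders_curated", engine, if_exists="append", index=False)

if __name__ == "__main__":
    engine = create_engine("sqlite:///warehouse.db")  # stand-in for Redshift etc.
    load(transform(extract("raw_orders.csv")), engine)
```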
What we want:
➢ 5+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs.
➢ Well versed in the concepts of data warehousing, data modelling, and/or data analysis.
➢ Experience (2+ years) using and building pipelines and performing ETL with industry-standard best practices on Redshift.
➢ Ability to troubleshoot and solve performance issues with data ingestion, data processing & query execution on Redshift.
➢ Good understanding of orchestration tools like Airflow.
➢ Strong Python and SQL coding skills.
➢ Strong experience in distributed systems like Spark.
➢ Experience with AWS Data and ML technologies (AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, Lambda, etc.).
➢ Solid hands-on experience with various data extraction techniques like CDC or time/batch-based extraction and the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near-real-time and batch data extraction (see the sketch below).
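To illustrate the CDC extraction mentioned in the last item, here is a hedged sketch of consuming Debezium change events from Kafka with the kafka-python client. The topic name, broker address, and downstream handling are assumptions for illustration, not details from the posting.

```python
# Hedged sketch of consuming Debezium CDC events from Kafka; topic name,
# broker address, and handling logic are hypothetical.
import json
from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "dbserver1.public.orders",            # Debezium topic: server.schema.table
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    if event is None:          # tombstone record emitted after a delete
        continue
    payload = event.get("payload", event)  # envelope may or may not wrap payload
    op = payload.get("op")     # 'c' = create, 'u' = update, 'd' = delete, 'r' = snapshot read
    if op in ("c", "r"):
        row = payload["after"]
        # upsert row into the warehouse staging table here
    elif op == "u":
        before, after = payload["before"], payload["after"]
        # apply the update downstream
    elif op == "d":
        row = payload["before"]
        # mark the row deleted in the warehouse
```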
Note: Experience at product-based or e-commerce companies is an added advantage.
Description
About us
Welcome to Decision Foundry!
We are both a high growth startup and one of the longest tenured Salesforce Marketing Cloud Implementation Partners in the ecosystem. Forged from a 19-year-old web analytics company, Decision Foundry is the leader in Salesforce intelligence solutions.
We win as an organization through our core tenets. They include:
- One Team. One Theme.
- We sign it. We deliver it.
- Be Accountable and Expect Accountability.
- Raise Your Hand or Be Willing to Extend It.
Requirements
• Strong understanding of data management principles and practices (Preferred experience: AWS Redshift).
• Experience with Tableau server administration, including user management and permissions (preferred, not mandatory).
• Ability to monitor alerts and application logs for data processing issues and troubleshooting.
• Ability to handle and monitor support tickets queues and act accordingly based on SLAs and priority.
• Ability to work collaboratively with cross-functional teams, including Data Engineers and BI team.
• Strong analytical and problem-solving skills.
• Familiar with data warehousing concepts and ETL processes.
• Experience with SQL, dbt, and database technologies such as Redshift, Postgres, MongoDB, etc.
• Familiar with data integration tools such as Fivetran or Funnel.io
• Familiar with programming languages such as Python.
• Familiar with cloud-based data technologies such as AWS.
• Experience with data ingestion and orchestration tools such as AWS Glue.
• Excellent communication and interpersonal skills.
• Should possess 2+ years of experience.
Role: Project Manager
Experience: 8-10 Years
Location: Mumbai
Company Profile:
Exponentia.ai is an AI tech organization with a presence across India, Singapore, the Middle East, and the UK. We are an innovative and disruptive organization, working on cutting-edge technology to help our clients transform into the enterprises of the future. We provide artificial intelligence-based products/platforms capable of automated cognitive decision-making to improve productivity, quality, and economics of the underlying business processes. Currently, we are rapidly expanding across machine learning, Data Engineering and Analytics functions. Exponentia.ai has developed long-term relationships with world-class clients such as PayPal, PayU, SBI Group, HDFC Life, Kotak Securities, Wockhardt and Adani Group amongst others.
One of the top partners of Databricks, Azure, Cloudera (leading analytics player) and Qlik (leader in BI technologies), Exponentia.ai has recently been awarded the ‘Innovation Partner Award’ by Qlik and "Excellence in Business Process Automation Award" (IMEA) by Automation Anywhere.
Get to know more about us at http://www.exponentia.ai and https://in.linkedin.com/company/exponentiaai
Role Overview:
· The project manager will oversee and be responsible for the successful delivery of a range of projects in Business Intelligence, Data Warehousing, and Analytics/AI-ML.
· The project manager will manage the project and lead teams of BI engineers, data engineers, data scientists, and application developers.
Job Responsibilities:
· Effort estimation, creating a project plan, planning milestones and activities, and tracking progress.
· Identify risks and issues and come up with a mitigation plan.
· Status reporting to both internal and external stakeholders.
· Communicate with all stakeholders.
· Manage end-to-end project lifecycle - requirements gathering, design, development, testing and go-live.
· Manage end-to-end BI or data warehouse projects.
· Must have experience in running Agile-based project development.
Technical skills
· Experience in Business Intelligence, Data Warehousing, or Analytics projects.
· Understand data lake and data warehouse solutions including ETL pipelines.
· Good to have - Knowledge of Azure Blob Storage, Azure Data Factory, and Synapse Analytics.
· Good to have - Knowledge of Qlik Sense or Power BI
· Good to have - Certification in PMP/PRINCE2/Agile project management.
· Excellent written and verbal communication skills.
Education:
MBA, B.E., B.Tech., or MCA degree
Designation: Senior - DBA
Experience: 6-9 years
CTC: INR 17-20 LPA
Night Allowance: INR 800/Night
Location: Hyderabad (Hybrid)
Notice Period: NA
Shift Timing : 6:30 pm to 3:30 am
Openings: 3
Roles and Responsibilities:
As a Senior Database Administrator, you will be responsible for the physical design, development, administration, and optimization of properly engineered database systems to meet agreed business and technical requirements.
The candidate will work as part of (but not limited to) the onsite/offsite DBA group on the administration and management of databases in Dev, Stage, and Production environments.
Performance tuning of database schemas, stored procedures, etc.
Providing technical input on the setup and configuration of database servers and the SAN disk subsystem on all database servers.
Troubleshooting and handling all database-related issues and tracking them through to resolution.
Proactive monitoring of databases from both a performance and a capacity management perspective.
Performing database maintenance activities such as backup/recovery and rebuilding and reorganizing indexes.
Ensuring that all database releases are properly assessed and measured from a functionality and performance perspective.
Ensuring that all databases are up to date with the latest service packs, patches, and security fixes.
Take ownership and ensure high-quality, timely delivery of projects on hand.
Collaborate with application/database developers, quality assurance, and operations/support staff.
Will help manage large, high-transaction-rate SQL Server production environments.
Eligibility:
Bachelor's/Master's degree (BE/BTech/MCA/MTech/MS)
6-8 years of solid experience in SQL Server 2016/2019 database administration and maintenance on Azure and AWS cloud.
Experience handling and managing large SQL Server databases (greater than 200 GB) in a real-time production environment.
Experience troubleshooting and resolving database integrity issues, performance issues, blocking/deadlocking issues, connectivity issues, data replication issues, etc.
Experience configuring and troubleshooting SQL Server HA.
Ability to detect and troubleshoot database-related CPU, memory, I/O, disk space, and other resource contention issues.
Experience with database maintenance activities such as backup/recovery, capacity monitoring/management, and Azure Backup Services.
Experience with HA/failover technologies such as clustering, SAN replication, log shipping, and mirroring.
Experience collaborating with development teams on physical database design activities and performance tuning.
Experience in managing and making software deployments/changes in real-time production environments.
Ability to work on multiple projects at one time with minimal supervision and ensure high-quality, timely delivery.
Knowledge of tools like SQL LiteSpeed, SQL Diagnostic Manager, and AppDynamics.
Strong understanding of data warehousing concepts and SQL Server architecture.
Certified DBA, proficient in T-SQL and in various storage technologies such as ASM, SAN, NAS, RAID, and multipathing.
Strong analytical and problem-solving skills; proactive, independent, with a proven ability to work under tight targets and pressure.
Experience working in a highly regulated environment such as financial services institutions.
Expertise in SSIS and SSRS.
Skills:
SSIS
SSRS
Technical & Business Expertise:
- Hands-on integration experience in SSIS/MuleSoft
- Hands-on experience with Azure Synapse
- Proven advanced-level database development experience in SQL Server
- Proven advanced-level understanding of data lakes
- Proven intermediate-level proficiency in Python or a similar programming language
- Intermediate understanding of Cloud Platforms (GCP)
- Intermediate understanding of Data Warehousing
- Advanced understanding of source control (GitHub)
• Problem Solving: Resolving production issues to fix P1-P4 service issues, problems relating to introducing new technology, and major issues in the platform and/or service.
• Software Development Concepts: Understands and is experienced with a wide range of programming concepts, and is aware of and has applied a range of algorithms.
• Commercial & Risk Awareness: Able to understand and evaluate both obvious and subtle commercial risks, especially in relation to a programme.
Experience you would be expected to have
• Cloud: experience with one of the following cloud vendors: AWS, Azure or GCP
• GCP: Experience preferred, but willingness to learn is essential.
• Big Data: Experience with Big Data methodology and technologies
• Programming: Python or Java, having worked with data (ETL).
• DevOps: Understands how to work in a DevOps and agile way (versioning, automation, defect management) – mandatory.
• Agile methodology: knowledge of Jira.
About our Client :-
Our Client is a global data and measurement-driven media agency whose mission is to make brands more valuable to the world. Clients include Google, Flipkart, NBCUniversal, L'Oréal and the Financial Times. The agency is more than 2,000 people strong, manages $4.5B in annualized media spend, and deploys campaigns in 121 markets via 22 offices in APAC, EMEA and the Americas.
About the role :-
Accountable for quantifying and measuring the success of our paid media campaigns and for delivering insights that enable us to innovate the work we deliver at MFG. Leading multi-product projects, developing best practices, being the main point of contact for other teams and direct line management for multiple team members.
Some of the things we’d like you to do -
● Build a deep understanding of marketing plans and their objectives to help Account teams (Activation, Planning, etc) build comprehensive measurement, and test & learn plans
● Play an instrumental role in evolving and designing new, innovative measurement tools, managing the process through to delivery and taking ownership of global roll-out.
● Recruit, manage and mentor analytical resource(s), ensuring the efficient flow of work through the team, the timely delivery of high-quality outputs and their continuing development as professionals
● Lead the creation of clear, robust and thought-provoking campaign reviews and insights
● Work with Account teams (Activation, Planning, etc.) to help define the right questions and the correct metrics for quantifying campaign performance.
● Help deliver “best in class” analytical capabilities across the agency with the wider Analytics team, including the use of new methods, techniques, tools, and systems.
● Develop innovative marketing campaigns and assist clients to define objectives
● Develop deep understanding of marketing platform testing and targeting abilities, and act in a consultative capacity in their implementation
● Provide hands-on leadership, mentorship, and coaching in the expert delivery of data strategies, AdTech solutions, audiences solutions and data management solutions to our clients
● Leading stakeholder management on certain areas of the client portfolio
● Coordination and communication with 3rd party vendors to critically assess new/bespoke measurement solutions. Includes development and management of contracts and SOWs.
A bit about yourself -
● 8+ years of experience in a data & insight role; practical experience on how analytical techniques/models are used in marketing. Previous agency, media, or consultancy background is desirable.
● A proven track record of working with a diverse array of clients to solve complex problems and deliver demonstrable business success, including (but not limited to) the development of compelling and sophisticated data strategies and AdTech/martech strategies to enable marketing objectives.
● Ideally you have worked with Ad Platforms, DMPs, CDPs, Clean Rooms, Measurement Platforms, Business Intelligence Tools, Data Warehousing and Big Data Solutions to some degree
● 3+ years of management experience and ability to delegate effectively
● Proficiency with systems such as SQL, Social Analytics tools, Python, and ‘R’
● Understanding of measurement for both Direct Response and Brand Awareness campaigns (desired)
● Excellent at building and presenting data in a visually engaging and insightful manner that cuts through the noise
● Strong organizational and project management skills including team resourcing
● Strong understanding of what data points can be collected and analyzed in a digital campaign, and how each data point should be analyzed
● Established and professional communication, presentation, and motivational skills
Deep-Rooted.Co is on a mission to get fresh, clean, community (local farmer) produce from harvest to your home with a promise of quality first! Our values are rooted in trust, convenience, and dependability, with a bunch of learning & fun thrown in.
Founded in Bangalore by Arvind, Avinash, Guru, and Santosh, and backed by our investors Accel, Omnivore, and Mayfield, we have raised $7.5 million to date in Seed, Series A, and debt funding. Our brand Deep-Rooted.Co, launched in August 2020, was a first of its kind in India's Fruits & Vegetables (F&V) space. It is present in Bangalore and Hyderabad and on a journey of expansion to newer cities, managed seamlessly through a tech platform designed and built to transform the Agri-Tech sector.
Deep-Rooted.Co is committed to building a diverse and inclusive workplace and is an equal-opportunity employer.
How is this possible? It’s because we work with smart people. We are looking for Engineers in Bangalore to work with the Product Leader (Founder) (https://www.linkedin.com/in/gururajsrao/) and the CTO (https://www.linkedin.com/in/sriki77/). This is a meaningful project for us, and we are sure you will love it as it touches everyday life and is fun. This will be a virtual consultation.
We want to start the conversation about the project we have for you, but before that, we want to connect with you to know what’s on your mind. Do drop a note sharing your mobile number and letting us know when we can catch up.
Purpose of the role:
* As a startup, we have data distributed across various sources like Excel, Google Sheets, and databases. As we grow, we need swift decision-making based on all the data that exists. You will help us bring this data together and put it into a data model that can be used in business decision-making.
* Handle the nuances of the Excel and Google Sheets APIs (a Sheets ingestion sketch follows below).
* Pull data in and manage its growth, freshness, and correctness.
* Transform data into a format that aids easy decision-making for Product, Marketing, and Business Heads.
* Understand the business problem, solve it using technology, and take it to production: no hand-offs, the full path to production is yours.
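As a hedged illustration of the Google Sheets ingestion described above, here is a minimal sketch using the gspread client and pandas. The sheet name, worksheet, and credentials file are hypothetical.

```python
# Hedged sketch of pulling a Google Sheet into pandas; the sheet name,
# worksheet, and credentials path are hypothetical stand-ins.
import gspread
import pandas as pd

gc = gspread.service_account(filename="service_account.json")
worksheet = gc.open("daily_sales_tracker").worksheet("Sheet1")

# get_all_records() returns a list of dicts keyed by the header row.
df = pd.DataFrame(worksheet.get_all_records())

# Light curation before handing to the decision-making data model.
df = df.dropna(how="all").drop_duplicates()
print(df.head())
```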
Technical expertise:
* Good knowledge of and experience with programming languages: Java, SQL, Python.
* Good knowledge of data warehousing and data architecture.
* Experience with data transformations and ETL.
* Experience with API tools and more closed systems like Excel, Google Sheets, etc.
* Experience with the AWS cloud platform and Lambda.
* Experience with distributed data processing tools.
* Experience with container-based deployments on cloud.
Skills:
Java, SQL, Python, Data Build Tool, Lambda, HTTP, Rest API, Extract Transform Load.
-5+ years of hands-on experience with penetration testing would be an added plus
-Strong Knowledge of programming or scripting languages, such as Python, PowerShell, Bash
-Industry certifications like OSCP and AWS are highly desired for this role
-Well-rounded knowledge in security tools, software and processes
Required Skills:
- Proven work experience as an Enterprise / Data / Analytics Architect - Data Platform in HANA XSA, XS, Data Intelligence and SDI
- Able to work on new and existing architecture decisions in HANA XSA, XS, Data Intelligence, and SDI
- Well versed with data architecture principles, software / web application design, API design, UI / UX capabilities, XSA / Cloud foundry architecture
- In-depth understanding of database structure (HANA in-memory) principles.
- In-depth understanding of ETL solutions and data integration strategy.
- Excellent knowledge of Software and Application design, API, XSA, and microservices concepts
Roles & Responsibilities:
- Advise on and ensure compliance with the defined data architecture principles.
- Identify new technology updates and development tools, including new releases/upgrades/patches, as required.
- Analyze technical risks and advise on risk mitigation strategy.
- Advise on and ensure compliance with existing and newly developed data and reporting standards, including naming conventions.
The time window is ideally AEST (8 am to 5 pm), which means starting at 3:30 am IST. We understand this can be very early for an SME supporting from India; hence, we can consider candidates who can support from at least 7 am IST (earlier is possible).
We are looking for a Senior Data Engineer to join the Customer Innovation team, who will be responsible for acquiring, transforming, and integrating customer data onto our Data Activation Platform from customers’ clinical, claims, and other data sources. You will work closely with customers to build data and analytics solutions to support their business needs, and be the engine that powers the partnership that we build with them by delivering high-fidelity data assets.
In this role, you will work closely with our Product Managers, Data Scientists, and Software Engineers to build the solution architecture that will support customer objectives. You'll work with some of the brightest minds in the industry, work with one of the richest healthcare data sets in the world, use cutting-edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate is someone who:
- Has healthcare experience and is passionate about helping heal people,
- Loves working with data,
- Has an obsessive focus on data quality,
- Is comfortable with ambiguity and making decisions based on available data and reasonable assumptions,
- Has strong data interrogation and analysis skills,
- Defaults to written communication and delivers clean documentation, and,
- Enjoys working with customers and problem solving for them.
A day in the life at Innovaccer:
- Define the end-to-end solution architecture for projects by mapping customers’ business and technical requirements against the suite of Innovaccer products and Solutions.
- Measure and communicate impact to our customers.
- Enable customers to activate data themselves using SQL, BI tools, or APIs to answer the questions they have at speed.
What You Need:
- 4+ years of experience in a Data Engineering role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- 4+ years of experience working with relational databases like Snowflake, Redshift, or Postgres.
- Intermediate to advanced level SQL programming skills.
- Data analytics and visualization (using tools like Power BI)
- The ability to engage with both the business and technical teams of a client - to document and explain technical problems or concepts in a clear and concise way.
- Ability to work in a fast-paced and agile environment.
- Easily adapt and learn new things whether it’s a new library, framework, process, or visual design concept.
What we offer:
- Industry certifications: We want you to be a subject matter expert in what you do. So, whether it’s our product or our domain, we’ll help you dive in and get certified.
- Quarterly rewards and recognition programs: We foster learning and encourage people to take risks. We recognize and reward your hard work.
- Health benefits: We cover health insurance for you and your loved ones.
- Sabbatical policy: We encourage people to take time off and rejuvenate, learn new skills, and pursue their interests so they can generate new ideas with Innovaccer.
- Pet-friendly office and open floor plan: No boring cubicles.
• Key Skillset:
• Advanced SQL skills and good communication skills are mandatory.
• Develop and execute detailed data warehouse-related functional, integration, and regression test cases and documentation.
• Prioritize testing tasks based on goals and risks of projects and ensure testing milestones, activities and tasks are completed as scheduled.
• Design and develop data warehouse test cases, scenarios, and scripts to ensure quality data warehouse/BI applications (a reconciliation-test sketch follows this list).
• Report the status of test planning, defects and execution activities, including regular status updates to the project team.
• Hands-on experience with any SQL tool.
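For illustration, here is a minimal sketch of the kind of automated reconciliation checks a data warehouse test suite might run, written with SQLAlchemy and plain assertions. Connection strings and table names are hypothetical stand-ins.

```python
# Hedged sketch of data warehouse reconciliation tests; connection
# strings and table names are hypothetical.
import sqlalchemy as sa

source = sa.create_engine("postgresql://user:pass@source-db/app")
target = sa.create_engine("postgresql://user:pass@warehouse/dw")

def scalar(engine, query: str):
    # Run a single-value query and return the result.
    with engine.connect() as conn:
        return conn.execute(sa.text(query)).scalar()

def test_row_counts_match():
    # Regression check: every extracted row should be accounted for.
    src = scalar(source, "SELECT COUNT(*) FROM orders")
    tgt = scalar(target, "SELECT COUNT(*) FROM stg_orders")
    assert src == tgt, f"row count mismatch: source={src}, target={tgt}"

def test_no_null_business_keys():
    # Data quality check on the warehouse side.
    nulls = scalar(target, "SELECT COUNT(*) FROM stg_orders WHERE order_id IS NULL")
    assert nulls == 0, f"{nulls} rows loaded with NULL order_id"
```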
Data Integration, Preparation & Management Solutions
Technical Project Manager
Experience: 10 to 15 years
Responsibilities
- Participate in meetings with US client teams to understand business requirements and project milestones. Provide technical suggestions and strategy in project planning.
- Prepare Project Plan and track the project progress of deliverables and Milestones. Report the status to higher management regularly.
- Monitor Budget and Timeline at regular Intervals and plan proactive steps to control them.
- Identify opportunities for improving business prospects with the client.
- Help team in resolving technical and functional aspects across project life cycle.
- Planning and execution of training, mentoring, and coaching team members.
- Hold regular project reviews with internal & client stakeholders.
- Prepare organized and informative presentations whenever required.
- Resolve and/or escalate issues as and when necessary.
Required Skill
- At least 2 years’ experience managing a large technology engineering team or an L2/L3 technology support team, with overall experience of at least 10 years in the IT industry.
- Experience in BI Tools like MicroStrategy, OBIEE, Tableau or ETL Tools like Informatica, Talend, DataStage, and SSIS.
- Experience in data warehouse and BI reporting projects as a developer, lead, or architect.
- Experience in generating reports on SLAs, KPIs, metrics and reporting to senior leadership.
- Experience attracting and hiring excellent talent; ability to mentor and bring the best out of the team. Flexible with working hours based on the service requirement.
- Demonstrated organizational and leadership skills.
- Excellent communication (written and spoken) skills
- Experience or knowledge of tools such as JIRA, Confluence, ServiceNow, Splunk, and other monitoring tools. Experience with ETL and DWH concepts; L2/L3 support experience.
We’re hiring a talented Data Engineer and Big Data enthusiast to work on our platform to help ensure that our data quality is flawless. As a company, we have millions of new data points every day coming into our system. You will work with a passionate team of engineers to solve challenging problems and ensure that we can deliver the best data to our customers on time. You will use the latest cloud data warehouse technology to build robust and reliable data pipelines. Duties/Responsibilities include:
Requirements:
Exceptional candidates will have:
ABOUT US:
Pingahla was founded by a group of people passionate about making the world a better place by harnessing the power of data. We are a data management firm with offices in New York and India. Our mission is to help transform the way companies operate and think about their business. We make it easier to adopt and stay ahead of the curve in the ever-changing digital landscape. One of our core beliefs is excellence in everything we do!
JOB DESCRIPTION:
Pingahla is recruiting an ETL & BI Test Manager who can build and lead a team and establish infrastructure, processes, and best practices for our Quality Assurance vertical. Candidates are expected to have at least 5 years of experience with ETL testing and working in data management project testing. Being a growing company, we are able to provide very good career opportunities and very attractive remuneration.
JOB ROLE & RESPONSIBILITIES:
• Plan and manage the testing activities;
• Defect management and weekly and monthly test report generation;
• Work as a Test Manager to design Test Strategy and approach for DW&BI - (ETL & BI) solution;
• Provide leadership and directions to the team on quality standards and testing best practices;
• Ensure that project deliverables are produced, including, but not limited to, quality assurance plans, test plans, testing priorities, status reports, user documentation, and online help. Manage and motivate teams to accomplish significant deliverables within tight deadlines.
• Test data management; review and approve all test cases prior to execution;
• Coordinate and review offshore work efforts for projects and maintenance activities.
REQUIRED SKILLSET:
• Experience in Quality Assurance Management, Program Management, and DW (ETL & BI) Management
• Minimum 5 years in ETL testing, with at least 2 years in a Team Lead role
• Technical abilities complemented by sound communication skills, user interaction abilities, requirement gathering and analysis, and skills in data migration and conversion strategies.
• Proficient in test definition, capable of developing test plans and test cases from technical specifications
• Able to single-handedly look after complete delivery from the testing side.
• Experience working with remote teams, across multiple time zones.
• Must have a strong knowledge of QA processes and methodologies
• Strong UNIX and PERL scripting skills
• Expertise in ETL testing, with hands-on experience working with an ETL tool like Informatica or DataStage; PL/SQL is a plus.
• Excellent problem solving, analytical and technical troubleshooting skills.
• Familiar with Data Management projects.
• Eager to learn, adopt and apply rapidly changing new technologies and methodologies.
• Efficient and effective at approaching and escalating quality issues when appropriate.
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet business requirements
- Identify, design, and implement internal process improvements, including redesigning infrastructure for greater scalability, optimizing data delivery, and automating manual processes
- Work with Data, Analytics & Tech team to extract, arrange and analyze data
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies
- Build analytical tools that utilize the data pipeline, providing actionable insight into key business performance metrics, including operational efficiency and customer acquisition
- Work closely with all business units and engineering teams to develop a strategy for long-term data platform architecture.
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to support their data infrastructure needs while assisting with data-related technical issues.
- SQL
- Ruby or Python (Ruby preferred)
- Apache Hadoop-based analytics
- Data warehousing
- Data architecture
- Schema design
- ML
- Prior experience of 2 to 5 years as a Data Engineer.
- Ability to manage and communicate data warehouse plans to internal teams.
- Experience designing, building, and maintaining data processing systems.
- Ability to perform root cause analysis on external and internal processes and data to identify opportunities for improvement and answer questions.
- Excellent analytic skills associated with working on unstructured datasets.
- Ability to build processes that support data transformation, workload management, data structures, dependency, and metadata.
Symansys Technologies India Pvt Ltd
Experience: 6-9 yrs
Location: Noida
Job Description:
- Must have 3-4 years of experience in SSIS and MySQL
- Good Experience in Tableau
- Experience in SQL Server.
- 1+ year of Experience in Tableau
- Knowledge of ETL tools
- Knowledge of data warehousing
Looking to hire Data Engineers for a client in Bangalore.
We are looking for a savvy Data Engineer to join our growing team of analytics experts.
The hire will be responsible for:
- Expanding and optimizing our data and data pipeline architecture
- Optimizing data flow and collection for cross functional teams.
- Will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.
- Must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.
- Experience with Azure: ADLS, Databricks, Stream Analytics, SQL DW, Cosmos DB, Analysis Services, Azure Functions, Serverless Architecture, ARM Templates
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with object-oriented/object function scripting languages: Python, SQL, Scala, Spark-SQL, etc.
Nice to have experience with:
- Big data tools: Hadoop, Spark and Kafka
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow
- Stream-processing systems: Storm
Database : SQL DB
Programming languages : PL/SQL, Spark SQL
Looking for candidates with Data Warehousing experience, strong domain knowledge & experience working as a Technical lead.
The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.
We at Datametica Solutions Private Limited are looking for SQL Engineers who have a passion for the cloud, with knowledge of different on-premise and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks, and the like.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.
Job Description
Experience : 4-10 years
Location : Pune
Mandatory Skills -
- Strong in ETL/SQL development
- Strong Data Warehousing skills
- Hands-on experience working with Unix/Linux
- Development experience in Enterprise Data warehouse projects
- Good to have experience working with Python, shell scripting
Opportunities -
- Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps Tools, Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume and Kafka
- Would get a chance to be part of enterprise-grade implementations of Cloud and Big Data systems
- Will play an active role in setting up a modern data platform based on Cloud and Big Data
- Would be part of teams with rich experience in various aspects of distributed systems and computing
About Us!
A global leader in data warehouse migration and modernization to the cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the cloud, leveraging automation.
We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum systems, along with ETL tools like Informatica, DataStage, Ab Initio, and others, to cloud-based data warehousing, with other capabilities in data engineering, advanced analytics solutions, data management, data lakes, and cloud optimization.
Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.
We have our own products!
Eagle – Data warehouse Assessment & Migration Planning Product
Raven – Automated Workload Conversion Product
Pelican - Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.
Why join us!
Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over the years are the key factors in achieving our success.
Benefits we Provide!
Work with highly technical, passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy
Check out more about us on our website below!
www.datametica.com
We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouses and Data Lakes for an organization. This role will closely collaborate with the Data Science team and assist the team in building and deploying machine learning and deep learning models on big data analytics platforms.
Roles and Responsibilities:
- Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
- Develop programs in Scala and Python as part of data cleaning and processing.
- Assemble large, complex data sets that meet functional/non-functional business requirements, fostering data-driven decision making across the organization.
- Design and develop distributed, high-volume, high-velocity, multi-threaded event processing systems.
- Implement processes and systems to validate data, monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Provide high operational excellence guaranteeing high availability and platform stability.
- Closely collaborate with the Data Science team and assist the team in building and deploying machine learning and deep learning models on big data analytics platforms.
Skills:
- Experience with Big Data pipeline, Big Data analytics, Data warehousing.
- Experience with SQL/No-SQL, schema design and dimensional data modeling.
- Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with a Big Data technology stack such as HBase, Hadoop, Hive, MapReduce.
- Experience in designing systems that process structured as well as unstructured data at large scale.
- Experience in AWS/Spark/Java/Scala/Python development.
- Should have strong skills in PySpark (Python & Spark). Ability to create, manage, and manipulate Spark DataFrames (a sketch follows this list). Expertise in Spark query tuning and performance optimization.
- Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
- Prior exposure to streaming data sources such as Kafka.
- Should have knowledge on Shell Scripting and Python scripting.
- High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
- Experience with NoSQL databases such as Cassandra / MongoDB.
- Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
- Experience building and deploying applications on on-premise and cloud-based infrastructure.
- A good understanding of the machine learning landscape and concepts.
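As a hedged illustration of the PySpark DataFrame skills listed above, here is a minimal batch-aggregation sketch. The input path, event schema, and output location are hypothetical.

```python
# Hedged PySpark sketch; the input path, fields, and output are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_rollup").getOrCreate()

# Read raw event data into a DataFrame.
events = spark.read.json("s3a://example-bucket/events/")  # path is illustrative

# Typical DataFrame manipulation: filter, derive a column, aggregate.
daily = (
    events
    .filter(F.col("event_type") == "purchase")
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "country")
    .agg(F.count("*").alias("purchases"),
         F.sum("amount").alias("revenue"))
)

# Write partitioned Parquet for downstream warehouse loads.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/curated/daily_purchases/"
)
```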
Qualifications and Experience:
Engineering and postgraduate candidates, preferably in Computer Science, from premier institutions, with proven work experience as a Big Data Engineer or in a similar role for 3-5 years.
Certifications:
Good to have at least one of the Certifications listed here:
AZ 900 - Azure Fundamentals
DP 200, DP 201, DP 203, AZ 204 - Data Engineering
AZ 400 - DevOps Certification
- 4-8 years of experience in BI/DW
- 3+ years of experience with MicroStrategy schema, design, and development
- Experience in MicroStrategy Cloud for Azure and connecting with Azure Synapse as a data source
- Extensive experience in developing reports, dashboards, and cubes in MicroStrategy
- Advanced SQL coding skills
- Hands on development in BI reporting and performance tuning
- Should be able to prepare unit test cases and execute unit testing
• Responsible for developing and maintaining applications with PySpark
Must Have Skills:
Next-gen BI platform for data-driven performance marketers
This leads to a very interesting and challenging use case in the emerging field of large-scale distributed HTAP, which is still not mature enough to provide an out-of-the-box solution that works for our scale and SLAs. So we are building a solution that can handle the complexity of our use case and scale to several trillions of rows. As a Database Engineer, you will evolve, architect, build, and scale the core data warehouse that sits at the heart of Clarisights, enabling large-scale, distributed, interactive analytics on near-real-time data.
What you'll do
- Understanding and gaining expertise in existing data warehouse.
- Use the above knowledge to identify gaps in the current system and formulate strategies around what can be done to fill them
- Define and track KPIs around the data warehouse.
- Find solutions to evolve and scale the data warehouse. This will involve a lot of technical research, benchmarking and testing of existing and candidate replacement systems.
- Build from scratch all or parts of the data warehouse to improve the KPIs.
- Ensure the SLAs and SLOs of the data warehouse, which will require assuming ownership and being on-call for the same.
- Gain a deep understanding of Linux and the concepts that drive performance characteristics, like IO scheduling, paging, process scheduling, CPU instruction pipelining, etc.
- Adopt/build tooling and tune the systems to extract maximum performance out of the underlying hardware.
- Build wrappers/microservices for improving visibility, control, adoption and ease of use for the data warehouse.
- Build tooling and automation for monitoring, debugging and deployment of the warehouse.
- Contribute to open source database technologies that are used at or are potential candidates for use.
What you bring
We are looking for engineers with a strong passion for solving challenging engineering problems and a burning desire to learn and grow in a fast-growing startup. This is not an easy gig; it will require strong technical chops and an insatiable curiosity to make things better. We need passionate and mature engineers who can do wonders with some mentoring and don't need to be managed.
- Distributed systems: You have a good understanding of general patterns of scaling and fault-tolerance in large scale distributed systems.
- Databases: You have a good understanding of database concepts like query optimization, indexing, transactions, sharding, replication etc.
- Data pipelines: You have a working knowledge of distributed data processing systems.
- Engineer at heart: You thrive on writing great code and have a strong appreciation for modular, testable, and maintainable code, and you make sure to document it. You have the ability to take new initiatives and question the status quo.
- Passion & drive to learn and excel: You believe in our vision. You drive the product for the better, always looking to improve things, and soon become the go-to person for something you mastered along the way. You love dabbling in your own side projects and learning new skills that are not necessarily part of your normal day job.
- Inquisitiveness: You are curious to know how different modules on our platform work. You are not afraid to venture into unknown territories of code. You ask questions.
- Ownership: You are your own manager. You have the ability to implement engineering tasks on your own without a need for micro-management and take responsibility for any task that has been assigned to you.
- Teamwork: You should be helpful and work well with teams. You’re probably someone who enjoys sharing knowledge with teammates and asking for help when you need it.
- Open Source Contribution: Bonus.
We are looking for a Data Engineer who will be responsible for collecting, storing, processing, and analyzing huge sets of data coming from different sources.
Responsibilities
- Work with Big Data tools and frameworks to provide requested capabilities
- Identify development needs in order to improve and streamline operations
- Develop and manage BI solutions
- Implement ETL processes and data warehousing
- Monitor performance and manage infrastructure
Skills
- Proficient understanding of distributed computing principles
- Proficiency with Hadoop and Spark
- Experience building stream-processing systems using solutions such as Kafka and Spark Streaming (see the sketch below)
- Good knowledge of data querying tools: SQL and Hive
- Knowledge of various ETL techniques and frameworks
- Experience with Python/Java/Scala (at least one)
- Experience with cloud services such as AWS or GCP
- Experience with NoSQL databases such as DynamoDB and MongoDB will be an advantage
- Excellent written and verbal communication skills
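To make the stream-processing requirement concrete, here is a hedged sketch of a Kafka-to-Spark Structured Streaming job in PySpark. The broker address, topic, and event schema are assumptions for illustration (running it also requires the spark-sql-kafka connector package on the classpath).

```python
# Hedged sketch of a Kafka -> Spark Structured Streaming job; broker,
# topic, and schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("stream_ingest").getOrCreate()

schema = (StructType()
          .add("user_id", StringType())
          .add("amount", DoubleType()))

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "transactions")
       .load())

# Kafka values arrive as bytes; parse the JSON payload.
parsed = raw.select(
    F.from_json(F.col("value").cast("string"), schema).alias("e")
).select("e.*")

# Continuously aggregate and print to the console for illustration.
query = (parsed.groupBy("user_id")
         .agg(F.sum("amount").alias("total"))
         .writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```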
SQL, Python, NumPy, Pandas; knowledge of Hive and data warehousing concepts will be a plus.
JD
- Strong analytical skills with the ability to collect, organise, analyse and interpret trends or patterns in complex data sets and provide reports & visualisations.
- Work with management to prioritise business KPIs and information needs; locate and define new process improvement opportunities.
- Technical expertise with data models, database design and development, data mining and segmentation techniques
- Proven success in a collaborative, team-oriented environment
- Working experience with geospatial data will be a plus.
As a Data Warehouse Engineer in our team, you should have a proven ability to deliver high-quality work on time and with minimal supervision.
Develop or modify procedures to solve complex database design problems, including performance, scalability, security, and integration issues, for various clients (on-site and off-site).
Design, develop, test, and support the data warehouse solution.
Adapt best practices and industry standards, ensuring top-quality deliverables and playing an integral role in cross-functional system integration.
Design and implement formal data warehouse testing strategies and plans including unit testing, functional testing, integration testing, performance testing, and validation testing.
Evaluate all existing hardware and software according to required standards, with the ability to configure hardware clusters as per the scale of data.
Data integration using enterprise development toolsets (e.g., ETL, MDM, CDC, Data Masking, Quality).
Maintain and develop all logical and physical data models for enterprise data warehouse (EDW).
Contribute to the long-term vision of the enterprise data warehouse (EDW) by delivering Agile solutions.
Interact with end users/clients and translate business language into technical requirements.
Act independently to expose and resolve problems.
Participate in data warehouse health monitoring and performance optimizations as well as quality documentation.
Job Requirements :
2+ years of experience working in software development and data warehouse development for enterprise analytics.
2+ years of working with Python, with major experience in Redshift as a must and exposure to other warehousing tools.
Deep expertise in data warehousing, dimensional modeling and the ability to bring best practices with regard to data management, ETL, API integrations, and data governance.
Experience working with data retrieval and manipulation tools for various data sources like Relational (MySQL, PostgreSQL, Oracle), Cloud-based storage.
Experience with analytics and reporting tools (Tableau, Power BI, SSRS, SSAS). Experience with the AWS cloud stack (S3, Glue, Redshift, Lake Formation).
Experience with various DevOps practices, helping the client deploy and scale systems as per requirements.
Strong verbal and written communication skills with other developers and business clients.
Knowledge of Logistics and/or Transportation Domain is a plus.
Ability to handle/ingest very large data sets (both real-time and batched data) in an efficient manner (a loading sketch follows below).
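As a hedged illustration of efficient bulk ingestion into Redshift (mentioned above), here is a minimal sketch using psycopg2 to issue a COPY from S3, which loads files in parallel across slices rather than row by row. The cluster endpoint, table, S3 path, and IAM role are hypothetical.

```python
# Hedged sketch of bulk loading into Redshift via COPY from S3; the
# endpoint, table, S3 path, and IAM role are hypothetical stand-ins.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dw", user="loader", password="...",
)

copy_sql = """
    COPY shipments_staging
    FROM 's3://example-bucket/shipments/2024-06-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

# COPY loads files in parallel, which is far faster than row-by-row
# INSERTs for very large batches.
with conn:
    with conn.cursor() as cur:
        cur.execute(copy_sql)
conn.close()
```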