11+ Data steward Jobs in Delhi, NCR and Gurgaon | Data steward Job openings in Delhi, NCR and Gurgaon

Apply to 11+ Data steward Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest Data steward Job opportunities across top companies like Google, Amazon & Adobe.

Infogain
Agency job
via Technogen India Pvt Ltd by Rahul Batta
NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore), Mumbai, Pune
7 - 8 yrs
₹15L - ₹16L / yr
Data steward
MDM
Tamr
Reltio
Data engineering
  1. Data Steward:

The Data Steward will collaborate and work closely with the group's software engineering and business divisions. The Data Steward has overall accountability for the group's/division's data and reporting posture, responsibly managing data assets, data lineage, and data access in support of sound data analysis. This role focuses on data strategy, execution, and support for projects, programs, application enhancements, and production data fixes. The Data Steward makes well-considered decisions on complex or ambiguous data issues, establishes the data stewardship and information management strategy and direction for the group, and communicates effectively with individuals at various levels of the technical and business communities. This individual will join the corporate Data Quality and Data Management/entity resolution team supporting various systems across the board.

 

Primary Responsibilities:

 

  • Responsible for data quality and data accuracy across all group/division delivery initiatives.
  • Responsible for data analysis, data profiling, data modeling, and data mapping capabilities.
  • Responsible for reviewing and governing data queries and DML.
  • Accountable for the assessment, delivery, quality, accuracy, and tracking of any production data fixes.
  • Accountable for the performance, quality, and alignment to requirements for all data query design and development.
  • Responsible for defining standards and best practices for data analysis, modeling, and queries.
  • Responsible for understanding end-to-end data flows and identifying data dependencies in support of delivery, release, and change management.
  • Responsible for the development and maintenance of an enterprise data dictionary aligned to data assets and the business glossary.
  • Responsible for defining and maintaining the group's data landscape, including overlays with the technology landscape, end-to-end data flows/transformations, and data lineage.
  • Responsible for rationalizing the group's reporting posture through the definition and maintenance of a reporting strategy and roadmap.
  • Partners with the data governance team to ensure data solutions adhere to the organization’s data principles and guidelines.
  • Owns group's data assets including reports, data warehouse, etc.
  • Understand customer business use cases and be able to translate them to technical specifications and vision on how to implement a solution.
  • Accountable for defining the performance tuning needs for all group data assets and managing the implementation of those requirements within the context of group initiatives as well as steady-state production.
  • Partners with others in test data management and masking strategies and the creation of a reusable test data repository.
  • Responsible for solving data-related issues and communicating resolutions with other solution domains.
  • Actively and consistently support all efforts to simplify and enhance the Clinical Trial Prediction use cases.
  • Apply knowledge in analytic and statistical algorithms to help customers explore methods to improve their business.
  • Contribute toward analytical research projects through all stages including concept formulation, determination of appropriate statistical methodology, data manipulation, research evaluation, and final research report.
  • Visualize and report data findings creatively in a variety of visual formats that appropriately provide insight to the stakeholders.
  • Achieve defined project goals within customer deadlines; proactively communicate status and escalate issues as needed.
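As an illustration of the data-profiling and data-quality duties listed above, a minimal completeness and distinct-count check might look like the following sketch. The column data, field names, and null markers are hypothetical, not taken from any specific system:

```python
# Minimal data-profiling sketch: completeness, distinct count, and
# top values for a single column. Null markers are an assumption.
from collections import Counter

def profile_column(values):
    """Summarize completeness and distinct values for one column."""
    total = len(values)
    non_null = [v for v in values if v not in (None, "", "NA")]
    return {
        "total": total,
        "null_count": total - len(non_null),
        "completeness": len(non_null) / total if total else 0.0,
        "distinct": len(set(non_null)),
        "top_values": Counter(non_null).most_common(3),
    }

ages = ["34", "29", "", "34", None, "41"]
report = profile_column(ages)
print(report["null_count"])  # 2
print(report["distinct"])    # 3
```

A real steward would run checks like this per column across the enterprise data dictionary and flag columns whose completeness falls below an agreed threshold.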

 

Additional Responsibilities:

 

  • Strong understanding of the Software Development Life Cycle (SDLC) with Agile Methodologies
  • Knowledge and understanding of industry-standard, best-practice requirements-gathering methodologies.
  • Knowledge and understanding of Information Technology systems and software development.
  • Experience with data modeling and test data management tools.
  • Experience in data integration projects.
  • Good problem-solving and decision-making skills.
  • Good communication skills within the team, site, and with the customer

 

Knowledge, Skills and Abilities

 

  • Technical expertise in data architecture principles and design aspects of various DBMS and reporting concepts.
  • Solid understanding of key DBMS platforms like SQL Server, Azure SQL
  • Results-oriented, diligent, and works with a sense of urgency. Assertive, responsible for his/her own work (self-directed), with a strong affinity for defining work in deliverables, and willing to commit to deadlines.
  • Experience with MDM tools such as MS DQ, SAS DM Studio, Tamr, Profisee, Reltio, etc.
  • Experience in Report and Dashboard development
  • Statistical and Machine Learning models
  • Python (sklearn, numpy, pandas, gensim)
  • Nice to have:
  • 1 year of ETL experience
  • Natural Language Processing
  • Neural networks and deep learning
  • Experience with the Keras, TensorFlow, spaCy, NLTK, and LightGBM Python libraries

 

Interaction :  Frequently interacts with subordinate supervisors.

Education : Bachelor's degree, preferably in Computer Science, B.E., or another quantitative field related to the area of assignment. A professional certification related to the area of assignment may be required.

Experience :  7 years of pharmaceutical/biotech/life sciences experience; 5 years of clinical trials experience and knowledge; excellent documentation, communication, and presentation skills, including PowerPoint.

 

Semi Stealth Mode startup in Delhi
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 6 yrs
₹35L - ₹40L / yr
Data Analytics
Python
Data Visualization
SQL

A Delhi NCR based Applied AI & Consumer Tech company tackling one of the largest unsolved consumer internet problems of our time. We are a motley crew of smart, passionate and nice people who believe you can build a high performing company with a culture of respect aka a sports team with a heart aka a caring meritocracy.

Our illustrious angels include unicorn founders, serial entrepreneurs with exits, tech & consumer industry stalwarts and investment professionals/bankers.

We are hiring for our founding team (in Delhi NCR only, no remote) that will take the product from prototype to a landing! Opportunity for disproportionate non-linear impact, learning and wealth creation in a classic 0-1 with a Silicon Valley caliber founding team.


Key Responsibilities:

1.   Data Strategy and Vision:

·       Develop and drive the company's data analytics strategy, aligning it with overall business goals.

·       Define the vision for data analytics, outlining clear objectives and key results (OKRs) to measure success.

2.   Data Analysis and Interpretation:

·       Oversee the analysis of complex datasets to extract valuable insights, trends, and patterns.

·       Utilize statistical methods and data visualization techniques to present findings in a clear and compelling manner to both technical and non-technical stakeholders.

3.   Data Infrastructure and Tools:

·       Evaluate, select, and implement advanced analytics tools and platforms to enhance data processing and analysis capabilities.

·       Collaborate with IT teams to ensure a robust and scalable data infrastructure, including data storage, retrieval, and security protocols.

4.   Collaboration and Stakeholder Management:

·       Collaborate cross-functionally with teams such as marketing, sales, and product development to identify opportunities for data-driven optimizations.

·       Act as a liaison between technical and non-technical teams, ensuring effective communication of data insights and recommendations.

5.   Performance Measurement:

·       Establish key performance indicators (KPIs) and metrics to measure the impact of data analytics initiatives on business outcomes.

·       Continuously assess and improve the accuracy and relevance of analytical models and methodologies.


Qualifications:

  • Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or related field.
  • Proven experience (5+ years) in data analytics, with a focus on leading analytics teams and driving strategic initiatives.
  • Proficiency in data analysis tools such as Python, R, SQL, and advanced knowledge of data visualization tools.
  • Strong understanding of statistical methods, machine learning algorithms, and predictive modelling techniques.
  • Excellent communication skills, both written and verbal, to effectively convey complex findings to diverse audiences.
Top IT MNC
Chennai, Bengaluru (Bangalore), Kochi (Cochin), Coimbatore, Hyderabad, Pune, Kolkata, Noida, Gurugram, Mumbai
5 - 13 yrs
₹8L - ₹20L / yr
Snowflake schema
Python
Snowflake
Greetings,

We are looking for a Snowflake developer for one of our premium clients, for their PAN India locations.
Epik Solutions
Posted by Sakshi Sarraf
Bengaluru (Bangalore), Noida
4 - 13 yrs
₹7L - ₹18L / yr
Python
SQL
Databricks
Scala
Spark

Job Description:


As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:


Designing and developing data pipelines: You will design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala. This includes data ingestion, data transformation, and data loading processes.


Data modeling and database design: You will design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.


Data integration and orchestration: You will leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various data sources and targets. This includes scheduling and monitoring data pipelines.


Data quality and governance: You will implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.


Performance optimization: You will optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.


Monitoring and troubleshooting: You will monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation. You will work closely with cross-functional teams to resolve data-related problems.


Documentation and collaboration: You will document data pipelines, data flows, and data transformation processes. You will collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide data engineering support.
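The responsibilities above follow a familiar ingest → transform → load shape. In practice this would be built with Azure Databricks, PySpark, and ADF; the following is only a dependency-free sketch in plain Python, with all names (`ingest`, `transform`, `load`, the sample extract) illustrative rather than a real API:

```python
# Hedged sketch of an ingest -> transform -> load pipeline with one
# data-quality rule, using only the standard library as a stand-in
# for PySpark/ADF. Column names and the quality rule are assumptions.
import csv
import io

raw = "id,amount\n1,10.5\n2,\n3,7.25\n"  # pretend source extract

def ingest(text):
    """Read the raw extract into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Data-quality rule: drop rows with a missing amount, cast the rest."""
    return [{"id": int(r["id"]), "amount": float(r["amount"])}
            for r in rows if r["amount"]]

def load(rows, sink):
    """Append validated rows to the target and report the count."""
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(ingest(raw)), warehouse)
print(loaded)  # 2 valid rows loaded; 1 rejected by the quality rule
```

In a real Azure pipeline the same three stages map onto an ADF-triggered Databricks job: ingestion from a source dataset, PySpark transformations with validation, and a write to the lake or warehouse.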


Skills and Qualifications:


Strong experience with Azure Databricks, Python, SQL, ADF, PySpark, and Scala.

Proficiency in designing and developing data pipelines and ETL processes.

Solid understanding of data modeling concepts and database design principles.

Familiarity with data integration and orchestration using Azure Data Factory.

Knowledge of data quality management and data governance practices.

Experience with performance tuning and optimization of data pipelines.

Strong problem-solving and troubleshooting skills related to data engineering.

Excellent collaboration and communication skills to work effectively in cross-functional teams.

Understanding of cloud computing principles and experience with Azure services.


Information Solution Provider Company
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 7 yrs
₹10L - ₹15L / yr
SQL
Hadoop
Spark
Machine Learning (ML)
Data Science

Job Description:

The data science team is responsible for solving business problems with complex data. Data complexity can be characterized in terms of volume, dimensionality, and multiple touchpoints/sources. We understand the data, ask fundamental, first-principles questions, and apply our analytical and machine learning skills to solve the problem in the best way possible.

 

Our ideal candidate

The role would be a client facing one, hence good communication skills are a must. 

The candidate should have the ability to communicate complex models and analysis in a clear and precise manner. 

 

The candidate would be responsible for:

  • Comprehending business problems properly - what to predict, how to build DV, what value addition he/she is bringing to the client, etc.
  • Understanding and analyzing large, complex, multi-dimensional datasets and building features relevant to the business
  • Understanding the math behind algorithms and choosing one over another
  • Understanding approaches like stacking and ensembling, and applying them correctly to increase accuracy
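The ensembling idea mentioned above can be illustrated with a toy weighted-averaging ensemble. The three "models" here are hypothetical stubs, not real trained models; the point is only the combination step:

```python
# Toy averaging ensemble: combine several model predictions into one.
# The three "models" are illustrative stubs standing in for trained models.
def model_a(x):
    return 0.9 * x

def model_b(x):
    return 1.1 * x

def model_c(x):
    return x + 0.3

def ensemble(x, models, weights=None):
    """Weighted average of model predictions; equal weights by default."""
    preds = [m(x) for m in models]
    weights = weights or [1 / len(models)] * len(models)
    return sum(w * p for w, p in zip(weights, preds))

pred = ensemble(2.0, [model_a, model_b, model_c])
print(pred)  # 2.1 -- the mean of 1.8, 2.2, and 2.3
```

Stacking goes one step further: instead of fixed weights, a second-level model is trained on the base models' out-of-fold predictions to learn how to combine them.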

Desired technical requirements

  • Proficiency with Python and the ability to write production-ready code.
  • Experience in PySpark, machine learning, and deep learning
  • Big data experience, e.g. familiarity with Spark, Hadoop, is highly preferred
  • Familiarity with SQL or other databases.
Quess Corp Limited

Posted by Anjali Singh
Noida, Delhi, Gurugram, Ghaziabad, Faridabad, Bengaluru (Bangalore), Chennai
5 - 8 yrs
₹1L - ₹15L / yr
Google Cloud Platform (GCP)
Python
Big Data
Data processing
Data Visualization

A GCP Data Analyst profile must have the below skill sets:

 

Octro Inc

Posted by Reshma Suleman
Noida, NCR (Delhi | Gurgaon | Noida)
1 - 7 yrs
₹10L - ₹20L / yr
Data Science
R Programming
Python

Octro Inc. is looking for a Data Scientist who will support the product, leadership and marketing teams with insights gained from analyzing multiple sources of data. The ideal candidate is adept at using large data sets to find opportunities for product and process optimization and using models to test the effectiveness of different courses of action. 

 

They must have strong experience using a variety of data mining/data analysis methods, using a variety of data tools, building and implementing models, using/creating algorithms and creating/running simulations. They must have a proven ability to drive business results with their data-based insights. 

 

They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and working with stakeholders to improve business outcomes.

Responsibilities :

- Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.

- Mine and analyze data from multiple databases to drive optimization and improvement of product development, marketing techniques and business strategies.

- Assess the effectiveness and accuracy of new data sources and data gathering techniques.

- Develop custom data models and algorithms to apply to data sets.

- Use predictive modelling to increase and optimize user experiences, revenue generation, ad targeting and other business outcomes.

- Develop various A/B testing frameworks and test model qualities.

- Coordinate with different functional teams to implement models and monitor outcomes.

- Develop processes and tools to monitor and analyze model performance and data accuracy.
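The A/B testing responsibility above often comes down to comparing conversion rates between two variants. As a minimal sketch, a two-proportion z-test on conversion counts (the counts below are made up for illustration):

```python
# Two-proportion z-test: is variant B's conversion rate significantly
# different from variant A's? Counts are illustrative, not real data.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference of two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(200, 1000, 260, 1000)
# |z| > 1.96 rejects equal conversion rates at the 5% significance level
print(round(z, 2))
```

An A/B testing framework wraps this kind of test with experiment assignment, sample-size planning, and guardrails against peeking at results early.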

Qualifications :

- Strong problem solving skills with an emphasis on product development and improvement.

- Advanced knowledge of SQL and its use in data gathering/cleaning.

- Experience using statistical computer languages (R, Python, etc.) to manipulate data and draw insights from large data sets.

- Experience working with and creating data architectures.

- Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.

- Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.

- Excellent written and verbal communication skills for coordinating across teams.

Cemtics

Posted by Tapan Sahani
Remote, NCR (Delhi | Gurgaon | Noida)
4 - 6 yrs
₹5L - ₹12L / yr
Big Data
Spark
Hadoop
SQL
Python

JD:

Required Skills:

  • Intermediate- to expert-level hands-on programming in one of the following languages: Java, Python, PySpark, or Scala.
  • Strong practical knowledge of SQL.
  • Hands-on experience with Spark/SparkSQL
  • Data Structure and Algorithms
  • Hands-on experience as an individual contributor in the design, development, testing, and deployment of applications based on Big Data technologies
  • Experience in Big Data application tools, such as Hadoop, MapReduce, Spark, etc
  • Experience on NoSQL Databases like HBase, etc
  • Experience with Linux OS environment (Shell script, AWK, SED)
  • Intermediate RDBMS skills; able to write SQL queries with complex relations on top of a large RDBMS (100+ tables)
PayU

Posted by Deeksha Srivastava
Gurgaon, NCR (Delhi | Gurgaon | Noida)
1 - 3 yrs
₹7L - ₹15L / yr
Python
R Programming
Data Analytics
R

What you will be doing:

As part of the Global Credit Risk and Data Analytics team, this person will be responsible for carrying out the following analytical initiatives:

  • Dive into the data and identify patterns
  • Development of end-to-end Credit models and credit policy for our existing credit products
  • Leverage alternate data to develop best-in-class underwriting models
  • Working on Big Data to develop risk analytical solutions
  • Development of Fraud models and fraud rule engine
  • Collaborate with various stakeholders (e.g. tech, product) to understand and design best solutions which can be implemented
  • Working on cutting-edge techniques e.g. machine learning and deep learning models

Examples of past projects:

  • Lazypay credit risk model using the CatBoost modelling technique; end-to-end pipeline for feature engineering and model deployment in production using Python
  • Fraud model development, deployment and rules for EMEA region

 

Basic Requirements:

  • 1-3 years of work experience as a Data Scientist (in the credit domain)
  • 2016 or 2017 batch from a premium college (e.g. B.Tech. from IITs/NITs, Economics from DSE/ISI, etc.)
  • Strong problem-solving skills and the ability to understand and execute complex analyses
  • Experience in at least one of R/Python/SAS, plus SQL
  • Experience in the credit industry (fintech/bank)
  • Familiarity with the best practices of Data Science

 

Add-on Skills : 

  • Experience in working with big data
  • Solid coding practices
  • Passion for building new tools/algorithms
  • Experience in developing Machine Learning models
The Smart Cube

Posted by Jasmine Batra
Remote, Noida, NCR (Delhi | Gurgaon | Noida)
2 - 5 yrs
₹2L - ₹5L / yr
R Programming
Advanced analytics
Python
Marketing analytics
Responsibilities :

  • Act as a lead analyst on various data analytics projects aiding strategic decision-making for Fortune 500 / FTSE 100 companies, blue-chip consulting firms, and global financial services companies
  • Understand the client objectives, and work with the PL to design the analytical solution/framework; translate the client objectives/analytical plan into clear deliverables with associated priorities and constraints
  • Collect, organize, prepare, and manage data for the analysis, and conduct quality checks
  • Use and implement basic and advanced statistical techniques like frequencies, cross-tabs, correlation, Regression, Decision Trees, Cluster Analysis, etc. to identify key actionable insights from the data
  • Develop complete sections of the final client report in PowerPoint; identify trends, evaluate insights in terms of logic and reasoning, and present them succinctly as an executive summary/taglines
  • Conduct sanity checks of the analysis output based on reasoning and common sense; perform a rigorous self-QC, as well as QC of the work assigned to analysts, to ensure an error-free output
  • Aid in decision-making related to client management, and take client calls relatively independently
  • Support the project leads in managing small teams of 2-3 analysts; independently set targets and communicate them to team members
  • Discuss queries or certain sections of the deliverable report over client calls or video conferences

Technical Skills :

  • Hands-on experience with one or more statistical tools such as SAS, R, and Python
  • Working knowledge of or experience with SQL Server (or other RDBMS tools) would be an advantage

Work Experience :

  • 2-4 years of relevant experience in Marketing Analytics / MR
  • Experience in managing, cleaning, and analyzing large datasets using statistical packages like SAS, R, Python, etc.
  • Experience in data management using SQL queries on tools like Access / SQL Server
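As a minimal illustration of the Regression technique named in the responsibilities above, ordinary least squares for a single predictor can be written in plain Python. The data points are illustrative only:

```python
# Ordinary least squares (one predictor) from first principles:
# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x).
def ols(xs, ys):
    """Fit y = slope * x + intercept by least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope, intercept = ols([1, 2, 3, 4], [2.1, 3.9, 6.1, 8.0])
print(round(slope, 2), round(intercept, 2))
```

In practice an analyst would reach for SAS, R, or Python's statsmodels/scikit-learn, which add standard errors, diagnostics, and multiple predictors on top of this same core computation.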
UpX Academy

Posted by Suchit Majumdar
Noida, Hyderabad, NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹4L - ₹12L / yr
Spark
Hadoop
MongoDB
Python
Scala
Looking for a technically sound and excellent trainer on big data technologies. Get an opportunity to become well-known in the industry and gain visibility. Host regular sessions on big data-related technologies and get paid to learn.