11+ Data steward Jobs in Bangalore (Bengaluru) | Data steward Job openings in Bangalore (Bengaluru)
- Data Steward :
The Data Steward will collaborate and work closely with the group's software engineering and business divisions. The Data Steward has overall accountability for the group's/division's data and reporting posture, responsibly managing data assets, data lineage, and data access in support of sound data analysis. This role focuses on data strategy, execution, and support for projects, programs, application enhancements, and production data fixes. The Data Steward makes well-thought-out decisions on complex or ambiguous data issues and establishes the data stewardship and information management strategy and direction for the group, communicating effectively with individuals at various levels of the technical and business communities. This individual will join the corporate Data Quality and Data Management/entity resolution team supporting various systems across the board.
Primary Responsibilities:
- Responsible for data quality and data accuracy across all group/division delivery initiatives.
- Responsible for data analysis, data profiling, data modeling, and data mapping capabilities.
- Responsible for reviewing and governing data queries and DML.
- Accountable for the assessment, delivery, quality, accuracy, and tracking of any production data fixes.
- Accountable for the performance, quality, and alignment to requirements for all data query design and development.
- Responsible for defining standards and best practices for data analysis, modeling, and queries.
- Responsible for understanding end-to-end data flows and identifying data dependencies in support of delivery, release, and change management.
- Responsible for the development and maintenance of an enterprise data dictionary that is aligned to data assets and the business glossary for the group.
- Responsible for the definition and maintenance of the group's data landscape, including overlays with the technology landscape, end-to-end data flows/transformations, and data lineage.
- Responsible for rationalizing the group's reporting posture through the definition and maintenance of a reporting strategy and roadmap.
- Partners with the data governance team to ensure data solutions adhere to the organization’s data principles and guidelines.
- Owns group's data assets including reports, data warehouse, etc.
- Understand customer business use cases and be able to translate them to technical specifications and vision on how to implement a solution.
- Accountable for defining the performance tuning needs for all group data assets and managing the implementation of those requirements within the context of group initiatives as well as steady-state production.
- Partners with others in test data management and masking strategies and the creation of a reusable test data repository.
- Responsible for solving data-related issues and communicating resolutions with other solution domains.
- Actively and consistently support all efforts to simplify and enhance the Clinical Trial Prediction use cases.
- Apply knowledge in analytic and statistical algorithms to help customers explore methods to improve their business.
- Contribute toward analytical research projects through all stages including concept formulation, determination of appropriate statistical methodology, data manipulation, research evaluation, and final research report.
- Visualize and report data findings creatively in a variety of visual formats that appropriately provide insight to the stakeholders.
- Achieve defined project goals within customer deadlines; proactively communicate status and escalate issues as needed.
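The data profiling responsibility above can be illustrated with a minimal sketch. The `patients` table, its columns, and the sample rows are hypothetical; a real steward would run equivalent queries against the group's own DBMS.

```python
import sqlite3

# Hypothetical sample table to profile (stand-in for a real data asset).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, site TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO patients VALUES (?, ?, ?)",
    [(1, "BLR", 34), (2, "BLR", None), (3, "DEL", 51), (4, None, 42)],
)

def profile_column(conn, table, column):
    """Return row count, null count, and distinct count for one column."""
    total, nulls, distinct = conn.execute(
        f"SELECT COUNT(*), "
        f"SUM(CASE WHEN {column} IS NULL THEN 1 ELSE 0 END), "
        f"COUNT(DISTINCT {column}) FROM {table}"
    ).fetchone()
    return {"rows": total, "nulls": nulls, "distinct": distinct}

print(profile_column(conn, "patients", "site"))
# {'rows': 4, 'nulls': 1, 'distinct': 2}
```

The same pattern extends naturally to min/max ranges, value-pattern checks, and referential-integrity counts as the profile deepens.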
Additional Responsibilities:
- Strong understanding of the Software Development Life Cycle (SDLC) with Agile Methodologies
- Knowledge and understanding of industry-standard/best practices requirements gathering methodologies.
- Knowledge and understanding of Information Technology systems and software development.
- Experience with data modeling and test data management tools.
- Experience in data integration projects.
- Good problem-solving and decision-making skills.
- Good communication skills within the team, site, and with the customer
Knowledge, Skills and Abilities
- Technical expertise in data architecture principles and design aspects of various DBMS and reporting concepts.
- Solid understanding of key DBMS platforms like SQL Server, Azure SQL
- Results-oriented, diligent, and works with a sense of urgency. Assertive and responsible for his/her own work (self-directed), with a strong affinity for defining work in deliverables and a willingness to commit to deadlines.
- Experience in MDM tools like MS DQ, SAS DM Studio, Tamr, Profisee, Reltio etc.
- Experience in Report and Dashboard development
- Statistical and Machine Learning models
- Python (scikit-learn, NumPy, pandas, gensim)
- Nice to Have:
- 1 year of ETL experience
- Natural Language Processing
- Neural networks and Deep learning
- Experience with the Keras, TensorFlow, spaCy, NLTK, and LightGBM Python libraries
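As a rough illustration of the NLP basics these libraries cover, here is a library-free sketch of tokenization and bag-of-words counting; in practice NLTK or spaCy handle this far more robustly, and the sample documents are invented.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split on non-alphanumeric characters -- a crude
    stand-in for what NLTK or spaCy tokenizers do properly."""
    return re.findall(r"[a-z0-9]+", text.lower())

def bag_of_words(docs):
    """One term-frequency Counter per document."""
    return [Counter(tokenize(d)) for d in docs]

docs = ["Clinical trial data quality.", "Trial data, trial outcomes."]
vectors = bag_of_words(docs)
print(vectors[1]["trial"])  # 2
```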
Interaction : Frequently interacts with subordinate supervisors.
Education : Bachelor's degree, preferably in Computer Science, B.E., or another quantitative field related to the area of assignment. Professional certification related to the area of assignment may be required.
Experience : 7 years of pharmaceutical/biotech/life sciences experience and 5 years of clinical trials experience and knowledge; excellent documentation, communication, and presentation skills, including PowerPoint
- Design, implement, and improve the analytics platform
- Implement and simplify self-service data query and analysis capabilities of the BI platform
- Develop and improve the current BI architecture, emphasizing data security, data quality and timeliness, scalability, and extensibility
- Deploy and use various big data technologies and run pilots to design low-latency data architectures at scale
- Collaborate with business analysts, data scientists, product managers, software development engineers, and other BI teams to develop, implement, and validate KPIs, statistical analyses, data profiling, prediction, forecasting, clustering, and machine learning algorithms
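A minimal sketch of the forecasting end of this work, assuming a simple moving-average baseline (the `daily_orders` series is invented); production KPIs would of course use the platform's real data and likely richer models.

```python
def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` points --
    the simplest baseline against which fancier models are judged."""
    if len(series) < window:
        raise ValueError("series shorter than window")
    tail = series[-window:]
    return sum(tail) / window

daily_orders = [120, 132, 128, 141, 150, 147]
print(moving_average_forecast(daily_orders))  # (141 + 150 + 147) / 3 = 146.0
```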
Educational
At Ganit we are building an elite team, so we are seeking candidates who possess the following background:
- 7+ years of relevant experience
- Expert-level skills writing and optimizing complex SQL
- Knowledge of data warehousing concepts
- Experience in data mining, profiling, and analysis
- Experience with complex data modelling, ETL design, and using large databases in a business environment
- Proficiency with the Linux command line and systems administration
- Experience with languages like Python/Java/Scala
- Experience with Big Data technologies such as Hive/Spark
- Proven ability to develop unconventional solutions; sees opportunities to innovate and leads the way
- Good experience working on cloud platforms like AWS, GCP & Azure, having worked on projects involving the creation of a data lake or data warehouse
- Excellent verbal and written communication
- Proven interpersonal skills and the ability to convey key insights from complex analyses in summarized business terms; ability to communicate effectively with multiple teams
Good to have
AWS/GCP/Azure Data Engineer Certification
We are looking for a Snowflake developer for one of our premium clients, for their PAN India locations.
Work closely with the Kinara management team to investigate strategically important business questions.
Lead a team through the entire analytical and machine learning model life cycle:
- Define the problem statement
- Build and clean datasets
- Exploratory data analysis
- Feature engineering
- Apply ML algorithms and assess the performance
- Code for deployment
- Code testing and troubleshooting
- Communicate analysis to stakeholders
- Manage data analysts and data scientists
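The lifecycle steps above can be sketched end-to-end in miniature. This is a toy illustration, not a prescribed method: the dataset is synthetic, and a hand-rolled nearest-centroid classifier stands in for whatever ML algorithm the team would actually apply.

```python
import random

# 1. Problem statement: classify 2-D points into two classes.
# 2. Build a (synthetic) dataset of (x, y, label) rows and split it.
random.seed(7)
data = ([(random.gauss(0, 1), random.gauss(0, 1), 0) for _ in range(50)]
        + [(random.gauss(3, 1), random.gauss(3, 1), 1) for _ in range(50)])
random.shuffle(data)
train, test = data[:80], data[80:]

# 3. "Train" a nearest-centroid classifier: one mean point per class.
def fit(rows):
    centroids = {}
    for label in {r[2] for r in rows}:
        pts = [r for r in rows if r[2] == label]
        centroids[label] = (sum(p[0] for p in pts) / len(pts),
                            sum(p[1] for p in pts) / len(pts))
    return centroids

def predict(centroids, x, y):
    return min(centroids, key=lambda c: (x - centroids[c][0]) ** 2
                                        + (y - centroids[c][1]) ** 2)

# 4. Assess performance on the held-out split.
centroids = fit(train)
accuracy = sum(predict(centroids, x, y) == label
               for x, y, label in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```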
What is the role?
You will be responsible for developing and designing front-end web architecture, ensuring the responsiveness of applications, and working alongside graphic designers for web design features, among other duties. You will be responsible for the functional/technical track of the project
Key Responsibilities
- Develop and automate large-scale, high-performance data processing systems (batch and/or streaming).
- Build high-quality software engineering practices towards building data infrastructure and pipelines at scale.
- Lead data engineering projects to ensure pipelines are reliable, efficient, testable, & maintainable
- Optimize performance to meet high throughput and scale
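A batch pipeline of the kind described can be sketched with generator stages, so records stream through extract → transform → load without being materialized all at once; the CSV payload and field names here are invented.

```python
import csv
import io

# Tiny inline payload standing in for a real source file or stream.
RAW = "order_id,amount\n1,100\n2,-5\n3,250\n"

def extract(fileobj):
    # Yield raw records one at a time.
    yield from csv.DictReader(fileobj)

def transform(rows):
    # Type-cast fields and drop invalid records.
    for row in rows:
        amount = int(row["amount"])
        if amount < 0:
            continue
        yield {"order_id": int(row["order_id"]), "amount": amount}

def load(rows):
    # Stand-in for a real warehouse write.
    sink = []
    for row in rows:
        sink.append(row)
    return sink

result = load(transform(extract(io.StringIO(RAW))))
print(len(result))  # 2 (the negative-amount record is dropped)
```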
What are we looking for?
- 4+ years of relevant industry experience.
- Working with data at the terabyte scale.
- Experience designing, building and operating robust distributed systems.
- Experience designing and deploying high throughput and low latency systems with reliable monitoring and logging practices.
- Building and leading teams.
- Working knowledge of relational databases like PostgreSQL/MySQL.
- Experience with Python / Spark / Kafka / Celery
- Experience working with OLTP and OLAP systems
- Excellent communication skills, both written and verbal.
- Experience working in cloud e.g., AWS, Azure or GCP
Whom will you work with?
You will work with a top-notch tech team, working closely with the architect and engineering head.
What can you look for?
A wholesome opportunity in a fast-paced environment that will enable you to juggle concepts while maintaining the quality of content, interact and share your ideas, and learn a great deal at work. Work with a team of highly talented young professionals and enjoy the benefits of being at this company.
We are
We strive to make selling fun with our SaaS incentive gamification product. Company is the #1 gamification software that automates and digitizes Sales Contests and Commission Programs. With game-like elements, rewards, recognitions, and complete access to relevant information, Company turbocharges an entire salesforce. Company also empowers Sales Managers with easy-to-publish game templates, leaderboards, and analytics to help accelerate performances and sustain growth.
We are a fun and high-energy team, with people from diverse backgrounds - united under the passion of getting things done. Rest assured that you shall get complete autonomy in your tasks and ample opportunities to develop your strengths.
Way forward
If you find this role exciting and want to join us in Bangalore, India, then apply by clicking below. Provide your details and upload your resume. All received resumes will be screened; shortlisted candidates will be invited for a discussion, and on mutual alignment and agreement we will proceed with hiring.
- Participate in full machine learning Lifecycle including data collection, cleaning, preprocessing to training models, and deploying them to Production.
- Discover data sources, get access to them, ingest them, clean them up, and make them “machine learning ready”.
- Work with data scientists to create and refine features from the underlying data and build pipelines to train and deploy models.
- Partner with data scientists to understand and implement machine learning algorithms.
- Support A/B tests, gather data, perform analysis, draw conclusions on the impact of your models.
- Work cross-functionally with product managers, data scientists, and product engineers, and communicate results to peers and leaders.
- Mentor junior team members
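For the A/B-test analysis mentioned above, a common first pass is a two-proportion z-test on conversion counts; the experiment numbers below are invented, and a real analysis would also check test power and multiple-comparison issues.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic comparing two conversion rates (pooled proportion).
    |z| > 1.96 is significant at roughly the 5% level (two-sided)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: variant B converts 230/2000 vs A's 200/2000.
z = two_proportion_z(200, 2000, 230, 2000)
print(round(z, 2))  # ~1.53, i.e. not significant at the 5% level
```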
Who we have in mind:
- Graduate in Computer Science or related field, or equivalent practical experience.
- 4+ years of experience in software engineering with 2+ years of direct experience in the machine learning field.
- Proficiency with SQL, Python, Spark, and basic libraries such as Scikit-learn, NumPy, Pandas.
- Familiarity with deep learning frameworks such as TensorFlow or Keras
- Experience with computer vision (OpenCV) and NLP frameworks (NLTK, spaCy, BERT).
- Basic knowledge of machine learning techniques (e.g., classification, regression, and clustering).
- Understanding of machine learning principles (training, validation, etc.)
- Strong hands-on knowledge of data query and data processing tools (e.g., SQL)
- Software engineering fundamentals: version control systems (e.g., Git/GitHub) and workflows, and the ability to write production-ready code.
- Experience deploying highly scalable software supporting millions or more users
- Experience building applications on cloud (AWS or Azure)
- Experience working in scrum teams with Agile tools like JIRA
- Strong oral and written communication skills. Ability to explain complex concepts and technical material to non-technical users
- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
- Experience in migrating on-premises data warehouses to data platforms on the Azure cloud.
- Designing and implementing data engineering, ingestion, and transformation functions
- Azure Synapse or Azure SQL Data Warehouse
- Spark on Azure, available in HDInsight and Databricks
About GlowRoad:
GlowRoad is building India's most profitable social e-commerce platform, where resellers share the catalog of products through their network on Facebook, WhatsApp, Instagram, etc. and convert them to sales. GlowRoad is on a mission to create micro-entrepreneurs (resellers) who can set up their web-store, market their products, and track all transactions through its platform.
The GlowRoad app has ~15M downloads and 1 million+ MAUs.
GlowRoad has been funded by global VCs like Accel Partners, CDH, KIP, and Vertex Ventures and recently raised Series C funding. We are scaling our operations across India.
GlowRoad is looking for team members passionate about building platforms for the next billion users and reimagining e-commerce for mobile-first users. It is a great, fun, open, energetic, and creative environment with approachable leadership, passionate people, open communication, and high growth for employees.
Role:
● Gather, process/analyze, and report business data across departments
● Report key business data/metrics on a regular basis (daily, weekly, and monthly as relevant)
● Structure concise reports to share with management
● Work closely with Senior Analysts to create data pipelines for analytical databases for the Category, Operations, Marketing, and Support teams
● Assist Senior Analysts in projects by learning new reporting tools like Power BI and advanced analytics with R
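The daily/weekly reporting described above boils down to rolling raw records up to metrics. A minimal sketch, with invented order records standing in for the real analytical database:

```python
from collections import defaultdict
from datetime import date

# Hypothetical transaction records (stand-in for a real analytics table).
orders = [
    {"day": date(2024, 1, 1), "category": "fashion", "gmv": 499},
    {"day": date(2024, 1, 1), "category": "home",    "gmv": 1299},
    {"day": date(2024, 1, 2), "category": "fashion", "gmv": 799},
]

def daily_gmv(rows):
    """Roll transactions up to a per-day GMV metric for the daily report."""
    report = defaultdict(int)
    for row in rows:
        report[row["day"]] += row["gmv"]
    return dict(report)

print(daily_gmv(orders))
```

Grouping by `category` instead of `day` gives the equivalent per-team cut; in practice the same roll-up would be a SQL `GROUP BY` or a pivot in Excel/Google Spreadsheets.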
Basic Qualifications
● Engineering Graduate
● 6-24 months of hands-on experience with SQL, Excel, and Google Spreadsheets
● Experience in creating MIS/dashboards in Excel/Google Spreadsheets
● Strong in mathematics
● Ability to take full ownership of timelines and data sanity with respect to reports
● Basic verbal and written English communication
- Sr. Data Engineer:
Core Skills – Data Engineering, Big Data, Pyspark, Spark SQL and Python
Candidates with a prior Palantir Foundry or Clinical Trial Data Model background are preferred.
Major accountabilities:
- Responsible for data engineering: Foundry data pipeline creation, Foundry analysis & reporting, Slate application development, reusable code development and management, and integrating internal or external systems with Foundry for high-quality data ingestion.
- Has a good understanding of the Foundry platform landscape and its capabilities.
- Performs data analysis required to troubleshoot data related issues and assist in the resolution of data issues.
- Defines company data assets (data models) and the PySpark/Spark SQL jobs that populate them.
- Designs data integrations and data quality framework.
- Design & implement integrations with internal and external systems and the F1 AWS platform using Foundry Data Connector or Magritte agent
- Collaborate with data scientists, data analysts, and technology teams to document and leverage their understanding of the Foundry integration with different data sources
- Actively participate in agile work practices
- Coordinate with Quality Engineers to ensure that all quality controls, naming conventions, and best practices have been followed
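The data quality framework mentioned in the accountabilities above can be sketched as a set of named rule predicates plus a runner that counts failures per rule; the check names, fields, and sample records are hypothetical.

```python
# Minimal rule-based data quality framework: each check is a named
# predicate over a record, and the runner reports failures per rule.
CHECKS = {
    "subject_id_present": lambda r: bool(r.get("subject_id")),
    "age_in_range":       lambda r: 0 < r.get("age", -1) < 120,
}

def run_checks(records, checks=CHECKS):
    failures = {name: 0 for name in checks}
    for record in records:
        for name, check in checks.items():
            if not check(record):
                failures[name] += 1
    return failures

records = [
    {"subject_id": "S-001", "age": 44},
    {"subject_id": "",      "age": 31},
    {"subject_id": "S-003", "age": 140},
]
print(run_checks(records))  # {'subject_id_present': 1, 'age_in_range': 1}
```

In a Foundry pipeline the same rules would typically run as a transform step, with the failure counts written to a health-check dataset rather than printed.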
Desired Candidate Profile :
- Strong data engineering background
- Experience with Clinical Data Model is preferred
- Experience in
- SQL Server, Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing
- Java and Groovy for our back-end applications and data integration tools
- Python for data processing and analysis
- Cloud infrastructure based on AWS EC2 and S3
- 7+ years IT experience, 2+ years’ experience in Palantir Foundry Platform, 4+ years’ experience in Big Data platform
- 5+ years of Python and Pyspark development experience
- Strong troubleshooting and problem solving skills
- BTech or master's degree in computer science or a related technical field
- Experience designing, building, and maintaining big data pipeline systems
- Hands-on experience on Palantir Foundry Platform and Foundry custom Apps development
- Able to design and implement data integration between Palantir Foundry and external Apps based on Foundry data connector framework
- Hands-on with programming languages, primarily Python, R, Java, and Unix shell scripts
- Hands-on experience with the AWS / Azure cloud platforms and stacks
- Strong in API-based architecture and concepts; able to do a quick PoC using API integration and development
- Knowledge of machine learning and AI
- Skill and comfort working in a rapidly changing environment with dynamic objectives and iteration with users.
- Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision