Lead Engineer - Data Quality
Leading Sales Platform

Agency job
via Qrata
5 - 10 yrs
₹30L - ₹45L / yr
Bengaluru (Bangalore)
Skills
Big Data
ETL
Spark
Data engineering
Data governance
Informatica Data Quality
Java
Scala
Python
Responsibilities:
• Work with product managers and development leads to create testing strategies
• Develop and scale an automated data validation framework
• Build and monitor key metrics of data health across the entire Big Data pipeline
• Establish early alerting and escalation processes to quickly identify and remedy quality issues before anything goes live in front of the customer
• Build and refine tools and processes for quick root-cause diagnostics
• Contribute to quality assurance standards, policies, and procedures to influence the data-quality mindset across the company
Required skills and experience:
• Solid experience working in Big Data ETL environments with Spark and Java/Scala/Python
• Strong experience with AWS cloud technologies (EC2, EMR, S3, Kinesis, etc.)
• Experience building monitoring/alerting frameworks with tools like New Relic, with escalations via Slack, email, dashboard integrations, etc.
• Executive-level communication, prioritization, and team leadership skills
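The automated validation framework this role describes can be sketched in miniature. The rule names, thresholds, and sample records below are illustrative assumptions, not the platform's actual framework:

```python
def null_rate(records, field):
    """Fraction of records where `field` is missing or None."""
    missing = sum(1 for r in records if r.get(field) is None)
    return missing / len(records) if records else 0.0

def validate(records, rules):
    """Run each rule against the batch; return per-rule score and pass/fail."""
    results = {}
    for name, (check, threshold) in rules.items():
        score = check(records)
        results[name] = {"score": score, "passed": score <= threshold}
    return results

# Hypothetical batch: alert if more than 5% of rows lack a customer_id,
# or more than 1% of amounts are negative.
records = [
    {"customer_id": "c1", "amount": 120.0},
    {"customer_id": None, "amount": 35.5},
    {"customer_id": "c3", "amount": -4.0},
    {"customer_id": "c4", "amount": 80.0},
]
rules = {
    "customer_id_nulls": (lambda rs: null_rate(rs, "customer_id"), 0.05),
    "negative_amounts": (
        lambda rs: sum(1 for r in rs if (r.get("amount") or 0) < 0) / len(rs),
        0.01,
    ),
}
report = validate(records, rules)
failed = [name for name, res in report.items() if not res["passed"]]
print(failed)
```

A real framework would run checks like these against each pipeline stage and feed the failures into the alerting/escalation channel.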

Similar jobs

Chennai
4 - 6 yrs
₹18L - ₹23L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
+6 more

Roles & Responsibilities:


- Adopt novel and breakthrough Deep Learning/Machine Learning technology to solve real-world problems across industries.

- Develop prototypes of machine learning models based on existing research papers.

- Utilize published/existing models to meet business requirements; tweak existing implementations to improve efficiency and adapt them to use-case variations.

- Optimize machine learning model training and inference time.

- Work closely with development and QA teams to transition prototypes into commercial products.

- Work independently end-to-end, from data collection and preparation/annotation through validation of outcomes.

- Define and develop ML infrastructure to improve the efficiency of ML development workflows.


Must Have:

- Experience in productizing and deploying ML solutions.

- AI/ML expertise: Computer Vision with Deep Learning; experience with object detection, classification, and recognition; document layout and understanding tasks; OCR/ICR.

- Thorough understanding of the full ML pipeline, from data collection to model building to inference.

- Experience with Python, OpenCV, and at least a few frameworks/libraries (TensorFlow, Keras, PyTorch, spaCy, fastText, Scikit-learn, etc.).

- 5+ years of relevant experience.

- Experience or knowledge in MLOps.


Good to Have: NLP (text classification, entity extraction, content summarization), AWS, Docker.

Top Management Consulting Company
Agency job
via People First Consultants by Naveed Mohd
Gurugram, Bengaluru (Bangalore)
2 - 9 yrs
Best in industry
Python
SQL
Amazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)
Greetings!!

We are looking for a technically driven "Full-Stack Engineer" for one of our premium clients.

COMPANY DESCRIPTION:
This Company is a global management consulting firm. We are the trusted advisor to the world's leading businesses, governments, and institutions. We work with leading organizations across the private, public and social sectors. 

Qualifications
• Bachelor's degree in computer science or related field; Master's degree is a plus
• 3+ years of relevant work experience
• Meaningful experience with at least two of the following technologies: Python, Scala, Java
• Strong, proven experience with distributed processing frameworks (Spark, Hadoop, EMR) and SQL is expected
• Commercial client-facing project experience is helpful, including working in close-knit teams
• Ability to work across structured, semi-structured, and unstructured data, extracting information and
identifying linkages across disparate data sets
• Proven ability to clearly communicate complex solutions
• Understanding of Information Security principles to ensure compliant handling and management of client data
• Experience and interest in Cloud platforms such as: AWS, Azure, Google Platform or Databricks
• Extraordinary attention to detail
a global provider of Business Process Management company
Agency job
via Jobdost by Saida Jabbar
Bengaluru (Bangalore)
4 - 10 yrs
₹15L - ₹22L / yr
SQL Azure
ADF
Business process management
Windows Azure
SQL
+12 more

Desired Competencies:

 

•  Expertise in Azure Data Factory V2

•  Expertise in other Azure components like Data Lake Store, SQL Database, and Databricks

•  Must have working knowledge of Spark programming

•  Good exposure to data projects dealing with data design and source-to-target documentation, including defining transformation rules

•  Strong knowledge of CI/CD processes

•  Experience in building Power BI reports

•  Understanding of the different components: pipelines, activities, datasets, and linked services

•  Exposure to dynamic configuration of pipelines using datasets and linked services

•  Experience in designing, developing, and deploying pipelines to higher environments

•  Good knowledge of file formats for flexible usage and file location objects (SFTP, FTP, local, HDFS, ADLS, Blob, Amazon S3, etc.)

•  Strong knowledge of SQL queries

•  Must have worked in full life-cycle development, from functional design to deployment

•  Should have working knowledge of Git and SVN

•  Good experience in establishing connections with heterogeneous sources like Hadoop, Hive, Amazon, Azure, Salesforce, SAP, HANA, APIs, and various databases

•  Should have working knowledge of the different resources available in Azure, such as Storage Account, Synapse, Azure SQL Server, Azure Databricks, and Azure Purview

•  Any experience with metadata management, data modelling, and related tools (Erwin, ER Studio, or others) is preferred

 

Preferred Qualifications:

•  Bachelor's degree in Computer Science or Technology

•  Proven success in contributing to a team-oriented environment

•  Proven ability to work creatively and analytically in a problem-solving environment

•  Excellent communication (written and oral) and interpersonal skills

Qualifications

BE/BTECH

KEY RESPONSIBILITIES :

You will join a team designing and building a data warehouse covering both relational and dimensional models, developing reports, data marts, and other extracts, and delivering these via SSIS, SSRS, SSAS, and Power BI. The role is seen as vital in delivering a single version of the truth for the client's data and in delivering the MI & BI that enables both operational and strategic decision making.

You will be able to take responsibility for projects over the entire software lifecycle and work with minimum supervision. This would include technical analysis, design, development, and test support as well as managing the delivery to production.

The initial project being resourced is around the development and implementation of a Data Warehouse and associated MI/BI functions.

 

Principal Activities:

1.       Interpret written business requirements documents

2.       Specify (High Level Design and Tech Spec), code and write automated unit tests for new aspects of MI/BI Service.

3.       Write clear and concise supporting documentation for deliverable items.

4.       Become a member of the skilled development team willing to contribute and share experiences and learn as appropriate.

5.       Review and contribute to requirements documentation.

6.       Provide third line support for internally developed software.

7.       Create and maintain continuous deployment pipelines.

8.       Help maintain Development Team standards and principles.

9.       Contribute and share learning and experiences with the greater Development team.

10.   Work within the company’s approved processes, including design and service transition.

11.   Collaborate with other teams and departments across the firm.

12.   Be willing to travel to other offices when required.
13.   Comply with any reasonable instructions or regulations issued by the Company from time to time, including those set out in the dealing and other manuals, staff handbooks, and all other group policies.


Location – Bangalore

 

Kloud9 Technologies
Posted by manjula komala
Bengaluru (Bangalore)
3 - 6 yrs
₹18L - ₹27L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+6 more

About Kloud9:


Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.


Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. E-commerce in any industry is limiting and poses a huge challenge in terms of the finances spent on physical data infrastructure.


At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.


Our sole focus is to provide cloud expertise to the retail industry, giving our clients the empowerment to take their business to the next level. Our proficient architects, engineers, and developers have been designing, building, and implementing solutions for retailers for an average of more than 20 years.


We are a cloud vendor that is both platform and technology independent. Our vendor independence not only gives us a unique perspective on the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.



What we are looking for:


●       3+ years’ experience developing Big Data & Analytic solutions

●       Experience building data lake solutions leveraging Google Data Products (e.g. Dataproc, AI Building Blocks, Looker, Cloud Data Fusion, Dataprep, etc.), Hive, Spark

●       Experience with relational SQL/No SQL

●       Experience with Spark (Scala/Python/Java) and Kafka

●       Work experience with using Databricks (Data Engineering and Delta Lake components)

●       Experience with source control tools such as GitHub and related dev process

●       Experience with workflow scheduling tools such as Airflow

●       In-depth knowledge of any scalable cloud vendor (GCP preferred)

●       Has a passion for data solutions

●       Strong understanding of data structures and algorithms

●       Strong understanding of solution and technical design

●       Has a strong problem solving and analytical mindset

●       Experience working with Agile Teams.

●       Able to influence and communicate effectively, both verbally and written, with team members and business stakeholders

●       Able to quickly pick up new programming languages, technologies, and frameworks

●       Bachelor’s Degree in computer science


Why Explore a Career at Kloud9:


With job opportunities in prime locations in the US, London, Poland, and Bengaluru, we help build your career path in the cutting-edge technologies of AI, Machine Learning, and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers!

Product based company
Agency job
via Zyvka Global Services by Ridhima Sharma
Bengaluru (Bangalore)
3 - 12 yrs
₹5L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+6 more

Responsibilities:

  • Act as a technical resource for the Data Science team and be involved in creating and implementing current and future analytics projects, such as data lake and data warehouse design.
  • Analyse and design ETL solutions to store and fetch data from multiple systems like Google Analytics, CleverTap, CRM systems, etc.
  • Develop and maintain data pipelines for real-time as well as batch analytics use cases.
  • Collaborate with data scientists and actively work in the feature engineering and data preparation phases of model building.
  • Collaborate with product development and DevOps teams in implementing data collection and aggregation solutions.
  • Ensure quality and consistency of the data in the data warehouse and follow best data governance practices.
  • Analyse large amounts of information to discover trends and patterns.
  • Mine and analyse data from company databases to drive optimization and improvement of product development, marketing techniques, and business strategies.

Requirements

  • Bachelor’s or Master’s degree in a highly numerate discipline such as Engineering, Science, or Economics
  • 2-6 years of proven experience working as a Data Engineer, preferably in an e-commerce, web-based, or consumer technology company
  • Hands-on experience working with big data tools like Hadoop, Spark, Flink, Kafka, and so on
  • Good understanding of the AWS ecosystem for big data analytics
  • Hands-on experience in creating data pipelines, either using tools or by independently writing scripts
  • Hands-on experience in scripting languages like Python, Scala, Unix shell scripting, and so on
  • Strong problem-solving skills with an emphasis on product development
  • Experience using business intelligence tools, e.g. Tableau or Power BI, would be an added advantage (not mandatory)
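For illustration, the "creating data pipelines ... by independently writing scripts" requirement can be sketched as a minimal extract-transform-load script. The CSV rows, field names, and in-memory sink are made-up assumptions standing in for real sources and a warehouse table:

```python
import csv
import io

RAW_CSV = """user_id,country,amount
u1,IN,100
u2,US,250
u1,IN,50
u3,US,
"""

def extract(text):
    """Parse CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop rows with missing amounts, cast types, aggregate by country."""
    totals = {}
    for row in rows:
        if not row["amount"]:
            continue  # skip incomplete records
        totals[row["country"]] = totals.get(row["country"], 0) + int(row["amount"])
    return totals

def load(totals, sink):
    """Write the aggregate to the sink, sorted for deterministic output."""
    for country, total in sorted(totals.items()):
        sink.append((country, total))

sink = []
load(transform(extract(RAW_CSV)), sink)
print(sink)  # [('IN', 150), ('US', 250)]
```

In production the same extract/transform/load shape would read from an API or database, run under a scheduler, and write to a warehouse.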
MNC
Agency job
via Fragma Data Systems by geeti gaurav mohanty
Bengaluru (Bangalore)
3 - 5 yrs
₹6L - ₹12L / yr
Spark
Big Data
Data engineering
Hadoop
Apache Kafka
+5 more
Data Engineer

• Drive the data engineering implementation
• Strong experience in building data pipelines
• AWS stack experience is a must
• Deliver conceptual, logical, and physical data models for the implementation teams
• Strong SQL skills are a must: advanced working knowledge of SQL, experience with a variety of relational databases, and SQL query authoring
• AWS cloud data pipeline experience is a must: data pipelines and data-centric applications using distributed storage platforms like S3 and distributed processing platforms like Spark, Airflow, and Kafka
• Working knowledge of AWS technologies such as S3, EC2, EMR, RDS, Lambda, and Elasticsearch
• Ability to use a major programming language (e.g. Python or Java) to process data for modelling
Reval Analytics
Posted by Jyoti Nair
Pune
3 - 6 yrs
₹5L - ₹9L / yr
Python
Django
Big Data

Position Name: Software Developer

Required Experience: 3+ Years

Number of positions: 4

Qualifications: Master’s or Bachelor’s degree in Engineering, Computer Science, or equivalent (BE/BTech or MS in Computer Science).

Key Skills: Python, Django, Nginx, Linux, Sanic, Pandas, NumPy, Snowflake, SciPy, Data Visualization, Redshift, Big Data, Charting

Compensation - As per industry standards.

Joining - Immediate joining is preferable.

 

Required Skills:

 

  • Strong experience in Python and web frameworks like Django, Tornado, and/or Flask
  • Experience in data analytics using standard Python libraries such as Pandas, NumPy, and Matplotlib
  • Conversant in implementing charts using charting libraries like Highcharts, d3.js, c3.js, and dc.js, and data visualization tools like Plotly and ggplot
  • Experience handling large databases and data warehouse technologies like MongoDB, MySQL, Big Data, Snowflake, and Redshift
  • Experience in building APIs and multi-threading for tasks on the Linux platform
  • Exposure to finance and capital markets is an added advantage
  • Strong understanding of software design principles, algorithms, data structures, design patterns, and multithreading concepts
  • Experience building highly available distributed systems on cloud infrastructure, or exposure to the architectural patterns of large, high-scale web applications
  • Basic understanding of front-end technologies such as JavaScript, HTML5, and CSS3
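The "multi-threading for tasks" item above can be sketched as fanning a batch of I/O-bound tasks across a thread pool. The `fetch` function here is a stand-in assumption for a real network or database call:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch(task_id):
    """Pretend to do I/O-bound work, then return a result."""
    time.sleep(0.01)  # placeholder for a network/database round trip
    return task_id * task_id

task_ids = range(8)
with ThreadPoolExecutor(max_workers=4) as pool:
    # map runs tasks concurrently but yields results in input order
    results = list(pool.map(fetch, task_ids))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Threads help here because the workers spend their time waiting on I/O; CPU-bound work would instead call for multiprocessing due to the GIL.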

 

Company Description:

Reval Analytical Services is a fully owned subsidiary of Virtua Research Inc., US. It is a financial services technology company focused on consensus analytics, peer analytics, and web-enabled information delivery. The company’s unique combination of investment research experience, modeling expertise, and software development capabilities enables it to provide industry-leading financial research tools and services for investors, analysts, and corporate management.

 

Website: http://www.virtuaresearch.com

Oil & Energy Industry
Agency job
via Green Bridge Consulting LLP by Susmita Mishra
NCR (Delhi | Gurgaon | Noida)
1 - 3 yrs
₹8L - ₹12L / yr
Machine Learning (ML)
Data Science
Deep Learning
Digital Signal Processing
Statistical signal processing
+6 more
• Understanding business objectives and developing models that help to achieve them, along with metrics to track their progress
• Managing available resources such as hardware, data, and personnel so that deadlines are met
• Analysing the ML algorithms that could be used to solve a given problem and ranking them by their success probability
• Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world
• Verifying data quality, and/or ensuring it via data cleaning
• Supervising the data acquisition process if more data is needed
• Defining validation strategies
• Defining the pre-processing or feature engineering to be done on a given dataset
• Defining data augmentation pipelines
• Training models and tuning their hyperparameters
• Analysing the errors of the model and designing strategies to overcome them
• Deploying models to production
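Training models and tuning their hyperparameters can be illustrated with a toy example: a one-feature ridge regression with a grid search over the regularization strength on a held-out validation split. The data and the grid are made-up assumptions; a real workflow would use an ML library and cross-validation:

```python
def fit_ridge_1d(xs, ys, lam):
    """Closed-form ridge fit for y ≈ w * x (no intercept)."""
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs) + lam
    return num / den

def mse(w, xs, ys):
    """Mean squared error of the fitted slope on a dataset."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Hypothetical data: training follows y ≈ 2x with noise; validation is held out.
train_x, train_y = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]
val_x, val_y = [5, 6], [10.0, 12.0]

# Grid search: pick the regularization strength with the lowest validation error.
best_lam, best_err = None, float("inf")
for lam in [0.0, 0.1, 1.0, 10.0]:
    w = fit_ridge_1d(train_x, train_y, lam)
    err = mse(w, val_x, val_y)
    if err < best_err:
        best_lam, best_err = lam, err

print(best_lam)
```

The same select-on-validation-error loop scales up to real models, where the grid covers learning rates, depths, and other hyperparameters.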
Computer Power Group Pvt Ltd
Bengaluru (Bangalore), Chennai, Pune, Mumbai
7 - 13 yrs
₹14L - ₹20L / yr
R Programming
Python
Data Science
SQL server
Business Analysis
+3 more
Requirement Specifications:

Job Title: Data Scientist
Experience: 7 to 10 years
Work Location: Mumbai, Bengaluru, Chennai
Job Role: Permanent
Notice Period: Immediate to 60 days

Job description:
• Support delivery of one or more data science use cases, leading on data discovery and model building activities
• Conceptualize and quickly build POCs on new product ideas; should be willing to work as an individual contributor
• Open to learning and implementing newer tools/products
• Experiment with and identify the best methods, techniques, and algorithms for analytical problems
• Operationalize: work closely with the engineering, infrastructure, service management, and business teams to operationalize use cases

Essential Skills:
• Minimum 2-7 years of hands-on experience with statistical software tools: SQL, R, Python
• 3+ years’ experience in business analytics, forecasting, or business planning with an emphasis on analytical modeling, quantitative reasoning, and metrics reporting
• Experience working with large data sets to extract business insights or build predictive models
• Proficiency in one or more statistical tools/languages (Python, Scala, R, SPSS, or SAS) and related packages like Pandas, SciPy/Scikit-learn, NumPy, etc.
• Good data intuition and analysis skills; SQL and PL/SQL knowledge is a must
• Ability to manage and transform a variety of datasets to cleanse, join, and aggregate them
• Hands-on experience running methods such as Regression, Random Forest, k-NN, k-Means, boosted trees, SVM, Neural Networks, text mining, NLP, statistical modelling, data mining, exploratory data analysis, and statistics (hypothesis testing, descriptive statistics)
• Deep domain knowledge (BFSI, Manufacturing, Auto, Airlines, Supply Chain, Retail & CPG)
• Demonstrated ability to work under time constraints while delivering incremental value

Education: minimum a Master’s in Statistics, or a PhD in domains linked to applied statistics, applied physics, Artificial Intelligence, Computer Vision, etc.; BE/BTech/BSc Statistics/BSc Maths
GreedyGame
Posted by Debdutta Pal
Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹12L / yr
Python
MySQL
Data Science
NOSQL Databases
Greedygame is looking for a data scientist who will help us make sense of the vast amount of available data in order to make smarter decisions and develop high-quality products. Your primary focus will be using data mining techniques, statistical analysis, and machine learning to build high-quality prediction systems and strong consumer engagement profiles.

Responsibilities:
• Build the required statistical models and heuristics to predict, optimize, and guide various aspects of our business based on available data
• Interact with product and operations teams to identify gaps, questions, and issues for data analysis and experiments
• Develop and code software programs, algorithms, and automated processes that cleanse, integrate, and evaluate large datasets from multiple sources
• Create systems that use data on user behavior to identify actionable insights, and convey these insights to product and operations teams from time to time
• Help redefine the ad viewing experience for consumers on a global scale

Skills Required:
• Coding experience in Python, MySQL, and NoSQL, and experience building prototypes for algorithms
• Comfortable with and willing to learn any machine learning algorithm, reading research papers and delving deep into the maths
• Passionate and curious to learn the latest trends, methods, and technologies in this field

What’s in it for you?
- Opportunity to be a part of the big disruption we are creating in the ad-tech space
- Work with complete autonomy, and take on multiple responsibilities
- Work in a fast-paced environment, with uncapped opportunities to learn and grow
- Office in one of the most happening places in India
- Amazing colleagues, weekly lunches, and beer on Fridays!

What we are building:
GreedyGame is a platform that enables the blending of ads into the mobile gaming experience using assets like backgrounds, characters, and power-ups. It helps advertisers engage audiences while they are playing games, empowers game developers to monetize their game development efforts through non-intrusive advertising, and allows gamers to enjoy gaming content without having to deal with distracting advertising.