Data Analyst (SQL)

at Insurtech & B2C Tech firm

Agency job
via Merito
Mumbai
3 - 5 yrs
₹6L - ₹8L / yr
Full time
Skills
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
Data Analytics
About us :-

Merito is a curated talent platform where we identify, assess, and connect candidates with matching job opportunities. We are working with the mission to change the way hiring is done. The company was founded by a team of alumni from IIM Ahmedabad and McKinsey with more than two decades of experience in recruitment, training, and coaching.

About our Client :-

Our client is an InsureTech platform backed by a major retail giant that offers contextual insurance in a blink.

We are a team of consumer-obsessed and driven people. We want to reimagine insurance through a consumption lens for millions. We are a B2C and B2B2C platform that wants to simplify the way insurance is experienced, on principles of transparency, trust, and contextuality. The multiple ideas we are building on are all about an experience beyond the way insurance is understood or sold in India at present.

If you are a passionate mind with the courage to reimagine, execute and grow, you are most welcome to join hands with us in any form.

About the role :-

We are looking for an experienced Data Analyst to join our team. The ideal candidate is someone who can turn data into information, and information into valuable insights that help drive business decisions.
The individual will be part of the team responsible for making marketing, business, and strategic decisions.

Responsibilities :-

• Work closely with project managers to understand and maintain focus on their analytical needs, identify critical metrics and KPIs, and deliver actionable insights to relevant decision-makers.

• Analyze data from multiple sources (app, store sales, digital & social media) and create best-practice reports based on data mining, analysis, and visualization that help shape business, marketing, and communication plans.

• Evaluate internal systems for efficiency, problems, and inaccuracies, and develop and maintain protocols for handling, processing, and cleaning data.

• Coordinate with internal stakeholders and external partners to gather requirements, provide status updates, and build relationships.

• Support the build-out of customer analytics.

• Build automated dashboards (e.g. live sale tracking, weekly management dashboards); a minimal sketch of the kind of query behind such a dashboard follows below.
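To make the dashboard item above concrete, here is a minimal, hedged sketch of the kind of query that could feed a live sale-tracking dashboard. The table and column names (orders, order_ts, gmv), the connection string, and the pandas/SQLAlchemy stack are illustrative assumptions, not the client's actual setup.

```python
# Minimal sketch: hourly order counts and GMV for a live sale-tracking dashboard.
# Table/column names and connection details are hypothetical placeholders.
import pandas as pd
from sqlalchemy import create_engine

# Illustrative connection string; real credentials would come from configuration.
engine = create_engine("mysql+pymysql://user:password@db-host:3306/sales")

LIVE_SALES_SQL = """
SELECT HOUR(order_ts) AS hour_of_day,
       COUNT(*)       AS orders,
       SUM(gmv)       AS gmv
FROM orders
WHERE order_ts >= CURDATE()
GROUP BY hour_of_day
ORDER BY hour_of_day;
"""

def refresh_dashboard_frame() -> pd.DataFrame:
    """Pull today's hourly order counts and GMV for the dashboard."""
    return pd.read_sql(LIVE_SALES_SQL, engine)

if __name__ == "__main__":
    print(refresh_dashboard_frame().head())
```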

Required Skills and Qualifications :-

• Bachelor’s degree in Engineering

• 3 – 5 years of experience as a data analyst in the retail industry

• Hands-on experience querying data across multiple SQL engines (MySQL & Presto); the sketch below illustrates how the same KPI query can differ between the two

• Technical writing experience in relevant areas, including code, queries, reports, and presentations

• Excellent Excel skills and good communication skills

• Experience in Python/PySpark would be an added advantage
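Since the role calls for both MySQL and Presto, below is a small, hedged illustration of how the same monthly-revenue KPI might be phrased in each engine. The orders table and its columns are hypothetical, and dialect details should be verified against the actual server and cluster versions.

```python
# Illustrative only: the same monthly-revenue KPI phrased for MySQL and for Presto.
# The `orders` table and its columns are hypothetical placeholders.

MYSQL_MONTHLY_REVENUE = """
SELECT DATE_FORMAT(order_ts, '%Y-%m') AS month,
       SUM(gmv)                       AS revenue
FROM orders
GROUP BY month
ORDER BY month;
"""

# Presto supports date_trunc(), and GROUP BY can reference output columns
# by ordinal position.
PRESTO_MONTHLY_REVENUE = """
SELECT date_trunc('month', order_ts) AS month,
       SUM(gmv)                      AS revenue
FROM orders
GROUP BY 1
ORDER BY 1;
"""
```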

Similar jobs

Data Scientist

at OnlineSales.ai

Founded 2016  •  Products & Services  •  20-100 employees  •  Raised funding
Python
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Predictive modelling
Neural networks
recommendation algorithm
Pune
3 - 6 yrs
Best in industry

We are looking for a Data Scientist who is excited by the prospect of mining big datasets, deriving actionable insights, building production-ready predictive models, and having a direct impact on the business.

 

RESPONSIBILITIES:

  • Hands-on knowledge of Python for building production-ready data products.
  • Strong Statistical Analysis and Modeling skills.
  • Proven skills in Data Science, Data Mining, and Machine Learning, covering the spectrum of supervised as well as unsupervised learning algorithms (a brief illustrative sketch follows this list).
  • Ability to grasp and work with new technologies quickly.
  • Knowledge of Reinforcement Learning is a plus. 
  • Some knowledge of big data technologies such as Hadoop and Apache Spark will be an added advantage.
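As a small, generic illustration of the supervised/unsupervised spectrum mentioned above (not OnlineSales.ai's actual stack), the sketch below fits a classifier and a clustering model on synthetic data with scikit-learn.

```python
# Generic illustration of supervised vs. unsupervised learning with scikit-learn;
# synthetic data only, not tied to any particular production system.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Supervised: learn a mapping from features to labels.
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("classification accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Unsupervised: discover structure without labels.
km = KMeans(n_clusters=2, n_init=10, random_state=42)
labels = km.fit_predict(X)
print("cluster sizes:", [int((labels == k).sum()) for k in range(2)])
```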

 

QUALIFICATION REQUIRED:

  • Bachelor’s or Master's degree in STEM. 
  • 3-7 years of relevant experience in applied Data Science.

 

@ ONLINESALES.AI:

  • Get a chance to work with some of the most popular big data technologies in the market
  • Build algorithms that work on web-scale data
  • Participate in every aspect of building a data-based product
  • Chance of making a dent in a multi-billion dollar digital advertising industry

 

YOU CAN BE A GREAT FIT IF YOU HAVE:

  • An analytical approach towards problem-solving
  • Experience with data extraction and management
  • Experience with R, Python for Data Analysis and Data modeling. 
  • Ability to set goals and meet deadlines in a fast-paced working environment
  • Understanding of E-Commerce and Advertising as a domain. 
  • Willingness to work for a startup.
  • Individual Contributor with a sense of Business Acumen and a hunger to make an impact.
Job posted by
Shantanu Harkut
Applied Machine Learning Engineer

Machine Learning (ML)
Deep Learning
Artificial Intelligence (AI)
Remote, all over India
2 - 6 yrs
₹6L - ₹20L / yr
Hi,

We have an excellent job opportunity for an "Applied Machine Learning Engineer" with a product-based organization, available in remote working mode or at the Mumbai location.

Job Responsibilities:

  • Apply your knowledge of ML and statistics to conceptualise, experiment, develop & deploy machine learning & deep learning systems.
  • Understanding the business objectives & defining the right target metrics to track performance & progress.
  • Defining & building datasets with the appropriate representation techniques for learning.
  • Training & tuning models. Running evaluation & test experiments on the models.
  • Build ML pipelines end to end. (Everything MLOps.)
  • Building pipelines for the various stages.
  • Deploying models.
  • Troubleshooting issues with models in production.
  • Reporting results of model performance in production.
  • Retraining, performance logging & maintenance.
  • Help the business with insights for better decision-making: you will build predictive models for internal business operations and derive insights from the trained models & data to help the product & business teams make better decisions.

Requirements:

  • 2+ years of work experience as an ML Engineer or Data Scientist, with a Bachelor's degree in Computer Science or a related field.
  • Theoretical & practical knowledge of Machine Learning, Deep Learning and Statistical methods. (NLP Tasks, Recommender Systems, Predictive Modelling etc)
  • Since Pepper is a content company, you will work on many interesting text based problems. Solid understanding of Natural Language Processing techniques with Deep Learning is a must for this role.
  • Familiarity with popular NLP applications (text classification, machine translation, named entity recognition, summarisation, question answering, zero-shot learning, etc.) and text representation architectures & techniques (Bag of Words, TF-IDF, Word2vec, GloVe, BERT, ELMo, GPT, etc.); a minimal text-classification sketch follows this list.
  • Experience with ML frameworks (like Tensorflow, Keras, PyTorch) & libraries like Sklearn.
  • Experience with ML infrastructure & shipping models.
  • Excellent programming & algorithmic skills. Good understanding of Data Structures and algorithms (fluent in at least one object oriented programming language). Proficiency in Python is a must.
  • Strong understanding of database systems & schema design. Proficient in SQL
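To illustrate the text-representation and text-classification items above, here is a minimal, hedged sketch of a TF-IDF plus logistic-regression classifier in scikit-learn. The tiny inline dataset is fabricated purely for demonstration; a real system would use properly labelled data and evaluation.

```python
# Minimal TF-IDF + logistic regression text-classification sketch (scikit-learn).
# The toy texts and labels below are placeholders purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = [
    "great product, loved the quality",
    "terrible experience, never again",
    "excellent service and fast delivery",
    "awful support, very disappointed",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(texts, labels)

print(clf.predict(["fast delivery and great quality"]))  # expected: [1]
```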

Please let us know if you are interested in the above opening; if so, please share the following details:

 

Current CTC :

Expected CTC :

Notice Period :

Relevant experience in  Machine Learning :

Relevant experience in Deep Learning:

Relevant experience in NLP Applications:

 

Regards

Ashwini

Job posted by
Suma Latha

Data Scientist

at Impetus Technologies

Founded 2005  •  Products & Services  •  1000-5000 employees  •  Profitable
Data Science
Pricing Strategy
Python
Predictive analytics
Pricing models
Machine Learning (ML)
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹35L / yr
Looking for a Data Scientist with strong expertise in classical machine learning algorithms and strong expertise in SQL and Python.
Experience in pricing models will be a definite plus; a minimal pricing-model sketch follows below.
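Since pricing models are called out as a plus, here is a small, hedged sketch of one classical approach: estimating a constant price elasticity with a log-log linear regression. The data is synthetic and the numbers are fabricated for illustration only.

```python
# Classical pricing sketch: estimate price elasticity of demand with a
# log-log linear regression on synthetic data (illustration only).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
price = rng.uniform(50, 200, size=500)
true_elasticity = -1.4
# Simulated demand: Q = A * P^elasticity * multiplicative noise
demand = 10_000 * price ** true_elasticity * rng.lognormal(0.0, 0.1, size=500)

X = np.log(price).reshape(-1, 1)
y = np.log(demand)

model = LinearRegression().fit(X, y)
print("estimated elasticity:", round(float(model.coef_[0]), 2))  # ~ -1.4
```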
Job posted by
Gangadhar T.M
Senior Data Architect

Apache Kafka
Kafka
Snowflake
Stored Procedures
Bengaluru (Bangalore), Hyderabad, Mumbai, Pune
10 - 19 yrs
₹15L - ₹40L / yr
Hi,
We are hiring a Senior Data Architect for a reputed company.
Experience required: 10 - 19 yrs
Skills required: hands-on experience with Kafka, stored procedures, and Snowflake (a minimal Kafka sketch follows below).
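As a minimal, hedged illustration of the Kafka skill mentioned above, the sketch below produces and consumes JSON messages with the kafka-python client. The broker address and topic name are placeholder assumptions, not details of the hiring company's setup.

```python
# Minimal Kafka producer/consumer sketch using the kafka-python client.
# Broker address and topic name are illustrative placeholders.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 123, "amount": 499.0})
producer.flush()

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    consumer_timeout_ms=5000,  # stop iterating if no new messages arrive
)
for message in consumer:
    print(message.value)
```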
Job posted by
Jyoti Sharma

Sr. Database Engineer

at Technology service company

Agency job
via Jobdost
Relational Database (RDBMS)
NOSQL Databases
NOSQL
Performance tuning
SQL
PostgreSQL
MongoDB
DynamoDB
Object Oriented Programming (OOPs)
Domain-driven design
Cloud Computing
Oracle
Data Analytics
Data modeling
Database Design
Remote only
5 - 10 yrs
₹10L - ₹20L / yr

Preferred Education & Experience:

  • Bachelor’s or Master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics or a related technical field, or equivalent practical experience; at least 3 years of relevant experience in lieu of the above if from a different stream of education.

  • Well-versed in and 5+ years of hands-on demonstrable experience with:
    ▪ Data Analysis & Data Modeling
    ▪ Database Design & Implementation
    ▪ Database Performance Tuning & Optimization
    ▪ PL/pgSQL & SQL

  • 5+ years of hands-on development experience in Relational Database (PostgreSQL/SQL Server/Oracle).

  • 5+ years of hands-on development experience in SQL and PL/pgSQL, including stored procedures, functions, triggers, and views (a minimal PL/pgSQL sketch follows this list).

  • Hands-on experience with demonstrable working experience in Database Design Principles, SQL Query Optimization Techniques, Index Management, Integrity Checks, Statistics, and Isolation levels

  • Hands-on experience with demonstrable working experience in Database Read & Write Performance Tuning & Optimization.

  • Knowledge of and experience working with Domain-Driven Design (DDD) concepts, Object-Oriented Programming (OOP) concepts, cloud architecture concepts, and NoSQL database concepts is an added advantage.

  • Knowledge and working experience in Oil & Gas, Financial, & Automotive Domains is a plus

  • Hands-on development experience in one or more NoSQL data stores such as Cassandra, HBase, MongoDB, DynamoDB, Elastic Search, Neo4J, etc. a plus.
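To make the stored-procedure/function requirement concrete, below is a small, hedged sketch that creates and calls a PL/pgSQL function from Python via psycopg2. The DSN, table, and function names are hypothetical placeholders.

```python
# Sketch: define and call a PL/pgSQL function from Python via psycopg2.
# DSN, table and function names are hypothetical placeholders.
import psycopg2

CREATE_FN = """
CREATE OR REPLACE FUNCTION order_count_since(p_since date)
RETURNS bigint
LANGUAGE plpgsql
AS $$
DECLARE
    v_count bigint;
BEGIN
    SELECT count(*) INTO v_count
    FROM orders
    WHERE created_at >= p_since;
    RETURN v_count;
END;
$$;
"""

with psycopg2.connect("dbname=app user=app password=secret host=localhost") as conn:
    with conn.cursor() as cur:
        cur.execute(CREATE_FN)
        cur.execute("SELECT order_count_since(%s);", ("2024-01-01",))
        print(cur.fetchone()[0])
```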

Job posted by
Riya Roy

MySQL DBA

at Ecosmob Technologies Pvt Ltd

Founded 2007  •  Products & Services  •  100-1000 employees  •  Profitable
MySQL
MySQL DBA
DBA
Remote only
2 - 6 yrs
₹3L - ₹13L / yr

MySQL DBA

 

Desired Candidate Profile

 

1. Good knowledge of MySQL architecture

2. Knowledge of MySQL replication (master-master and master-slave), Galera Cluster, and related troubleshooting (a minimal replication health-check sketch follows this list)

3. Must have knowledge of setting up MySQL clustering, tuning, and troubleshooting

4. Must have good knowledge of performance tuning of MySQL databases

5. Must have good knowledge of MySQL database upgrades

6. Installation and configuration of MySQL on Linux

7. Understanding of MySQL backup & recovery

8. Ability to multi-task and context-switch effectively between different activities and teams

9. Provide 24x7 support for critical production systems.

10. Excellent written and verbal communication.

11. Ability to organize and plan work independently.

12. Ability to work in a rapidly changing environment.
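As a small, hedged illustration of the replication monitoring implied by the list above, the sketch below reads replica status from MySQL with PyMySQL. The host and credentials are placeholders, and the exact status field names vary by MySQL version.

```python
# Sketch: check MySQL replica health with PyMySQL (host/credentials are placeholders).
# Field names such as Seconds_Behind_Master vary across MySQL versions.
import pymysql

conn = pymysql.connect(
    host="replica-host",
    user="monitor",
    password="secret",
    cursorclass=pymysql.cursors.DictCursor,
)
try:
    with conn.cursor() as cur:
        cur.execute("SHOW SLAVE STATUS")
        status = cur.fetchone()
        if status is None:
            print("No replication configured on this server.")
        else:
            print("IO thread running :", status.get("Slave_IO_Running"))
            print("SQL thread running:", status.get("Slave_SQL_Running"))
            print("Seconds behind    :", status.get("Seconds_Behind_Master"))
finally:
    conn.close()
```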

 

Job posted by
Kiran Sahani

Data Scientist

at CarWale

Data Science
Data Scientist
R Programming
Python
Machine Learning (ML)
Amazon Web Services (AWS)
Navi Mumbai, Mumbai
3 - 5 yrs
₹10L - ₹15L / yr

About CarWale: CarWale's mission is to bring delight to car buying. We offer a bouquet of reliable tools and services to help car consumers decide on buying the right car, at the right price, and from the right partner. CarWale has always strived to serve car buyers and owners in the most comprehensive and convenient way possible. We provide a platform where car buyers and owners can research, buy, sell and come together to discuss and talk about their cars. We aim to empower Indian consumers to make informed car buying and ownership decisions with exhaustive and unbiased information on cars through our expert reviews, owner reviews, detailed specifications and comparisons. We understand that a car is by and large the second-most expensive asset a consumer associates their lifestyle with! Together with CarTrade & BikeWale, we are the market leaders in the personal mobility media space.

About the Team: We are a bunch of enthusiastic analysts assisting all business functions with their data needs. We deal with huge but diverse datasets to find relationships, patterns and meaningful insights. Our goal is to help drive growth across the organization by creating a data-driven culture.

We are looking for an experienced Data Scientist who likes to explore opportunities and know their way around data to build world class solutions making a real impact on the business. 

 

Skills / Requirements –

  • 3-5 years of experience working on Data Science projects
  • Experience doing statistical modelling of big data sets
  • Expert in Python and R, with deep knowledge of ML packages
  • Expert in fetching data from SQL
  • Ability to present and explain data to management
  • Knowledge of AWS would be beneficial
  • Demonstrated structural and analytical thinking
  • Ability to structure and execute data science project end to end

 

Education –

Bachelor’s degree in a quantitative field (Maths, Statistics, Computer Science); a Master's degree is preferred.

 

Job posted by
Vanita Acharya
SQL Developer

PySpark
SQL
Data Warehouse (DWH)
ETL
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹12L - ₹20L / yr
SQL Developer with relevant experience of 7 years and strong communication skills.
 
Key responsibilities:
 
  • Create, design, and develop data models
  • Prepare plans for all ETL (Extract/Transform/Load) procedures and architectures
  • Validate results and create business reports
  • Monitor and tune data loads and queries
  • Develop and prepare a schedule for a new data warehouse
  • Analyze large databases and recommend appropriate optimizations
  • Administer all requirements and design various functional specifications for data
  • Provide support to the software development life cycle
  • Prepare various code designs and ensure efficient implementation of the same
  • Evaluate all code and ensure the quality of all project deliverables
  • Monitor data warehouse work and provide subject matter expertise
  • Hands-on BI practices, data structures, data modeling, and SQL skills
  • Minimum 1 year of experience in PySpark (a minimal ETL sketch follows this list)
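To illustrate the ETL and PySpark items above, here is a minimal, hedged PySpark sketch that reads raw CSV files, applies a simple transformation, and writes a partitioned Parquet table. The paths and column names are hypothetical placeholders.

```python
# Minimal PySpark ETL sketch: extract CSV, transform, load as partitioned Parquet.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV files with header and schema inference.
orders = spark.read.csv("/data/raw/orders/*.csv", header=True, inferSchema=True)

# Transform: keep completed orders and derive an order_date partition column.
completed = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write a partitioned Parquet table.
(
    completed.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("/data/warehouse/orders_completed")
)

spark.stop()
```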
Job posted by
Priyanka U

Data Engineer

at Mobile Programming LLC

Founded 1998  •  Services  •  100-1000 employees  •  Profitable
Big Data
Amazon Web Services (AWS)
Hadoop
SQL
Python
Scala
Linux/Unix
SQL server
Apache Hive
Spark
Remote, Chennai
3 - 7 yrs
₹12L - ₹18L / yr
Position: Data Engineer  
Location: Chennai- Guindy Industrial Estate
Duration: Full time role
Company: Mobile Programming (https://www.mobileprogramming.com/)
Client Name: Samsung 


We are looking for a Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.

Responsibilities for Data Engineer
  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

Qualifications for Data Engineer
  • Experience building and optimizing big data ETL pipelines, architectures and data sets.
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing and highly scalable ‘big data’ data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.

We are looking for a candidate with 3-6 years of experience in a Data Engineer role, who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
  • Experience with big data tools: Spark, Kafka, HBase, Hive, etc.
  • Experience with relational SQL and NoSQL databases
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift
  • Experience with stream-processing systems: Storm, Spark Streaming, etc. (a minimal structured-streaming sketch follows below)
  • Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc.

Skills: Big Data, AWS, Hive, Spark, Python, SQL
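As a hedged illustration of the stream-processing item above (not the actual pipeline of this client), the sketch below consumes a Kafka topic with Spark Structured Streaming and counts records per minute. The broker, topic, and checkpoint location are placeholders, and the matching spark-sql-kafka connector package must be on the classpath for this to run.

```python
# Sketch: consume a Kafka topic with Spark Structured Streaming and print per-minute counts.
# Broker, topic and checkpoint location are placeholders; the spark-sql-kafka
# connector package must be available for this to run.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka values arrive as bytes; cast to string and count records per one-minute window.
counts = (
    events
    .selectExpr("CAST(value AS STRING) AS value", "timestamp")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/events_stream")
    .start()
)
query.awaitTermination()
```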
 
Job posted by
vandana chauhan

Data Analyst

at PayU

Founded 2002  •  Product  •  500-1000 employees  •  Profitable
Python
R Programming
Data Analytics
R
Gurgaon, NCR (Delhi | Gurgaon | Noida)
1 - 3 yrs
₹7L - ₹15L / yr

What you will be doing:

As a part of the Global Credit Risk and Data Analytics team, this person will be responsible for carrying out the following analytical initiatives:

  • Dive into the data and identify patterns
  • Development of end-to-end Credit models and credit policy for our existing credit products
  • Leverage alternate data to develop best-in-class underwriting models
  • Working on Big Data to develop risk analytical solutions
  • Development of Fraud models and fraud rule engine
  • Collaborate with various stakeholders (e.g. tech, product) to understand and design best solutions which can be implemented
  • Working on cutting-edge techniques e.g. machine learning and deep learning models

Example of projects done in past:

  • Lazypay Credit Risk model using the CatBoost modelling technique; an end-to-end pipeline for feature engineering and model deployment in production using Python (a minimal CatBoost sketch follows this list)
  • Fraud model development, deployment and rules for EMEA region
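As a hedged, generic illustration of the CatBoost modelling technique mentioned above (not PayU's actual credit risk pipeline), the sketch below trains a CatBoostClassifier on synthetic tabular data with one categorical feature; all feature names and numbers are fabricated.

```python
# Generic CatBoost classification sketch on synthetic data (illustration only,
# not the actual Lazypay credit risk pipeline).
import numpy as np
import pandas as pd
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "monthly_income": rng.normal(50_000, 15_000, 2_000),
    "utilisation": rng.uniform(0, 1, 2_000),
    "employment_type": rng.choice(["salaried", "self_employed"], 2_000),
})
# Synthetic default flag loosely driven by utilisation.
y = (df["utilisation"] + rng.normal(0, 0.2, 2_000) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.25, random_state=7)

model = CatBoostClassifier(iterations=300, depth=6, learning_rate=0.1, verbose=False)
model.fit(X_train, y_train, cat_features=["employment_type"])

print("AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```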

 

Basic Requirements:

  • 1-3 years of work experience as a Data scientist (in Credit domain)
  • 2016 or 2017 batch from a premium college (e.g B.Tech. from IITs, NITs, Economics from DSE/ISI etc)
  • Strong problem-solving skills and the ability to understand and execute complex analyses
  • Experience in at least one of the languages - R/Python/SAS and SQL
  • Experience in the credit industry (fintech/bank)
  • Familiarity with the best practices of Data Science

 

Add-on Skills : 

  • Experience in working with big data
  • Solid coding practices
  • Passion for building new tools/algorithms
  • Experience in developing Machine Learning models
Job posted by
Deeksha Srivastava