Senior Product Analyst

at AYM Marketing Management

Posted by Stephen FitzGerald
Remote only
2 - 8 yrs
₹10L - ₹25L / yr
Full time
Skills
SQL Server
Power BI
Data Analytics
Data Visualization
Tableau
QlikView
Spotfire
Python
Data architecture
Mobile applications
ETL
Teamwork
Analytical Skills
Problem solving
Corporate Communications
Google Analytics

Senior Product Analyst

Pampers Start Up Team

India / Remote Working

 

 

Team Description

Our internal team focuses on app development, with data a growing area within the structure. We have a clear vision and strategy spanning App Development, Data, Testing, Solutions and Operations. The data team sits across the UK and India, while other teams are based in Dubai, Lebanon, Karachi and various cities in India.

 

Role Description

In this role you will use a range of tools and technologies, working primarily on data design, data governance, reporting and analytics for the Pampers App.

 

This is a unique opportunity for an ambitious candidate to join a growing business where they will get exposure to a diverse set of assignments, can contribute fully to the growth of the business and where there are no limits to career progression and reward.

 

Responsibilities

● Act as the Data Steward and drive governance, maintaining a full understanding of all the data that flows from the apps to downstream systems

● Work with the campaign team on data fixes when campaign issues arise

● Investigate and troubleshoot issues with the product and campaigns, providing clear root-cause analysis (RCA) and impact analysis

● Document data, create data dictionaries and be the “go to” person for understanding what data flows where

● Build dashboards and reports using Amplitude and Power BI, and present them to key stakeholders

● Carry out ad hoc data investigations into app issues, querying data in BigQuery/SQL/Cosmos DB, and present the findings back

● Translate analytics into clear PowerPoint decks with actionable insights

● Write clear documentation on processes

● Innovate with new processes or ways of providing analytics and reporting

● Help the data lead find new ways of adding value

 

 

Requirements

● Bachelor’s degree and a minimum of 4 years’ experience in an analytical role, preferably in product analytics with consumer app data

● Strong SQL Server and Power BI skills required

● Experience with most or all of these tools: SQL Server, Python, Power BI, BigQuery

● Understanding of mobile app data (events, CTAs, screen views, etc.)

● Knowledge of data architecture and ETL

● Experience in analyzing customer behavior and providing insightful recommendations

● Self-starter, with a keen interest in technology and highly motivated towards success

● Must be proactive and prepared to present in meetings

● Must show initiative and a desire to learn business subjects

● Able to work independently and provide updates to management

● Strong analytical and problem-solving capabilities with meticulous attention to detail

● Proven teamwork and communication skills

● Experience working in a fast-paced, “start-up like” environment
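To give a flavour of the ad hoc investigations this role involves, here is a minimal sketch of querying mobile app event data in Python. The `app_events` table and its columns are hypothetical, and SQLite stands in for BigQuery/SQL Server:

```python
import sqlite3

# In-memory database standing in for the app's analytics store.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE app_events (
        user_id INTEGER,
        event_name TEXT,   -- e.g. screen_view, cta_click
        screen TEXT,
        event_date TEXT
    )
""")
conn.executemany(
    "INSERT INTO app_events VALUES (?, ?, ?, ?)",
    [
        (1, "screen_view", "home", "2022-03-01"),
        (1, "cta_click", "home", "2022-03-01"),
        (2, "screen_view", "home", "2022-03-01"),
        (2, "screen_view", "rewards", "2022-03-02"),
        (3, "cta_click", "home", "2022-03-02"),
        (3, "cta_click", "rewards", "2022-03-02"),
    ],
)

# Typical ad hoc question: which screens drive the most CTA clicks,
# and how many distinct users clicked?
rows = conn.execute("""
    SELECT screen,
           COUNT(*) AS clicks,
           COUNT(DISTINCT user_id) AS users
    FROM app_events
    WHERE event_name = 'cta_click'
    GROUP BY screen
    ORDER BY clicks DESC
""").fetchall()

print(rows)  # [('home', 2, 2), ('rewards', 1, 1)]
```

A similar query against BigQuery or SQL Server would differ only in connection setup and SQL dialect details.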

 

Desirable

  • Knowledge of mobile analytics tools (Segment, Amplitude, Adjust, Braze and Google Analytics)
  • Knowledge of loyalty data

About AYM Marketing Management

Founded
2016
Type
Products & Services
Size
20-100 employees
Stage
Profitable

Similar jobs

Data Engineer

at Slintel

Agency job
via Qrata
Big Data
ETL
Apache Spark
Spark
Data engineer
Data engineering
Linux/Unix
MySQL
Python
Amazon Web Services (AWS)
Bengaluru (Bangalore)
4 - 9 yrs
₹20L - ₹28L / yr
Responsibilities
  • Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse.
  • Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs.
  • Assemble large, complex data sets from third-party vendors to meet business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology.
  • Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems.

Requirements
  • 5+ years of experience in a Data Engineer role.
  • Proficiency in Linux.
  • Must have SQL knowledge and experience working with relational databases, including query authoring, as well as familiarity with MySQL, MongoDB, Cassandra, and Athena.
  • Must have experience with Python/Scala.
  • Must have experience with Big Data technologies like Apache Spark.
  • Must have experience with Apache Airflow.
  • Experience with data pipeline and ETL tools like AWS Glue.
  • Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
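The extract-transform-load flow these requirements describe can be sketched in plain Python; the file contents and field names below are invented for illustration, with SQLite standing in for a warehouse such as Redshift (a real pipeline would typically use Spark, Airflow, or AWS Glue):

```python
import csv
import io
import sqlite3

# Extract: raw CSV as it might arrive from a third-party vendor.
raw = io.StringIO("user_id,amount\n1, 10.5 \n2,3.0\n1,4.5\n")
records = list(csv.DictReader(raw))

# Transform: cast types and strip stray whitespace.
cleaned = [(int(r["user_id"]), float(r["amount"].strip())) for r in records]

# Load: write the cleaned rows into a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (user_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO payments VALUES (?, ?)", cleaned)

totals = conn.execute(
    "SELECT user_id, SUM(amount) FROM payments GROUP BY user_id"
).fetchall()
print(totals)  # [(1, 15.0), (2, 3.0)]
```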
Job posted by
Prajakta Kulkarni

Senior data scientist

at Compile

Founded 2011  •  Product  •  20-100 employees  •  Bootstrapped
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Neural networks
Scikit-Learn
Python
Healthcare
Bengaluru (Bangalore)
2 - 3 yrs
Best in industry

With 30B+ medical and pharmacy claims covering 300M+ US patients, Compile Data helps life science companies generate actionable insights across different stages of a drug's lifecycle. Through context-driven record-linking and machine-learning algorithms, Compile's platform transforms messy and disparate datasets into an intuitive graph of healthcare providers and all their activities.

 

Responsibilities:

  • Help build intelligent systems to cleanse and record-link healthcare data from over 200 sources
  • Build tools and ML modules to generate insights from hard-to-analyse healthcare data, and help solve various business needs of large pharma companies
  • Mentor and grow a data science team

Requirements:
  • 2-3 years of experience in building ML models, preferably in healthcare
  • Experience with neural networks and ML algorithms; solved problems using panel and transactional data
  • Experience working on record-linking problems and NLP approaches towards text normalization and standardization is a huge plus
  • Proven experience as an ML Lead, worked in Python or R; with experience in developing big-data ML solutions at scale and integration with production software systems
  • Ability to craft context around key business requirements and present ideas in business and user friendly language
Job posted by
Mithesh J C
SQL
Data engineering
Java
ETL
ELT
Python
Scala
Big Data
Hadoop
Spark
Hyderabad
4 - 9 yrs
₹20L - ₹26L / yr

Responsibilities and Tasks:

  • Understand the Business Problem and the Relevant Data

  • Maintain an intimate understanding of company and department strategy

  • Translate analysis requirements into data requirements

  • Identify and understand the data sources that are relevant to the business problem

  • Develop conceptual models that capture the relationships within the data

  • Define the data-quality objectives for the solution

  • Be a subject matter expert in data sources and reporting options

 

Architect Data Management Systems:

  • Use understanding of the business problem and the nature of the data to select the appropriate data management system (Big Data, Cloud DW, Cloud (GCP/AWS/Azure), OLTP, OLAP, etc.)

  • Design and implement optimum data structures in the appropriate data management system (Cloud DW, Cloud (GCP/AWS/Azure), Hadoop, SQL Server/Oracle, etc.) to satisfy the data requirements

  • Plan methods for archiving/deletion of information

 

Develop, Automate, and Orchestrate an Ecosystem of ETL Processes for Varying Volumes of Data:

  • Identify and select the optimum methods of access for each data source (real-time/streaming, delayed, static)

  • Determine transformation requirements and develop processes to bring structured and unstructured data from the source to a new physical data model

  • Develop processes to efficiently load the transformed data into the data management system

 

Prepare Data to Meet Analysis Requirements

  • Work with the data scientist to implement strategies for cleaning and preparing data for analysis (e.g., outliers, missing data, etc.)

  • Develop and code data extracts

  • Follow standard methodologies to ensure data quality and data integrity

  • Ensure that the data is fit to use for data science applications

 

Qualifications and Experience:

  • 5 - 9 years of experience developing, delivering, and/or supporting data engineering, advanced analytics or business intelligence solutions

  • Experienced in developing ETL/ELT processes using Apache NiFi and Snowflake

  • Significant experience with big data processing and/or developing applications and data pipelines via Hadoop, Yarn, Hive, Spark, Pig, Sqoop, MapReduce, HBase, Flume, etc.

  • Data engineering and analytics on Google Cloud Platform using BigQuery, Cloud Storage, Cloud SQL, Cloud Pub/Sub, Cloud Dataflow, Cloud Composer, etc., or an equivalent cloud platform

  • Familiarity with software architecture (data structures, data schemas, etc.)

  • Strong working knowledge of databases (Oracle, MSSQL, etc.) including SQL and NoSQL.

  • Strong mathematics background, analytical, problem solving, and organizational skills

  • Strong communication skills (written, verbal and presentation)

  • Experience working in a global, multi-functional environment

  • Minimum of 2 years’ experience in any of the following: at least one high-level, object-oriented language (e.g. Java/Python/Perl/Scala); at least one web programming language (PHP, MySQL, Python, Perl, JavaScript, etc.); one or more data extraction tools (Apache NiFi/Informatica/Talend, etc.)

  • Software development using programming languages like Python/Java/Scala

  • Ability to travel as needed

Job posted by
sameer N

Survey Analytics

at Leading Management Consulting Firm

Python
R Programming
SAS
Surveying
Data Analytics
SQL
SPSS
Bengaluru (Bangalore), Gurugram
1 - 7 yrs
₹4L - ₹10L / yr
Desired Skills & Mindset:

We are looking for candidates who have demonstrated both a strong business sense and deep understanding of the quantitative foundations of modelling.

• Excellent analytical and problem-solving skills, including the ability to disaggregate issues, identify root causes and recommend solutions
• Experience with statistical programming software, particularly SPSS, and comfort working with large data sets
• R, Python, SAS & SQL are preferred but not mandatory
• Excellent time management skills
• Good written and verbal communication skills; understanding of both written and spoken English
• Strong interpersonal skills
• Ability to act autonomously, bringing structure and organization to work
• Creative and action-oriented mindset
• Ability to interact in a fluid, demanding and unstructured environment where priorities evolve constantly, and methodologies are regularly challenged
• Ability to work under pressure and deliver on tight deadlines

Qualifications and Experience:

• Graduate degree in Statistics/Economics/Econometrics/Computer Science/Engineering/Mathematics/MBA (with a strong quantitative background) or equivalent
• Strong track record of work experience in business intelligence, market research, and/or advanced analytics
• Knowledge of data collection methods (focus groups, surveys, etc.)
• Knowledge of statistical packages (SPSS, SAS, R, Python, or similar), databases, and MS Office (Excel, PowerPoint, Word)
• Strong analytical and critical thinking skills
• Industry experience in Consumer Experience/Healthcare a plus
Job posted by
Jayaraj E
PySpark
Data engineering
Big Data
Hadoop
Spark
Apache Hive
ETL
.NET
Microsoft Windows Azure
PowerBI
Apache Kafka
Chennai
5 - 13 yrs
₹9L - ₹28L / yr
  • Demonstrable experience owning and developing big data solutions using Hadoop, Hive/HBase, Spark, Databricks and ETL/ELT for 5+ years
  • 10+ years of Information Technology experience, preferably with telecom / wireless service providers
  • Experience in designing data solutions following Agile practices (SAFe methodology); designing for testability, deployability and releasability; rapid prototyping, data modeling, and decentralized innovation
  • DataOps mindset: allowing the architecture of a system to evolve continuously over time, while simultaneously supporting the needs of current users
  • Create and maintain the Architectural Runway and non-functional requirements
  • Design for the Continuous Delivery Pipeline (CI/CD data pipeline), enabling built-in quality and security from the start
  • Able to demonstrate an understanding, and ideally use, of at least one recognised architecture framework or standard, e.g. TOGAF, Zachman Architecture Framework, etc.
  • The ability to apply data, research, and professional judgment and experience to ensure our products are making the biggest difference to consumers
  • Demonstrated ability to work collaboratively
  • Excellent written, verbal and social skills; you will be interacting with all types of people (user experience designers, developers, managers, marketers, etc.)
  • Ability to work in a fast-paced, multiple-project environment on an independent basis and with minimal supervision
  • Technologies: .NET, AWS, Azure; Azure Synapse, NiFi, RDS, Apache Kafka, Azure Databricks, Azure Data Lake Storage, Power BI, Reporting Analytics, QlikView, SQL on-prem data warehouse; BSS, OSS & Enterprise Support Systems

Job posted by
Srikanth a

My SQL Database Engineer

at MOBtexting

Founded 2012  •  Services  •  20-100 employees  •  Profitable
MySQL
MySQL DBA
Data architecture
SQL
Cassandra
MongoDB
Bengaluru (Bangalore)
3 - 4 yrs
₹5L - ₹6L / yr

Job Description

 

Experience: 3+ yrs

We are looking for a MySQL DBA who will be responsible for ensuring the performance, availability, and security of clusters of MySQL instances. You will also be responsible for database design and architecture, orchestrating upgrades, backups, and provisioning of database instances. You will work in tandem with the other teams, preparing documentation and specifications as required.

 

Responsibilities:

Database design and data architecture

Provision MySQL instances, both in clustered and non-clustered configurations

Ensure performance, security, and availability of databases

Prepare documentation and specifications

Handle common database procedures, such as upgrade, backup, recovery, migration, etc.

Profile server resource usage, optimize and tweak as necessary

 

Skills and Qualifications:

Proven expertise in database design and data architecture for large scale systems

Strong proficiency in MySQL database management

Decent experience with recent versions of MySQL

Understanding of MySQL's underlying storage engines, such as InnoDB and MyISAM

Experience with replication configuration in MySQL

Knowledge of de-facto standards and best practices in MySQL

Proficient in writing and optimizing SQL statements

Knowledge of MySQL features, such as its event scheduler

Ability to plan resource requirements from high level specifications

Familiarity with other SQL/NoSQL databases such as Cassandra, MongoDB, etc.

Knowledge of limitations in MySQL and their workarounds in contrast to other popular relational databases
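As one small illustration of the statement-optimization skill above, the sketch below shows an index changing a query plan; SQLite's `EXPLAIN QUERY PLAN` is used here as a lightweight stand-in for MySQL's `EXPLAIN`, and the `messages` table is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, sender TEXT, body TEXT)"
)
conn.executemany(
    "INSERT INTO messages (sender, body) VALUES (?, ?)",
    [(f"user{i % 100}", "hello") for i in range(1000)],
)

query = "SELECT COUNT(*) FROM messages WHERE sender = 'user7'"

# Without an index the engine must scan the whole table.
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# An index on the filtered column lets the engine search instead of scan.
conn.execute("CREATE INDEX idx_messages_sender ON messages (sender)")
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(before[0][3])  # e.g. 'SCAN messages'
print(after[0][3])   # e.g. 'SEARCH messages USING COVERING INDEX ...'
```

In MySQL the same exercise would compare `EXPLAIN` output (`type: ALL` versus `type: ref`) before and after adding the index.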

Job posted by
Nandhini Beke

Data Engineer

at TIGI HR Solution Pvt. Ltd.

Founded 2014  •  Services  •  employees  •  Profitable
Data engineering
Hadoop
Big Data
Python
SQL
Amazon Web Services (AWS)
Windows Azure
Mumbai, Bengaluru (Bangalore), Pune, Hyderabad, Noida
2 - 5 yrs
₹10L - ₹17L / yr
Position: Data Engineer
Employee strength: around 600 across India
Working days: 5 days
Working time: flexible
Salary: 30-40% hike on current CTC
Work from home for now.
 
Job description:
  • Design, implement and support an analytical data infrastructure, providing ad hoc access to large data sets and computing power.
  • Contribute to development of standards and the design and implementation of proactive processes to collect and report data and statistics on assigned systems.
  • Research opportunities for data acquisition and new uses for existing data.
  • Provide technical development expertise for designing, coding, testing, debugging, documenting and supporting data solutions.
  • Experience building data pipelines to connect analytics stacks, client data visualization tools and external data sources.
  • Experience with cloud and distributed systems principles
  • Experience with Azure/AWS/GCP cloud infrastructure
  • Experience with Databricks Clusters and Configuration
  • Experience with Python, R, sh/bash and JVM-based languages including Scala and Java.
  • Experience with Hadoop family languages including Pig and Hive.
Job posted by
Rutu Lakhani

Data Scientist

at Global Petrochemicals & Metals Company

Agency job
via Unnati
Data Science
Data Scientist
R Programming
Python
Data Analytics
SQL
Tableau
Remote only
3 - 8 yrs
₹60L - ₹84L / yr
Our client is a private-sector-owned joint-stock industrial company in Saudi Arabia that aims to advance economic diversification in the country. It is one of the largest chemical companies and also the world’s biggest investor in titanium dioxide. The 3-decade-old company provides employment to over 3400 professionals and markets its products to most parts of the world.
 
Apart from their expertise in petrochemicals and advanced metals, they manage several R&D-related activities that cover turnkey solutions, testing, product certifications and training, all supporting sustainability and profitability for their company and clients. The team is led by a Stanford and Princeton alumnus who holds master’s degrees in Business as well as Nuclear Engineering. The other board members are alumni of prestigious engineering schools across the world, with immense knowledge and experience, and a tremendous background in innovation and technology.
 
As a Data Scientist, you will be analyzing large amounts of raw information to find patterns that will help improve the company. Your goal will be to help the company analyze trends to make better decisions.
 
What you will do:
 
  • Identifying valuable data sources and automating collection processes
  • Undertaking preprocessing of structured and unstructured data
  • Analyzing large amounts of information to discover trends and patterns
  • Building predictive models and machine-learning algorithms
  • Combining models through ensemble modeling
  • Presenting information using data visualization techniques
  • Proposing solutions and strategies to business challenges
  • Collaborating with engineering and product development teams

 


Candidate Profile:

What you need to have:

 
  • Data Scientist with a minimum of 3 years of experience in Analytics or Data Science, preferably in Pricing or the Polymer Market
  • Experience using scripting languages like Python (preferred) or R is a must
  • Experience with SQL and Tableau is good to have
  • Strong numerical, problem-solving and analytical aptitude
  • Able to make data-based decisions
  • Ability to present/communicate analytics-driven insights
  • Critical and analytical thinking skills
Job posted by
Gayatri Joshi

Data Scientist

at Simplifai Cognitive Solutions Pvt Ltd

Founded 2017  •  Product  •  100-500 employees  •  Bootstrapped
Data Science
Machine Learning (ML)
Python
Big Data
SQL
Natural Language Processing (NLP)
Deep Learning
chatbots
Pune
3 - 8 yrs
₹5L - ₹30L / yr
Job Description for Data Scientist/NLP Engineer

Responsibilities:

• Work with customers to identify opportunities for leveraging their data to drive business solutions.
• Develop custom data models and algorithms to apply to data sets.
• Perform basic data cleaning and annotation for any incoming raw data.
• Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting and other business outcomes.
• Develop the company A/B testing framework and test model quality.
• Deploy ML models in production.

Qualifications:

• BS or MS in Computer Science, Engineering, or a related discipline.
• 3+ years of experience in Data Science/Machine Learning.
• Experience with the Python programming language.
• Familiarity with at least one database query language, such as SQL.
• Knowledge of text classification and clustering, question answering and query understanding, search indexing and fuzzy matching.
• Excellent written and verbal communication skills for coordinating across teams.
• Willingness to learn and master new technologies and techniques.
• Knowledge of and experience in statistical and data mining techniques: GLM/regression, random forest, boosting, trees, text mining, NLP, etc.
• Experience with chatbots would be a bonus but is not required.
Job posted by
Vipul Tiwari

ML Researcher

at Oil & Energy Industry

Machine Learning (ML)
Data Science
Deep Learning
Digital Signal Processing
Statistical signal processing
Python
Big Data
Linux/Unix
OpenCV
TensorFlow
Keras
NCR (Delhi | Gurgaon | Noida)
1 - 3 yrs
₹8L - ₹12L / yr
  • Understanding business objectives and developing models that help to achieve them, along with metrics to track their progress
  • Managing available resources such as hardware, data, and personnel so that deadlines are met
  • Analysing the ML algorithms that could be used to solve a given problem and ranking them by their success probability
  • Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world
  • Verifying data quality, and/or ensuring it via data cleaning
  • Supervising the data acquisition process if more data is needed
  • Defining validation strategies
  • Defining the pre-processing or feature engineering to be done on a given dataset
  • Defining data augmentation pipelines
  • Training models and tuning their hyperparameters
  • Analysing the errors of the model and designing strategies to overcome them
  • Deploying models to production
Job posted by
Susmita Mishra