NoBroker
Product Analyst - Insights
Posted by Safeer Ahmed
0 - 2 yrs
₹8L - ₹11L / yr
Bengaluru (Bangalore)
Skills
SQL
Data Analytics
Python
MS-Excel


About The Position:

The core of our Real Estate Tech ecosystem is the products we build and the data that powers them. With a large number of products and business functions in place, data flows into our backend at massive scale: transactions, click events, uploads, updates, geospatial events, and more arrive in large volumes and varieties. Constant monitoring of KPIs across all product and business functions is integral to our success. As a Product Analyst, you will be at the heart of driving this success.

You will work closely with Product and Data Science to define, report, and track KPIs. You will breathe data: you will communicate with the business, understand problems, answer questions, and create strategic solutions grounded in data. Your numbers and dashboards will drive business decisions, guide product enhancements, and track product metrics.

An ideal candidate possesses strong analytical capability on big data, matched by product and business acumen and fluid communication. They should have a strong quantitative/technical background, natural curiosity, and the ability to persuade senior leadership with numbers and strategic insights.

If you can talk numbers with people who run the shop, you are the one we are looking for.

What will you bring:

- Strong quantitative aptitude with a keen product inclination

- Passion for consumer products with an equal affection for data

- Undergraduate and/or graduate degree in Math, Economics, Statistics, Engineering, Computer Science, or other quantitative fields

- Proven track record in quantitative analytics

- Proficient in SQL - the language of Data

- Strong foundations in Statistics - the grammar of Data

- Proficient in at least one of the following analytics tools - Pandas/R (Not Excel)

- Fluid scripting abilities with Python

- Strong understanding of the grammar of Data Visualization with the ability to make expressive reports and dashboards

- Strong communication skills, both verbal and written

- Strong understanding of experimentation methods (hypothesis testing, product experimentation, regressions, experimentation logic, and biases)

- Strong understanding of product growth optimization methods: tracking activity funnels, drop-off rates, and outbound communication strategies such as push notifications, recommendations, and emails (see the sketch after this list)

- Ability to function smoothly in a collaborative environment with Product, Business & Data Sciences
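To make the expectations above concrete, here is a minimal, hypothetical sketch of the kind of day-to-day analysis this role involves: computing funnel drop rates from click-event data with Pandas and running a two-proportion z-test on an A/B experiment. The column names, funnel steps, and figures are illustrative assumptions, not NoBroker data.

# Hypothetical sketch: funnel drop rates and an A/B test on illustrative data.
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

# Illustrative click-event log: one row per (user, funnel_step).
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 4],
    "step":    ["search", "view_listing", "contact_owner",
                "search", "view_listing",
                "search", "view_listing", "contact_owner",
                "search"],
})

# Users reaching each step; drop rate = share lost relative to the previous step.
funnel = (events.groupby("step")["user_id"].nunique()
                .reindex(["search", "view_listing", "contact_owner"]))
drop_rate = 1 - funnel / funnel.shift(1)
print(funnel)
print(drop_rate.round(2))

# Two-proportion z-test on a hypothetical experiment:
# 120/1000 control users vs 150/1000 variant users contacted an owner.
stat, p_value = proportions_ztest(count=[120, 150], nobs=[1000, 1000])
print(f"z = {stat:.2f}, p = {p_value:.3f}")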

What do we have to offer?

- An exciting and challenging working environment, with passionate and enthusiastic people, that builds an entrepreneurial mindset.

- Be part of a start-up from the very beginning, work directly with founders, lead your area of expertise, build kickass products, and be a part of this exciting growth journey of changing the world.

- Best-in-class salary

We are an equal opportunity employer and value diversity at our company. We do not discriminate based on caste, race, religion, color, national origin, gender, sexual orientation, age, marital status, or disability status.
About NoBroker

Founded: 2014
Size: 100-1000
Stage: Raised funding

NoBroker is a new and disruptive force in the Real Estate Industry. We’re a site that’s built to let you buy, sell, rent, find a PG or a flatmate WITHOUT paying any brokerage.

 

Our mission is to lead India's real estate industry towards an era of doing real estate transactions in a convenient and brokerage-free manner. We currently save our customers over 250 crores per year in brokerage. NoBroker was founded by alumni from IIT Bombay, IIT Kanpur, and IIM Ahmedabad in March 2014 and has since served over 35 lakh customers. As a VC-funded company, we've raised 20M+ across a couple of funding rounds. We're a team of 350 people driven by passion: the passion to help you fulfil your housing requirement without paying a hefty brokerage.

 

NoBroker has worked tirelessly to remove all information asymmetry caused by brokers. We also enable owners and tenants to interact with each other directly by using our technologically advanced platform. Our world-class services include:

1. Verified brokerage-free properties for buyers and tenants

2. Quick brokerage-free tenants and buyers for property owners

3. Benefit-rich services including online rental agreements and dedicated relationship managers

 

Our app (70 lakh+ downloads) and our website serve 4 cities at present – Bangalore, Mumbai, Pune, and Chennai. Our rapid growth means that we will keep expanding to more cities shortly.

 

Are you looking for huge work independence, passionate peers, a steep learning curve, a meritocratic work culture, a massive growth environment with loads of fun, best-in-class salary, and ESOPs? Just apply to our jobs below :-)

Connect with the team
Rawani Bassi
Priyanka Shukla
vanshika
Agnibesh Nayak
noor aqsa
Safeer Ahmed
Shrushtee Makwana
Nandini D
Deepa R
Tushar Kant
Sweta Pattnaik
Pavithra M
rajesh v
Tulika Kansal
Rajul Jain
Aparna Kulkarni
Ragini Soni
Sravani N S
Madhan G
Laxmi Thapa
Isha Ansari
Saba Parween
Kakoli Sinha
swati khandualo
Sushanth s
ATHUL S

Similar jobs

Antuit
Posted by Purnendu Shakunt
Bengaluru (Bangalore)
8 - 12 yrs
₹25L - ₹30L / yr
Data Science
Machine Learning (ML)
Artificial Intelligence (AI)
Data Scientist
Python
+9 more

About antuit.ai

 

Antuit.ai is the leader in AI-powered SaaS solutions for Demand Forecasting & Planning, Merchandising and Pricing. We have the industry’s first solution portfolio – powered by Artificial Intelligence and Machine Learning – that can help you digitally transform your Forecasting, Assortment, Pricing, and Personalization solutions. World-class retailers and consumer goods manufacturers leverage antuit.ai solutions, at scale, to drive outsized business results globally with higher sales, margin and sell-through.

 

Antuit.ai's executives, industry leaders from McKinsey, Accenture, IBM, and SAS, together with our team of Ph.D.s, data scientists, technologists, and domain experts, are passionate about delivering real value to our clients. Antuit.ai is funded by Goldman Sachs and Zodius Capital.

 

The Role:

 

Antuit.ai is hiring a Principal Data Scientist. This person will help stand up the standardization and automation ecosystem for ML product delivery, and will also actively participate in managing the implementation, design, and tuning of the product to meet business needs.

 

Responsibilities:

 

Responsibilities include, but are not limited to, the following:

 

  • Manage and provide technical expertise to the delivery team. This includes recommending solution alternatives, identifying risks, and managing business expectations.
  • Design and build reliable, scalable automated processes for large-scale machine learning.
  • Use engineering expertise to help design solutions to novel problems in software development, data engineering, and machine learning.
  • Collaborate with Business, Technology, and Product teams to stand up the MLOps process.
  • Apply your experience in making intelligent, forward-thinking technical decisions to deliver the ML ecosystem, including implementing new standards, architecture designs, and workflow tools.
  • Deep dive into complex algorithmic and product issues in production.
  • Own metrics and reporting for the delivery team.
  • Set a clear vision for team members and work cohesively to attain it.
  • Mentor and coach team members.


Qualifications and Skills:

 

Requirements

  • Engineering degree in any stream.
  • At least 7 years of prior experience building ML-driven products/solutions.
  • Excellent programming skills in at least one of C++, Python, or Java.
  • Hands-on experience with open-source libraries and frameworks such as TensorFlow, PyTorch, MLflow, KubeFlow, etc. (see the sketch after this list).
  • Has developed and productized large-scale models/algorithms in prior roles.
  • Can drive fast prototypes/proofs of concept when evaluating technologies, frameworks, and performance benchmarks.
  • Familiar with software development practices/pipelines (DevOps: Kubernetes, Docker containers, CI/CD tools).
  • Good verbal, written, and presentation skills.
  • Ability to learn new skills and technologies.
  • 3+ years working with retail or CPG preferred.
  • Experience in forecasting and optimization problems, particularly in the CPG/Retail industry, preferred.
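As a purely illustrative sketch of the MLOps standardization described above (not antuit.ai's actual stack), the snippet below logs parameters, metrics, and a model artifact for a training run with MLflow; the experiment name, model, and data are assumptions.

# Hypothetical sketch: tracking a training run with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demand-forecasting-poc")  # illustrative experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)                 # reproducible hyperparameters
    mlflow.log_metric("accuracy", accuracy)   # comparable metric across runs
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact

Runs logged this way can be compared side by side in the MLflow UI, which is one common way delivery teams standardize how models move from experiment to production.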

 

Information Security Responsibilities

 

  • Understand and adhere to Information Security policies, guidelines, and procedures, and practice them to protect organizational data and information systems.
  • Take part in Information Security training and act accordingly when handling information.
  • Report all suspected security and policy breaches to the Infosec team or the appropriate authority (CISO).

EEOC

 

Antuit.ai is an at-will, equal opportunity employer.  We consider applicants for all positions without regard to race, color, religion, national origin or ancestry, gender identity, sex, age (40+), marital status, disability, veteran status, or any other legally protected status under local, state, or federal law.
Remote only
3 - 8 yrs
₹20L - ₹26L / yr
Airflow
Amazon Redshift
Amazon Web Services (AWS)
Java
ETL
+4 more
  • Experience with cloud-native data tools/services such as AWS Athena, AWS Glue, Redshift Spectrum, AWS EMR, AWS Aurora, BigQuery, Bigtable, S3, etc.
  • Strong programming skills in at least one of the following languages: Java, Scala, C++.
  • Familiarity with a scripting language like Python as well as Unix/Linux shells.
  • Comfortable with multiple AWS components including RDS, AWS Lambda, AWS Glue, AWS Athena, EMR. Equivalent tools in the GCP stack will also suffice.
  • Strong analytical skills and advanced SQL knowledge, including indexing and query-optimization techniques.
  • Experience implementing software around data processing, metadata management, and ETL pipeline tools like Airflow.

Experience with the following software/tools is highly desired:

  • Apache Spark, Kafka, Hive, etc.
  • SQL and NoSQL databases like MySQL, Postgres, DynamoDB.
  • Workflow management tools like Airflow (see the sketch after this list).
  • AWS cloud services: RDS, AWS Lambda, AWS Glue, AWS Athena, EMR.
  • Familiarity with Spark programming paradigms (batch and stream processing).
  • RESTful API services.
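As a rough illustration of the workflow-management experience listed above, here is a minimal, hypothetical Airflow 2.x DAG with two dependent tasks; the DAG id, schedule, and task bodies are placeholder assumptions rather than part of this role's actual stack.

# Hypothetical sketch: a minimal daily ETL DAG in Apache Airflow 2.x.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw records from a source system.
    print("extracting raw data")


def load():
    # Placeholder: write transformed records to the warehouse.
    print("loading into the warehouse")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # load runs only after extract succeeds
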
Fintech Company
Agency job
via Jobdost by Sathish Kumar
Bengaluru (Bangalore)
4 - 6 yrs
₹10L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark

Purpose of Job:
We are looking for an exceptionally talented senior data engineer with experience implementing AWS services to build data pipelines, API integrations, and data warehouse designs.

 

Job Responsibilities:
• Total 4+ years of experience as a Data Engineer.
• A minimum of 3 years of AWS Cloud experience.
• Well versed in languages such as Python, PySpark, SQL, NodeJS, etc.
• Extensive experience in the Spark ecosystem, covering both real-time and batch processing (see the sketch after this list).
• Experience in AWS Glue, EMR, DMS, Lambda, S3, DynamoDB, Step Functions, Airflow, RDS, Aurora, etc.
• Experience with modern database systems such as Redshift, Presto, Hive, etc.
• Has built data lakes in the past on S3 or Apache Hudi.
• Solid understanding of data warehousing concepts.
• Good to have: experience with tools such as Kafka or Kinesis.
• Good to have: AWS Developer Associate or Solutions Architect Associate certification.
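For illustration only, the sketch below shows the shape of a PySpark batch job of the kind described above: reading raw JSON events, aggregating them, and writing Parquet to a curated zone. The paths, bucket names, and columns are hypothetical, not this company's actual pipeline.

# Hypothetical sketch: a small PySpark batch aggregation job.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-transaction-rollup").getOrCreate()

# Placeholder path; a real job would read from the data lake's raw zone.
events = spark.read.json("s3://example-raw-bucket/transactions/2023-01-01/")

daily_totals = (
    events
    .filter(F.col("status") == "SUCCESS")            # keep completed transactions
    .groupBy("merchant_id")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("txn_count"))
)

daily_totals.write.mode("overwrite").parquet(
    "s3://example-curated-bucket/daily_totals/2023-01-01/"
)

spark.stop()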


Qualifications:
At least a bachelor's degree in Science, Engineering, or Applied Mathematics.
Other requirements: a learning attitude and ownership skills.

Dhwani Rural Information Systems
Sunandan Madan
Posted by Sunandan Madan
Gurugram
5 - 10 yrs
₹8L - ₹14L / yr
Job Overview -
Dhwani is looking for an experienced (5-10 years) MySQL database administrator who will be responsible for ensuring the performance, availability, and security of clusters of MySQL instances. The person will also be responsible for orchestrating upgrades, backups, and provisioning database instances. He/She will also work in tandem with the other teams, preparing documentation and specifications as required.

Job Responsibilities
1. Provision MySQL instances, both in clustered and non-clustered configurations
2. Ensure performance, security, and availability of databases
3. Prepare documentation and specifications
4. Handle common database procedures, such as upgrade, backup, recovery, migration, etc.
5. Profile server resource usage, optimize and tweak as necessary
6. Collaborate with other team members and stakeholders
7. Write queries and generate reports

Required Skills
1. Strong proficiency in MySQL database management, with solid experience on recent versions of MySQL.
2. Understanding of MySQL's underlying storage engines, such as InnoDB and MyISAM.
3. Experience with replication configuration in MySQL (see the sketch after this list).
4. Knowledge of de-facto standards and best practices in MySQL.
5. Proficient in writing and optimizing SQL statements.
6. Knowledge of MySQL features, such as its event scheduler.
7. Ability to plan resource requirements from high-level specifications.
8. Familiarity with other SQL/NoSQL databases, along with monitoring tools such as MaxScale and ProxySQL.
9. Working in a Linux environment is a must.
10. Knowledge of Docker is an advantage.
11. Knowledge of limitations in MySQL and their workarounds in contrast to other popular relational databases.
12. Should be hands-on at writing complex queries and generating reports as per requirements.
13. Experience in handling multi-location databases.
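As a small, hypothetical illustration of the replication and query-tuning skills above, the snippet below uses mysql-connector-python to check replica health and inspect a query plan; the connection details, table, and query are placeholders, not Dhwani's systems.

# Hypothetical sketch: inspecting replication health and a query plan in MySQL.
import mysql.connector

conn = mysql.connector.connect(
    host="db.example.internal",  # placeholder host and credentials
    user="dba",
    password="change-me",
    database="appdb",
)
cur = conn.cursor(dictionary=True)

# Replication health: thread state and lag on a replica (MySQL 8.0.22+ syntax;
# older versions use SHOW SLAVE STATUS with differently named columns).
cur.execute("SHOW REPLICA STATUS")
for row in cur.fetchall():
    print(row.get("Replica_IO_Running"), row.get("Seconds_Behind_Source"))

# Query tuning: confirm an index is actually used before shipping a report query.
cur.execute("EXPLAIN SELECT id, total FROM orders WHERE customer_id = 42")
for row in cur.fetchall():
    print(row.get("key"), row.get("rows"))

cur.close()
conn.close()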

Education
Bachelor's degree in an analytics-related field, such as information technology, science, or an engineering discipline.
Mumbai
2 - 5 yrs
₹2L - ₹8L / yr
Data Warehouse (DWH)
Informatica
ETL
Microsoft Windows Azure
Big Data
+1 more
1. Responsible for the evaluation of cloud strategy and program architecture
2. Responsible for gathering system requirements, working together with application architects and owners
3. Responsible for generating scripts and templates required for the automatic provisioning of resources
4. Discover standard cloud services offerings; install and execute processes and standards for optimal use of cloud service provider offerings
5. Incident management on IaaS, PaaS, SaaS
6. Responsible for debugging technical issues inside a complex stack involving virtualization, containers, microservices, etc.
7. Collaborate with the engineering teams to enable their applications to run on cloud infrastructure
8. Experience with OpenStack, Linux, Amazon Web Services, Microsoft Azure, DevOps, NoSQL, etc. will be a plus
9. Design, implement, configure, and maintain various Azure IaaS, PaaS, and SaaS services
10. Deploy and maintain Azure IaaS virtual machines and Azure application and networking services
11. Optimize Azure billing for cost/performance (VM optimization, reserved instances, etc.)
12. Implement and fully document IT projects
13. Identify improvements to IT documentation, network architecture, processes/procedures, and tickets
14. Research products and new technologies to increase the efficiency of business and operations
15. Keep all tickets and projects updated and track time in a detailed format
16. Should be able to multi-task and work across a range of projects and issues with various timelines and priorities
Technical:
  • Minimum 1 year of experience with Azure and knowledge of Office 365 services preferred
  • Formal education in IT preferred
  • Experience with the Managed Service business model is a major plus
• Bachelor’s degree preferred
STP Research
Vivek Tyagi
Posted by Vivek Tyagi
Gaziabad, Vaishali, Noida
0 - 2 yrs
₹2.5L - ₹3.5L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+3 more

Hi,


We are looking for a young and passionate data analyst. The candidate should have knowledge of SPSS and other analysis tools.

Latent Bridge Pvt Ltd
Posted by Mansoor Khan
Remote only
3 - 7 yrs
₹5L - ₹20L / yr
MicroStrategy administration
Amazon Web Services (AWS)
Business Intelligence (BI)
MSTR

Familiar with the MicroStrategy architecture; Admin Certification preferred

· Familiar with administrative functions: using Object Manager and Command Manager, installation/configuration of MSTR in a clustered architecture, applying patches and hot-fixes

· Monitor and manage existing Business Intelligence development/production systems

· MicroStrategy installation, upgrade, and administration on Windows and Linux platforms

· Ability to support and administer multi-tenant MicroStrategy infrastructure, including server security troubleshooting and general system maintenance

· Analyze application and system logs during troubleshooting and root-cause analysis

· Work on operations such as deploying and managing packages, user management, schedule management, governing-settings best practices, and database instance and security configuration

· Monitor, report, and investigate solutions to improve report performance

· Continuously improve the platform through tuning, optimization, governance, automation, and troubleshooting

· Provide support for the platform, report execution and implementation, the user community, and data investigations

· Identify improvement areas in environment hosting and upgrade processes

· Identify automation opportunities and participate in automation implementations

· Provide on-call support for Business Intelligence issues

· Experience working on MSTR 2021, including knowledge of Enterprise Manager and new features like Platform Analytics, HyperIntelligence, Collaboration, MSTR Library, etc.

· Familiar with AWS and Linux scripting

· Knowledge of MSTR Mobile

· Knowledge of capacity planning and systems' scaling needs

MEDTEK DOT AI
Posted by Shilpi Sharma
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 10 yrs
₹20L - ₹45L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark

Hiring for GCP-compliant cloud data lake solutions for clinical trials for a US-based pharmaceutical company.

Summary

This is a key position within the Data Sciences and Systems organization, responsible for data systems and related technologies. The role will be part of the Amazon Web Services (AWS) Data Lake strategy, roadmap, and AWS architecture for data systems and technologies.

Essential/Primary Duties, Functions and Responsibilities

The essential duties and responsibilities of this position are as follows:

  • Collaborate with data science and systems leaders and other stakeholders to roadmap, structure, prioritize and execute on AWS data engineering requirements.
  • Work closely with the IT organization and other functions to make sure that business needs and requirements, IT processes, and regulatory compliance requirements are met.
  • Build the AWS infrastructure required for optimal extraction, transformation, and loading of data from vendor-site clinical data sources using AWS big data technologies (see the sketch after this list).
  • Create and maintain optimal AWS data pipeline architecture
  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
  • Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product
  • Work with data and analytics experts to strive for greater functionality in our data systems
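Purely as an illustration of the AWS pipeline work referenced above, the sketch below uses boto3 to land a raw extract in S3 and trigger an AWS Glue job; the bucket, key, file, and job names are hypothetical placeholders, not this team's actual resources.

# Hypothetical sketch: landing a raw file in S3 and triggering a Glue ETL job.
import boto3

s3 = boto3.client("s3")
glue = boto3.client("glue")

# 1. Land the raw vendor extract in the data lake's raw zone (placeholder names).
s3.upload_file(
    Filename="/tmp/site_visits_2023-01-01.csv",
    Bucket="example-clinical-raw",
    Key="vendor_x/site_visits/2023-01-01.csv",
)

# 2. Start the Glue job that cleans and conforms the extract into the curated zone.
run = glue.start_job_run(
    JobName="conform-site-visits",             # placeholder Glue job name
    Arguments={"--ingest_date": "2023-01-01"},
)
print("started Glue run:", run["JobRunId"])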

Requirements

  • A minimum of a bachelor's degree in Computer Science, Mathematics, Statistics, or a related discipline is required. A master's degree is preferred. A minimum of 6-8 years of technical management experience is required. Equivalent experience may be accepted.
  • Experience with data lake and/or data warehouse implementation is required
  • Minimum bachelor's degree in Computer Science, Computer Engineering, Mathematical Engineering, Information Systems, or related fields
  • Project experience with visualization tools (AWS, Tableau, RStudio, Power BI, R Shiny, D3.js) and databases. Experience with Python, R, or SAS coding is a big plus.
  • Experience with AWS-based S3, Lambda, and Step Functions.
  • Strong team player who can work effectively in a collaborative, fast-paced, multi-tasking environment
  • Solid analytical and technical skills and the ability to exchange innovative ideas
  • Quick learner, passionate about continuously developing your skills and knowledge
  • Ability to solve data-acquisition problems using AWS
  • Ability to work in an interdisciplinary environment; able to interpret and translate very abstract, technical approaches into healthcare- and business-relevant solutions
DemandMatrix
Posted by Harwinder Singh
Remote only
9 - 12 yrs
₹25L - ₹30L / yr
Big Data
PySpark
Apache Hadoop
Spark
Python
+3 more

Only a solid grounding in computer engineering, Unix, data structures and algorithms would enable you to meet this challenge.

7+ years of experience architecting, developing, releasing, and maintaining large-scale big data platforms on AWS or GCP

Understanding of how Big Data tech and NoSQL stores like MongoDB, HBase/HDFS, ElasticSearch synergize to power applications in analytics, AI and knowledge graphs

Understanding of how data processing models, data location patterns, disk I/O, network I/O, and shuffling affect large-scale text processing: feature extraction, searching, etc.

Expertise with a variety of data processing systems, including streaming, event, and batch (Spark,  Hadoop/MapReduce)

5+ years proficiency in configuring and deploying applications on Linux-based systems

5+ years of experience with Spark, especially PySpark, for transforming large non-structured text data and creating highly optimized pipelines (see the sketch after these requirements)

Experience with RDBMS, ETL techniques and frameworks (Sqoop, Flume) and big data querying tools (Pig, Hive)

A stickler for world-class best practices, uncompromising on the quality of engineering, who understands standards and reference architectures, is steeped in the Unix philosophy, and has an appreciation of big data design patterns, orthogonal code design, and functional computation models
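To ground the PySpark text-processing expectation above, here is a small hypothetical sketch that tokenizes raw text lines and counts term frequencies; the input path and thresholds are illustrative assumptions, not DemandMatrix's pipeline.

# Hypothetical sketch: tokenizing raw text and counting terms with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("text-term-counts").getOrCreate()

# Placeholder input: one raw document per line.
docs = spark.read.text("/data/raw_docs/*.txt")

term_counts = (
    docs
    .select(F.lower(F.col("value")).alias("text"))
    .select(F.regexp_replace("text", r"[^a-z\s]", " ").alias("text"))  # strip punctuation
    .select(F.explode(F.split("text", r"\s+")).alias("term"))          # one row per token
    .filter(F.length("term") > 2)                                      # drop short tokens
    .groupBy("term")
    .count()
    .orderBy(F.desc("count"))
)

term_counts.show(20)
spark.stop()
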
Dataweave Pvt Ltd
Posted by Pramod Shivalingappa S
Bengaluru (Bangalore)
5 - 7 yrs
Best in industry
Python
Data Science
R Programming
(Senior) Data Scientist Job Description

About us
DataWeave provides Retailers and Brands with “Competitive Intelligence as a Service” that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.

Data Science@DataWeave
We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are: innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org, applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.

How do we work?
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web. At serious scale! 

What do we offer?
● Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
● Ability to see the impact of your work and the value you're adding to our customers almost immediately.
● Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
● A culture of openness. Fun work environment. A flat hierarchy. Organization-wide visibility. Flexible working hours.
● Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
● Last but not least, competitive salary packages and fast-paced growth opportunities.

Who are we looking for?
The ideal candidate is a strong software developer or a researcher with experience building and shipping production-grade data science applications at scale. Such a candidate has a keen interest in liaising with the business and product teams to understand a business problem and translate it into a data science problem.

You are also expected to develop capabilities that open up new business productization opportunities.

We are looking for someone with a Master's degree and 1+ years of experience working on problems in NLP or Computer Vision.

If you have 4+ years of relevant experience with a Master's degree (PhD preferred), you will be considered for a senior role.

Key problem areas
● Preprocessing and feature extraction of noisy and unstructured data, both text and images.
● Keyphrase extraction, sequence labeling, and entity-relationship mining from texts in different domains.
● Document clustering, attribute tagging, data normalization, classification, summarization, and sentiment analysis (see the sketch after this list).
● Image-based clustering and classification, segmentation, object detection, extracting text from images, generative models, and recommender systems.
● Ensemble approaches for all the above problems using multiple text- and image-based techniques.
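As an illustrative sketch of the document-clustering work above (not DataWeave's actual pipeline), the snippet below vectorizes a handful of product titles with TF-IDF and clusters them with k-means; the sample strings and cluster count are assumptions.

# Hypothetical sketch: clustering short product texts with TF-IDF + k-means.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

titles = [
    "apple iphone 13 128gb blue",
    "iphone 13 pro max 256gb",
    "samsung galaxy s22 ultra",
    "galaxy s22 5g 128gb",
    "nike running shoes men",
    "adidas mens running shoe",
]

# Character n-grams are fairly robust to the messy, abbreviated text in product feeds.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
X = vectorizer.fit_transform(titles)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

for title, label in zip(titles, kmeans.labels_):
    print(label, title)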

Relevant set of skills
● Have a strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms and complexity.
● Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
● Excellent coding skills on multiple programming languages with experience building production grade systems. Prior experience with Python is a bonus.
● Experience building and shipping machine learning models that solve real world engineering problems. Prior experience with deep learning is a bonus.
● Experience building robust clustering and classification models on unstructured data (text, images, etc). Experience working with Retail domain data is a bonus.
● Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
● Experience working with a variety of tools and libraries for machine learning and visualization, including numpy, matplotlib, scikit-learn, Keras, PyTorch, Tensorflow.
● Use the command line like a pro. Be proficient in Git and other essential software development tools.
● Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
● Be a self-starter: someone who thrives in fast paced environments with minimal 'management'.
● It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show off some of your projects you have hosted on GitHub.

Role and responsibilities
● Understand the business problems we are solving. Build data science capability that align with our product strategy.
● Conduct research. Do experiments. Quickly build throw away prototypes to solve problems pertaining to the Retail domain.
● Build robust clustering and classification models in an iterative manner that can be used in production.
● Constantly think scale, think automation. Measure everything. Optimize proactively.
● Take end to end ownership of the projects you are working on. Work with minimal supervision.
● Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
● Take initiatives to build new capabilities. Develop business awareness. Explore productization opportunities.
● Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
● Stay on top of latest research in deep learning, NLP, Computer Vision, and other relevant areas.