Decision trees Jobs in Mumbai


Apply to 2+ Decision trees Jobs in Mumbai on CutShort.io. Explore the latest Decision trees Job opportunities across top companies like Google, Amazon & Adobe.

Matchmaking platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai
2 - 5 yrs
₹15L - ₹28L / yr
Data Science
Python
Natural Language Processing (NLP)
MySQL
Machine Learning (ML)
+15 more

Review Criteria

  • Strong Data Scientist / Machine Learning / AI Engineer profile
  • 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
  • Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc. (a minimal sketch follows this list)
  • Hands-on experience in at least two of the following use cases: recommendation systems, image data, fraud/risk detection, price modelling, propensity models
  • Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
  • Experience productionizing ML models through APIs/CI/CD/Docker and working in AWS or GCP environments
  • Preferred: candidates from product companies
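
For orientation only, here is a minimal sketch of the classical-ML fluency described above, using scikit-learn (an assumed library choice; the posting does not name a specific stack):

```python
# Illustrative only: two classical models from the list above, compared with cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset for the example
for name, model in [
    ("logistic regression", LogisticRegression(max_iter=5_000)),
    ("decision tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean 5-fold CV accuracy = {acc:.3f}")
```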

 

Job Specific Criteria

  • CV Attachment is mandatory
  • What's your current company?
  • Which use cases do you have hands-on experience with?
  • Are you open to working from Mumbai (if currently based outside Mumbai)?
  • Reason for change (if candidate has been in current company for less than 1 year)?
  • Reason for hike (if greater than 25%)?

 

Role & Responsibilities

  • Partner with Product to spot high-leverage ML opportunities tied to business metrics.
  • Wrangle large structured and unstructured datasets; build reliable features and data contracts.
  • Build and ship models to:
      • Enhance customer experiences and personalization
      • Boost revenue via pricing/discount optimization
      • Power user-to-user discovery and ranking (matchmaking at scale)
      • Detect and block fraud/risk in real time
      • Score conversion/churn/acceptance propensity for targeted actions
  • Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
  • Design and run A/B tests with guardrails.
  • Build monitoring for model/data drift and business KPIs (a rough drift-check sketch follows this list).
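
As a rough illustration of the drift monitoring mentioned in the last bullet, a small population-stability-index (PSI) check; the bin count and the 0.2 alert threshold are common rules of thumb, not values taken from this posting:

```python
# Hypothetical drift check: compare a live feature sample against its training distribution.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between reference and live samples of one feature."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_frac = np.histogram(expected, cuts)[0] / len(expected)
    a_frac = np.histogram(np.clip(actual, cuts[0], cuts[-1]), cuts)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0) for empty bins
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 10_000)   # feature values seen at training time
live_feature = rng.normal(0.3, 1.1, 2_000)     # slightly shifted live traffic
if psi(train_feature, live_feature) > 0.2:     # 0.2 is a common rule-of-thumb alert level
    print("feature drift detected; consider retraining or checking upstream data")
```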


Ideal Candidate

  • 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
  • Proven, hands-on success in at least two (preferably 3–4) of the following:
      • Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
      • Fraud/risk detection (severe class imbalance, PR-AUC; see the sketch after this list)
      • Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
      • Propensity models (payment/churn)
  • Programming: strong Python and SQL; solid git, Docker, CI/CD.
  • Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
  • ML breadth: recommender systems, NLP or user profiling, anomaly detection.
  • Communication: clear storytelling with data; can align stakeholders and drive decisions.
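
To make the fraud/risk bullet above concrete, a minimal sketch of PR-AUC evaluation under heavy class imbalance; the synthetic data, the ~1% positive rate, and the gradient-boosting model are illustrative assumptions, not details from the posting:

```python
# Illustrative only: score an imbalanced fraud-style classifier with PR-AUC (average precision).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

# Synthetic data with roughly 1% positives, mimicking typical fraud rates.
X, y = make_classification(n_samples=20_000, n_features=20, weights=[0.99, 0.01], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.25, random_state=42)

model = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print(f"PR-AUC: {average_precision_score(y_test, scores):.3f}")
```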



Egnyte

Posted by Prasanth Mulleti
Remote, Mumbai
4 - 10 yrs
Best in industry
Data Science
Data Scientist
Machine Learning (ML)
Time series
QoS
+7 more

Job Description

We are looking for an experienced engineer to join our data science team to help us design, develop, and deploy machine learning models in production. You will develop robust models, prepare their controlled deployment into production, and provide the means to monitor their performance and stability after deployment.

 

What You’ll Do (including, but not limited to):

  • Preparing datasets needed to train and validate our machine learning models
  • Anticipating and building solutions for problems that interrupt availability, performance, and stability in our systems, services, and products at scale
  • Defining and implementing metrics to evaluate the performance of the models, both for computing performance (such as CPU and memory usage) and for ML performance (such as precision, recall, and F1; a small example follows this list)
  • Supporting the deployment of machine learning models on our infrastructure, including containerization, instrumentation, and versioning
  • Supporting the whole lifecycle of our machine learning models, including gathering data for retraining, A/B testing, and redeployments
  • Developing, testing, and evaluating tools for machine learning model deployment, monitoring, and retraining
  • Working closely within a distributed team to analyze and apply innovative solutions over billions of documents
  • Supporting solutions ranging from rule-based systems and classical ML techniques to the latest deep learning systems
  • Partnering with cross-functional team members to bring large scale data engineering solutions to production
  • Communicating your approach and results to a wider audience through presentations
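
As a small, hypothetical example of the ML-performance side of the metrics bullet above, precision, recall, and F1 computed with scikit-learn; the labels and predictions are made up for illustration:

```python
# Illustrative only: ML-performance metrics for a binary classifier's predictions.
from sklearn.metrics import precision_recall_fscore_support

y_true = [0, 0, 1, 1, 1, 0, 1, 0, 1, 1]  # ground-truth labels (made up)
y_pred = [0, 1, 1, 1, 0, 0, 1, 0, 1, 1]  # model predictions (made up)
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```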

Your Qualifications:

  • Demonstrated success with machine learning in a SaaS or Cloud environment, with hands-on knowledge of model creation and deployments in production at scale
  • Good knowledge of traditional machine learning methods and neural networks
  • Experience with practical machine learning modeling, especially time-series forecasting, analysis, and causal inference.
  • Experience with data mining algorithms and statistical modeling techniques for anomaly detection in time series, such as clustering, classification, ARIMA, and decision trees, is preferred (a minimal ARIMA-based sketch follows this list).
  • Ability to implement data import, cleansing and transformation functions at scale
  • Fluency in Docker, Kubernetes
  • Working knowledge of relational and dimensional data models with appropriate visualization techniques such as PCA.
  • Solid English skills to effectively communicate with other team members
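
For the time-series anomaly-detection bullet above, a minimal ARIMA-residual sketch; statsmodels, the (2, 0, 1) order, and the 3-sigma threshold are illustrative assumptions rather than requirements from this posting:

```python
# Illustrative only: flag points whose ARIMA residual exceeds a z-score threshold.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 20, 300)) + rng.normal(0, 0.1, 300)
series[150] += 2.0                        # inject a synthetic anomaly

model = ARIMA(series, order=(2, 0, 1)).fit()
resid = model.resid
z = (resid - resid.mean()) / resid.std()  # standardize residuals
anomalies = np.where(np.abs(z) > 3)[0]    # 3-sigma rule of thumb
print("anomalous indices:", anomalies)
```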

 

Due to the nature of the role, it would be nice if you also have:

  • Experience with large datasets and distributed computing, especially with the Google Cloud Platform
  • Fluency in at least one deep learning framework: PyTorch, TensorFlow / Keras
  • Experience with NoSQL and graph databases
  • Experience working in a Colab, Jupyter, or Python notebook environment
  • Some experience with monitoring, analysis, and alerting tools like New Relic, Prometheus, and the ELK stack
  • Knowledge of the Java, Scala, or Go programming languages
  • Familiarity with KubeFlow
  • Experience with transformers, for example the Hugging Face libraries
  • Experience with OpenCV

 

About Egnyte

In a content-critical age, Egnyte fuels business growth by enabling content-rich business processes, while also providing organizations with visibility and control over their content assets. Egnyte’s cloud-native content services platform leverages the industry’s leading content intelligence engine to deliver a simple, secure, and vendor-neutral foundation for managing enterprise content across business applications and storage repositories. More than 16,000 customers trust Egnyte to enhance employee productivity, automate data management, and reduce file-sharing cost and complexity. Investors include Google Ventures, Kleiner Perkins Caufield & Byers, and Goldman Sachs. For more information, visit www.egnyte.com.

 

#LI-Remote
