Cutshort logo
Time series Jobs in Mumbai

Apply to 3+ Time series Jobs in Mumbai on CutShort.io. Explore the latest Time series Job opportunities across top companies like Google, Amazon & Adobe.

Lifespark Technologies

Posted by Amey Desai
Mumbai
1 - 3 yrs
₹4L - ₹9L / yr
TensorFlow
Machine Learning (ML)
Computer Vision
Deep Learning
Time series

Lifespark is looking for individuals with a passion for impacting real lives through technology. Lifespark is one of the most promising startups in the Assistive Tech space in India, and has been honoured with several national and international awards. Our mission is to create seamless, persistent and affordable healthcare solutions. If you are someone who is driven to make a real impact in this world, we are your people.

Lifespark is currently building solutions for Parkinson’s Disease, and we are looking for an ML lead to join our growing team. You will be working directly with the founders on high-impact problems in the Neurology domain. You will be solving some of the most fundamental and exciting challenges in the industry and will have the ability to see your insights turned into real products every day.

 

Essential experience and requirements:

1. Advanced knowledge in the domains of computer vision and deep learning

2. Solid understanding of Statistical / Computational concepts like Hypothesis Testing, Statistical Inference, Design of Experiments, and production-level ML system design

3. Experienced with proper project workflow

4. Good at collating multiple datasets (potentially from different sources)

5. Good understanding of setting up production-level data pipelines

6. Ability to independently develop and deploy ML systems to various platforms (local and cloud)

7. Strong fundamentals in time-series data analysis, cleaning, featurization, and visualization

8. Fundamental understanding of model and system explainability

9. Proactive at constantly unlearning and relearning

10. Documentation ninja - can understand others' documentation as well as create good documentation
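To illustrate the kind of time-series featurization point 7 refers to, here is a minimal sketch; the window size and signal values are hypothetical, not part of the role:

```python
# Minimal sketch: rolling-window featurization of a 1-D time series.
# Window size and the sensor readings below are illustrative assumptions.
from statistics import mean, stdev

def rolling_features(series, window=3):
    """Return (mean, std) feature pairs for each full window of `series`."""
    feats = []
    for i in range(len(series) - window + 1):
        w = series[i:i + window]
        feats.append((mean(w), stdev(w)))
    return feats

# Hypothetical wearable-sensor readings
signal = [1.0, 1.2, 0.9, 1.1, 1.3]
features = rolling_features(signal)
```

Each window yields one feature vector; in practice, domain-specific features (frequency-domain statistics, gait cadence, and so on) would be layered on top.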

 

Responsibilities :

1. Develop and deploy ML-based systems built upon healthcare data in the Neurological domain

2. Maintain deployed systems and upgrade them through online learning

3. Develop and deploy advanced online data pipelines
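Responsibility 2 mentions upgrading deployed systems through online learning, i.e. updating a model incrementally as new data arrives. A toy sketch of that idea, using a single-feature linear model trained by per-sample SGD (the learning rate, model form, and data stream are invented for the example):

```python
# Sketch: online learning as per-sample SGD updates to a linear model.
# Learning rate, data stream, and model form are illustrative assumptions.

def online_update(w, b, x, y, lr=0.05):
    """One SGD step on squared error for the prediction w * x + b."""
    err = (w * x + b) - y
    return w - lr * err * x, b - lr * err

w, b = 0.0, 0.0
stream = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] * 500  # samples from y = 2x
for x, y in stream:
    w, b = online_update(w, b, x, y)
# w drifts toward 2.0 and b toward 0.0 as samples accumulate
```

A deployed system would wrap updates like this with validation and rollback safeguards rather than applying them blindly.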

Dolat Capital Market Private Ltd.
Mumbai
0 - 4 yrs
₹10L - ₹20L / yr
Quantitative analyst
Analytical Skills
Probability
Statistical Analysis
Time series

Dolat Capital is a multi-strategy trading firm, dedicated to producing superior returns by adhering to mathematical and statistical techniques. We trade actively in all asset classes (equities, futures, options, commodities, currencies and fixed income), taking advantage of our ultra-low-latency infrastructure, which is built in C++ and is among the most competitive in the market.

 

The Quantitative Research team at Dolat Capital focuses on quantitative trading, specifically fully automated trading systems. We develop proprietary algorithms by using mathematical techniques and statistical analysis to identify patterns and behavior of markets.

 

The incumbent will be responsible for designing, developing and implementing computational algorithms/strategies that consistently generate sizable profits in live markets within a high-frequency trading environment.
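As a toy illustration of what a fully automated signal can look like (this is not Dolat's strategy; the window lengths and prices are invented), consider a simple moving-average crossover:

```python
# Toy sketch: moving-average crossover signal. Window lengths and the
# price series are illustrative assumptions, not a real strategy.
from statistics import mean

def crossover_signal(prices, fast=3, slow=5):
    """Return +1 (buy), -1 (sell), or 0 (hold) for the latest price."""
    if len(prices) < slow:
        return 0
    f = mean(prices[-fast:])
    s = mean(prices[-slow:])
    return 1 if f > s else -1 if f < s else 0

uptrend = crossover_signal([100, 101, 102, 103, 104])    # fast avg above slow
downtrend = crossover_signal([104, 103, 102, 101, 100])  # fast avg below slow
```

Production strategies differ mainly in scale: they run on tick-level data under strict latency budgets and combine many such signals with rigorous statistical validation.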

Egnyte

Posted by Prasanth Mulleti
Remote, Mumbai
4 - 10 yrs
Best in industry
Data Science
data scientist
Machine Learning (ML)
Time series
QoS

Job Description

We are looking for an experienced engineer to join our data science team, who will help us design, develop, and deploy machine learning models in production. You will develop robust models and prepare their deployment into production in a controlled manner, while providing appropriate means to monitor their performance and stability after deployment.

 

What you’ll do will include (but not be limited to):

  • Preparing datasets needed to train and validate our machine learning models
  • Anticipating and building solutions for problems that interrupt availability, performance, and stability in our systems, services, and products at scale
  • Defining and implementing metrics to evaluate the performance of the models, both for computing performance (such as CPU & memory usage) and for ML performance (such as precision, recall, and F1)
  • Supporting the deployment of machine learning models on our infrastructure, including containerization, instrumentation, and versioning
  • Supporting the whole lifecycle of our machine learning models, including gathering data for retraining, A/B testing, and redeployments
  • Developing, testing, and evaluating tools for machine learning model deployment, monitoring, and retraining
  • Working closely within a distributed team to analyze and apply innovative solutions over billions of documents
  • Supporting solutions ranging from rule-based approaches and classical ML techniques to the latest deep learning systems
  • Partnering with cross-functional team members to bring large-scale data engineering solutions to production
  • Communicating your approach and results to a wider audience through presentations
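The metrics item above mentions precision, recall, and F1. A minimal pure-Python sketch of how those are computed from binary predictions (in practice one would typically reach for scikit-learn's metrics module; the label vectors below are invented):

```python
# Sketch: precision, recall, and F1 from binary ground truth and predictions.
# The label vectors used at the bottom are illustrative assumptions.

def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = precision_recall_f1([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```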

Your Qualifications:

  • Demonstrated success with machine learning in a SaaS or Cloud environment, with hands-on knowledge of model creation and deployment in production at scale
  • Good knowledge of traditional machine learning methods and neural networks
  • Experience with practical machine learning modeling, especially time-series forecasting, analysis, and causal inference
  • Experience with data mining algorithms and statistical modeling techniques for anomaly detection in time series, such as clustering, classification, ARIMA, and decision trees, is preferred
  • Ability to implement data import, cleansing, and transformation functions at scale
  • Fluency in Docker and Kubernetes
  • Working knowledge of relational and dimensional data models, with appropriate visualization techniques such as PCA
  • Solid English skills to effectively communicate with other team members
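On the anomaly-detection qualification: ARIMA- and clustering-based detectors require dedicated libraries, but the underlying idea can be sketched with a simple rolling z-score test (the window, threshold, and data below are invented for the example):

```python
# Sketch: rolling z-score anomaly detection on a time series. A point is
# flagged when it deviates from the mean of the preceding window by more
# than k standard deviations. Window, k, and data are illustrative.
from statistics import mean, stdev

def zscore_anomalies(series, window=5, k=3.0):
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma and abs(series[i] - mu) > k * sigma:
            anomalies.append(i)
    return anomalies

data = [10, 11, 10, 12, 11, 10, 50, 11, 10]
flagged = zscore_anomalies(data)
```

An ARIMA-based detector follows the same pattern, but replaces the rolling mean with the forecast of a fitted autoregressive model before thresholding the residual.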

 

Due to the nature of the role, it would be nice if you also have:

  • Experience with large datasets and distributed computing, especially with the Google Cloud Platform
  • Fluency in at least one deep learning framework: PyTorch, TensorFlow / Keras
  • Experience with NoSQL and Graph databases
  • Experience working in a Colab, Jupyter, or Python notebook environment
  • Some experience with monitoring, analysis, and alerting tools like New Relic, Prometheus, and the ELK stack
  • Knowledge of the Java, Scala, or Go programming languages
  • Familiarity with KubeFlow
  • Experience with transformers, for example the Hugging Face libraries
  • Experience with OpenCV

 

About Egnyte

In a content-critical age, Egnyte fuels business growth by enabling content-rich business processes, while also providing organizations with visibility and control over their content assets. Egnyte’s cloud-native content services platform leverages the industry’s leading content intelligence engine to deliver a simple, secure, and vendor-neutral foundation for managing enterprise content across business applications and storage repositories. More than 16,000 customers trust Egnyte to enhance employee productivity, automate data management, and reduce file-sharing cost and complexity. Investors include Google Ventures, Kleiner Perkins Caufield & Byers, and Goldman Sachs. For more information, visit www.egnyte.com.

 

