Machine Learning Scientist
Posted by Misbah Munir
5 - 10 yrs
₹10L - ₹20L / yr
Remote only
Skills
Machine Learning (ML)
Artificial Intelligence (AI)
Natural Language Processing (NLP)
Decision trees
Deep Learning
Neural networks
TensorFlow
Keras
Python
PyTorch

About Turing:

Turing enables U.S. companies to hire the world’s best remote software engineers. 100+ companies including those backed by Sequoia, Andreessen, Google Ventures, Benchmark, Founders Fund, Kleiner, Lightspeed, and Bessemer have hired Turing engineers. For more than 180,000 engineers across 140 countries, we are the preferred platform for finding remote U.S. software engineering roles. We offer a wide range of full-time remote opportunities for full-stack, backend, frontend, DevOps, mobile, and AI/ML engineers.

We are growing fast (our revenue 15x’d in the past 12 months and is accelerating), and we have raised $14M in seed funding, one of the largest seed rounds in Silicon Valley (https://tcrn.ch/3lNKbM9), from:

  • Facebook’s 1st CTO and Quora’s Co-Founder (Adam D’Angelo)
  • Executives from Google, Facebook, Square, Amazon, and Twitter
  • Foundation Capital (investors in Uber, Netflix, Chegg, Lending Club, etc.)
  • Cyan Banister
  • Founder of Upwork (Beerud Sheth)

We also raised a much larger round of funding in October 2020 that we will be publicly announcing over the coming month. 

Turing is led by successful repeat founders Jonathan Siddharth and Vijay Krishnan, whose last A.I. company leveraged elite remote talent and had a successful acquisition (Techcrunch story: https://techcrunch.com/2017/02/23/revcontent-acquires-rover/). Turing’s leadership team is composed of ex-Engineering and Sales leadership from Facebook, Google, Uber, and Capgemini.


About the role:

Software developers from all over the world have taken 200,000+ tests and interviews on Turing. Turing has also recommended thousands of developers to its customers and collected feedback in the form of customer interview pass/fail data and data on the success of each collaboration with U.S. customers. This generates a massive proprietary dataset with a rich feature set comprising resume and test/interview features, with labels in the form of actual customer feedback. Continuing rapid growth in our business creates an ever-increasing data advantage for us.

 

We are looking for a Machine Learning Scientist who can help solve a whole range of exciting and valuable machine learning problems at Turing. Turing collects many valuable, heterogeneous signals about software developers, including their resume, their GitHub profile and associated code, fine-grained signals from Turing’s own screening tests and interviews (spanning areas such as Computer Science fundamentals, project ownership and collaboration, communication skills, proactivity, and tech-stack skills), their history of successful collaboration with different companies on Turing, and more.

A Machine Learning Scientist at Turing will help create deep developer profiles that accurately represent a developer’s strengths and weaknesses as they relate to the probability of being successfully matched with one of Turing’s partner companies and having a fruitful long-term collaboration. The ML Scientist will build models that rank developers for different jobs based on their probability of success at the job.
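As an illustration of the kind of ranking model involved, here is a minimal sketch assuming a gradient-boosted classifier over hypothetical resume/test features and synthetic data (the feature names are illustrative, not Turing's actual pipeline):

```python
# Illustrative sketch only: train a gradient-boosted classifier on synthetic
# developer features and rank candidates by predicted probability of a
# successful match. Feature names and data are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical features: [years_experience, test_score, interview_score, github_activity]
X = rng.random((500, 4))
# Hypothetical label: 1 if a past engagement was successful, else 0
y = (0.3 * X[:, 1] + 0.5 * X[:, 2] + 0.2 * rng.random(500) > 0.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Rank a new pool of candidates for a job by predicted success probability
candidates = rng.random((10, 4))
scores = model.predict_proba(candidates)[:, 1]
ranking = np.argsort(-scores)  # index of the best candidate first
print(ranking, scores[ranking])
```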

     
You will also help make Turing’s tests more efficient by assessing how well they predict the probability of a successful match between a developer and at least one company. The prior probability of a registered developer getting matched with a customer is about 1%. We want our tests to adaptively reduce uncertainty as steeply as possible and move this probability estimate rapidly toward either 0% or 100%; in other words, to maximize expected information gain per unit time.
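To make the objective concrete, here is a rough sketch with made-up numbers: the expected information gain of a test is the entropy of the prior match probability minus the expected entropy of the posterior over the test's outcomes (the pass rate and posteriors below are purely illustrative):

```python
# Sketch with made-up numbers: expected information gain of one hypothetical
# pass/fail test, starting from the ~1% prior probability of a match.
import math

def entropy(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

prior = 0.01          # ~1% prior probability of a successful match
p_pass = 0.20         # hypothetical probability that a developer passes the test
post_if_pass = 0.04   # hypothetical posterior if they pass
post_if_fail = 0.0025 # hypothetical posterior if they fail
# (consistent with the prior: 0.20 * 0.04 + 0.80 * 0.0025 = 0.01)

expected_posterior_entropy = (
    p_pass * entropy(post_if_pass) + (1 - p_pass) * entropy(post_if_fail)
)
info_gain = entropy(prior) - expected_posterior_entropy
print(f"expected information gain: {info_gain:.4f} bits")

# Dividing this by the test's expected duration gives information gain per
# unit time, the quantity an adaptive test sequence would try to maximize.
```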

As an ML Scientist on the team, you will have a unique opportunity to make an impact by advancing ML models and systems, as well as uncovering new opportunities to apply machine learning concepts to Turing product(s).

This role will directly report to Turing’s founder and CTO, Vijay Krishnan (https://www.linkedin.com/in/vijay0/). His Google Scholar profile: https://scholar.google.com/citations?user=uCRc7DgAAAAJ&hl=en.


Responsibilities:

  • Enhance our existing machine learning systems using your core coding skills and ML knowledge.
  • Take end-to-end ownership of machine learning systems, from data pipelines, feature engineering, candidate extraction, and model training through to integration into our production systems.
  • Utilize state-of-the-art ML modeling techniques to predict user interactions and their direct impact on the company’s top-line metrics.
  • Design features and build large-scale recommendation systems to improve targeting and engagement.
  • Identify new opportunities to apply machine learning to different parts of our product(s) to drive value for our customers.

 

 

Minimum Requirements:

 

  • BS, MS, or Ph.D. in Computer Science or a relevant technical field (AI/ML preferred).
  • Extensive experience building scalable machine learning systems and data-driven products while working with cross-functional teams.
  • Expertise in machine learning fundamentals applicable to search: Learning to Rank, Deep Learning, tree-based models, recommendation systems, relevance and data mining, and an understanding of NLP approaches like word2vec or BERT.
  • 2+ years of experience applying machine learning methods in settings such as recommender systems, search, user modeling, graph representation learning, or natural language processing.
  • Strong understanding of neural networks/deep learning, feature engineering, feature selection, and optimization algorithms. Proven ability to dig deep into practical problems and choose the right ML method to solve them.
  • Strong programming skills in Python and fluency in data manipulation (SQL, Spark, Pandas) and machine learning (scikit-learn, XGBoost, Keras/Tensorflow) tools.
  • Good understanding of mathematical foundations of machine learning algorithms.
  • Ability to be available for meetings and communication during Turing's "coordination hours" (Mon - Fri: 8 am to 12 pm PST).

 

Other Nice-to-have Requirements:

 

  • First-author publications at ICML, ICLR, NeurIPS, KDD, SIGIR, and related conferences/journals.
  • Strong performance in Kaggle competitions.
  • 5+ years of industry experience, or a Ph.D. with 3+ years of industry experience, in applied machine learning on similar problems, e.g., ranking, recommendation, ads, etc.
  • Strong communication skills.
  • Experienced in leading large-scale multi-engineering projects.
  • Flexible, and a positive team player with outstanding interpersonal skills.

About Turing

Founded: 2018
Stage: Raised funding
Hire the top 1% of 150,000 senior remote software engineers | Find remote U.S. developer jobs | Full-stack, mobile, frontend, backend, DevOps, AI/ML and more.

Similar jobs

Mumbai, Navi Mumbai
6 - 14 yrs
₹16L - ₹37L / yr
Python
PySpark
Data engineering
Big Data
Hadoop
+3 more

Role: Principal Software Engineer


We are looking for a passionate Principal Engineer - Analytics to build data products that extract valuable business insights for efficiency and customer experience. This role will require managing, processing, and analyzing large amounts of raw information in scalable databases. It will also involve developing unique data structures and writing algorithms for an entirely new set of products. The candidate must have critical thinking and problem-solving skills, be experienced in software development with advanced algorithms, and be able to handle large volumes of data. Exposure to statistics and machine learning algorithms is a big plus. The candidate should also have some exposure to cloud environments, continuous integration, and Agile/Scrum processes.



Responsibilities:


• Lead projects both as a principal investigator and project manager, responsible for meeting project requirements on schedule

• Software development that creates data-driven intelligence in products that deal with Big Data backends

• Exploratory analysis of the data to come up with efficient data structures and algorithms for the given requirements

• The system may or may not involve machine learning models and pipelines but will require advanced algorithm development

• Managing data in large-scale data stores (such as NoSQL DBs, time-series DBs, geospatial DBs, etc.)

• Creating metrics and evaluating algorithms for better accuracy and recall

• Ensuring efficient access and usage of data through indexing, clustering, etc.

• Collaborate with engineering and product development teams.


Requirements:


• Master’s or Bachelor’s degree in Engineering in one of these domains - Computer Science, Information Technology, Information Systems, or a related field - from a top-tier school

• OR a Master’s degree or higher in Statistics or Mathematics, with a hands-on background in software development

• 8 to 10 years of experience in product development, including algorithmic work

• 5+ years of experience working with large data sets or doing large-scale quantitative analysis

• Understanding of SaaS-based products and services

• Strong algorithmic problem-solving skills

• Able to mentor and manage a team and take responsibility for team deadlines


Skill set required:


• In-depth knowledge of the Python programming language

• Understanding of software architecture and software design

• Must have fully managed a project with a team

• Experience working with Agile project management practices

• Experience with data processing, analytics, and visualization tools in Python (such as pandas, matplotlib, SciPy, etc.)

• Strong understanding of SQL and of querying NoSQL databases (e.g. MongoDB, Cassandra, Redis)

codersbrain
Posted by Tanuj Uppal
Delhi
4 - 8 yrs
₹2L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more
  • Mandatory: Hands-on experience in Python and PySpark.
  • Build PySpark applications using Spark DataFrames in Python, with Jupyter Notebook and PyCharm as IDEs (a minimal sketch follows this list).
  • Experience optimizing Spark jobs that process huge volumes of data.
  • Hands-on experience with version control tools like Git.
  • Experience with Amazon’s analytics services like Amazon EMR, Lambda functions, etc.
  • Experience with Amazon’s compute services like AWS Lambda and Amazon EC2, storage services like S3, and a few other services like SNS.
  • Experience/knowledge of bash/shell scripting will be a plus.
  • Experience working with fixed-width, delimited, and multi-record file formats, etc.
  • Hands-on experience with tools like Jenkins to build, test, and deploy applications.
  • Awareness of DevOps concepts and the ability to work in an automated release pipeline environment.
  • Excellent debugging skills.
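As referenced in the list above, a minimal illustrative PySpark sketch; the S3 paths and column names are hypothetical and stand in for whatever data this role actually processes:

```python
# Minimal, illustrative PySpark sketch: the S3 paths and column names below
# are hypothetical. Reads a CSV into a DataFrame, filters and aggregates it,
# and writes the result as Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

orders = spark.read.csv(
    "s3://example-bucket/orders.csv", header=True, inferSchema=True
)

daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

daily_totals.write.mode("overwrite").parquet("s3://example-bucket/daily_totals/")
spark.stop()
```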
Mobile Programming LLC
Posted by Sukhdeep Singh
Chennai
4 - 7 yrs
₹13L - ₹15L / yr
Data Analytics
Data Visualization
PowerBI
Tableau
Qlikview
+10 more

Title: Platform Engineer
Location: Chennai
Work Mode: Hybrid (Remote and Chennai Office)
Experience: 4+ years
Budget: 16 - 18 LPA

Responsibilities:

  • Parse data using Python, create dashboards in Tableau.
  • Utilize Jenkins for Airflow pipeline creation and CI/CD maintenance.
  • Migrate Datastage jobs to Snowflake, optimize performance.
  • Work with HDFS, Hive, Kafka, and basic Spark.
  • Develop Python scripts for data parsing, quality checks, and visualization.
  • Conduct unit testing and web application testing.
  • Implement Apache Airflow and handle production migration (a minimal DAG sketch follows this list).
  • Apply data warehousing techniques for data cleansing and dimension modeling.
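As noted in the list above, a minimal illustrative Airflow DAG sketch; the DAG id, task names, and callables are hypothetical placeholders:

```python
# Minimal, illustrative Airflow DAG: the dag_id, task names, and callables
# are hypothetical placeholders for this role's actual pipelines.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def parse_data(**context):
    # placeholder for the Python parsing logic
    print("parsing source files")

def quality_check(**context):
    # placeholder for row-count / schema quality checks
    print("running data quality checks")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    parse = PythonOperator(task_id="parse_data", python_callable=parse_data)
    check = PythonOperator(task_id="quality_check", python_callable=quality_check)
    parse >> check
```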

Requirements:

  • 4+ years of experience as a Platform Engineer.
  • Strong Python skills, knowledge of Tableau.
  • Experience with Jenkins, Snowflake, HDFS, Hive, and Kafka.
  • Proficient in Unix Shell Scripting and SQL.
  • Familiarity with ETL tools like DataStage and DMExpress.
  • Understanding of Apache Airflow.
  • Strong problem-solving and communication skills.

Note: Only candidates willing to work in Chennai and available for immediate joining will be considered. Budget for this position is 16 - 18 LPA.

Ascendeum
Posted by Sonali Jain
Remote only
1 - 3 yrs
₹6L - ₹9L / yr
Python
CI/CD
Storage & Networking
Data storage
  • Understand long-term and short-term business requirements and precisely match them with the capabilities of the many distributed storage and computing technologies available in the ecosystem.
  • Create complex data processing pipelines.
  • Design scalable implementations of the models developed by our Data Scientists.
  • Deploy data pipelines in production systems based on CI/CD practices.
  • Create and maintain clear documentation on data models/schemas as well as transformation/validation rules.
  • Troubleshoot and remediate data quality issues raised by pipeline alerts or downstream consumers.

Amagi Media Labs
Posted by Rajesh C
Chennai
15 - 18 yrs
Best in industry
Data architecture
Architecture
Data Architect
Architect
Java
+5 more
Job Title: Data Architect
Job Location: Chennai
Job Summary

The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a Data Architecture strategy across various Data Lake platforms. You will help develop reference architecture and roadmaps to build highly available, scalable and distributed data platforms using cloud-based solutions to process high volume, high velocity and a wide variety of structured and unstructured data. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast’s industry-leading journalism.
Primary Responsibilities
The Data Architect is responsible for:
• Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
• Enterprise-scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration frameworks and highly scalable distributed systems using open-source and emerging data architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies is desirable.
• This role requires 15+ years of data solution architecture, design and development delivery experience.
• Solid experience in Agile methodologies (Kanban and Scrum).
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational databases and NoSQL databases on the cloud.
• Proven leadership skills; demonstrated ability to mentor, influence and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, DataStage, etc.) / ELT and data integration technologies.
• Experience in at least one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and Data Lakes is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS, etc.
• Understanding of Hadoop architecture and Hive SQL.
• Knowledge of at least one workflow orchestration tool.
• Understanding of the Agile framework and delivery.

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure to workflow orchestration tools like Airflow is a plus
● Exposure to any one of the NoSQL databases would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of digital web events, ad streams, context models
About Condé Nast

CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms - generating, in the process, a staggering amount of user data. Condé Nast made the right move to invest heavily in understanding this data and formed a whole new Data team entirely dedicated to data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel process innovation, further content enrichment, and increase market revenue. The Data team aimed to create a company culture where data was the common language and to facilitate an environment where insights shared in real time could improve performance.

The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost online revenue. We are broadly divided into four groups - Data Intelligence, Data Engineering, Data Science, and Operations (including Product and Marketing Ops, Client Services) - along with Data Strategy and monetization. The teams built capabilities and products to create data-driven solutions for better audience engagement.

What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
Astegic
Posted by Nikita Pasricha
Remote only
5 - 7 yrs
₹8L - ₹15L / yr
Data engineering
SQL
Relational Database (RDBMS)
Big Data
Scala
+14 more

WHAT YOU WILL DO:

● Create and maintain optimal data pipeline architecture.
● Assemble large, complex data sets that meet functional / non-functional business requirements.
● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
● Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, Hadoop, and AWS 'big data' technologies (EC2, EMR, S3, Athena).
● Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
● Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
● Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
● Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
● Work with data and analytics experts to strive for greater functionality in our data systems.

REQUIRED SKILLS & QUALIFICATIONS:

● 5+ years of experience in a Data Engineer role.
● Advanced working SQL knowledge and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
● Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
● Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
● Strong analytic skills related to working with unstructured datasets.
● Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
● A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
● Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
● Strong project management and organizational skills.
● Experience supporting and working with cross-functional teams in a dynamic environment.
● Experience with big data tools: Hadoop, Spark, Pig, Vertica, etc.
● Experience with AWS cloud services: EC2, EMR, S3, Athena.
● Experience with Linux.
● Experience with object-oriented/object function scripting languages: Python, Java, Shell, Scala, etc.

PREFERRED SKILLS & QUALIFICATIONS:

● Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.

CES Information Technologies
Posted by Yash Rathod
Hyderabad
7 - 12 yrs
₹5L - ₹15L / yr
Machine Learning (ML)
Deep Learning
Python
Data modeling
o A critical-thinking mind that likes to solve complex problems, loves programming, and cherishes working in a fast-paced environment.
o Strong Python development skills, with 7+ years of experience with SQL.
o A bachelor’s or master’s degree in Computer Science or related areas
o 5+ years of experience in data integration and pipeline development
o Experience implementing Databricks Delta Lake and data lakes
o Expertise in designing and implementing data pipelines using modern data engineering approaches and tools: SQL, Python, Delta Lake, Databricks, Snowflake, Spark
o Experience working with multiple file formats (Parquet, Avro, Delta Lake) and APIs
o Experience with AWS Cloud for data integration with S3
o Hands-on development experience with Python and/or Scala
o Experience with SQL and NoSQL databases
o Experience using data modeling techniques and tools (focused on dimensional design)
o Experience with micro-service architecture using Docker and Kubernetes
o Experience working with one or more of the public cloud providers, i.e. AWS, Azure or GCP
o Experience in effectively presenting and summarizing complex data to diverse audiences through visualizations and other means
o Excellent verbal and written communication skills and strong leadership capabilities

Skills:
ML
Modelling
Python
SQL
Azure Data Lake, Data Factory, Databricks, Delta Lake
Ezeiatech systems
Posted by Preeti Rai
Gurugram
0 - 6 yrs
₹2L - ₹15L / yr
Data Science
R Programming
Python
● Responsible for developing new features and models as part of our core product through applied research.
● Understand, apply and extend state-of-the-art NLP research to better serve our customers.
● Work closely with engineering, product, and customers to scientifically frame the business problems and come up with the underlying AI models.
● Design, implement, test, deploy, and maintain innovative data and machine learning solutions to accelerate our business.
● Think creatively to identify new opportunities and contribute to high-quality publications or patents.
Desired Qualifications and Experience

● At least 1 year of professional experience.
● A Bachelor’s in Computer Science or a related field from a top college.
● Extensive knowledge and practical experience in one or more of the following areas: machine learning, deep learning, NLP, recommendation systems, information retrieval.
● Experience applying ML to solve complex business problems from scratch.
● Experience with Python and a deep learning framework like PyTorch/TensorFlow.
● Awareness of state-of-the-art research in the NLP community.
● Excellent verbal and written communication and presentation skills.
Greenway Health
Agency job via Vipsa Talent Solutions by Prashma S R
Bengaluru (Bangalore)
6 - 8 yrs
₹8L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more
6-8 years of experience as a data engineer
Spark
Hadoop
Big Data
Data engineering
PySpark
Python
AWS Lambda
SQL
Kafka
AI & ML Startup
Agency job via Unnati by Dipannita Chakraborty
NCR (Delhi | Gurgaon | Noida)
4 - 7 yrs
₹20L - ₹25L / yr
Deep Learning
Machine Learning (ML)
Algorithms
Analysis of algorithms
We are looking for passionate engineers and researchers who want to contribute to this exciting and fast-moving field of Deep Learning and research.

Our client is a highly awarded AI and Machine Learning lab, which is disrupting the multi-billion-dollar agriculture and commodities business globally. They are recognized as a de facto business for expert AI capability, with solutions that address real-world challenges in near real time.

As the Lead Engineer - Deep Learning, you will be responsible for leading research and software implementation for new concept prototypes in the areas of computer vision and deep learning.
 
What you will do:
  • Focusing on developing new concepts and user experiences through rapid prototyping and collaboration with the best-in-class research and development team.
  • Reading research papers and implementing state-of-the-art techniques for computer vision.
  • Building and managing datasets.
  • Providing rapid experimentation, analysis, and deployment of machine/deep learning models.
  • Helping develop new and rapid prototypes based on requirements set by the team.
  • Developing end-to-end products for problems related to agritech and other use cases.
  • Leading the deep learning team.



What you need to have:
  • MS/ME/PhD in Computer Science, Computer Engineering, or an equivalent field; proficient in Python and C++, with CUDA a plus.
  • International conference papers/patents, algorithm design, deep learning development, programming (Python, C/C++).
  • Knowledge of multiple deep-learning frameworks, such as Caffe, TensorFlow, Theano, Torch/PyTorch.
  • Problem solving: deep learning development.
  • Vision, perception, control, and planning algorithm development.
  • Track record of excellence in machine learning / perception / control, including patents and publications in international conferences or journals.
  • Communication: good communication skills.