About Turing:
Turing enables U.S. companies to hire the world's best remote software engineers. 100+ companies, including those backed by Sequoia, Andreessen, Google Ventures, Benchmark, Founders Fund, Kleiner, Lightspeed, and Bessemer, have hired Turing engineers. For more than 180,000 engineers across 140 countries, we are the preferred platform for finding remote U.S. software engineering roles. We offer a wide range of full-time remote opportunities for full-stack, backend, frontend, DevOps, mobile, and AI/ML engineers.
We are growing fast (our revenue 15x'd in the past 12 months and is accelerating), and we have raised $14M in seed funding (one of the largest seed rounds in Silicon Valley) from:
- Facebook’s 1st CTO and Quora’s Co-Founder (Adam D’Angelo)
- Executives from Google, Facebook, Square, Amazon, and Twitter
- Foundation Capital (investors in Uber, Netflix, Chegg, Lending Club, etc.)
- Cyan Banister
- Founder of Upwork (Beerud Sheth)
We also raised a much larger round of funding in October 2020 that we will be publicly announcing over the coming month.
Some articles about Turing:
- TechCrunch: Turing raises $14M seed to help source, vet, place, and manage remote developers
- The Information: Six Startups Prospering During Coronavirus
- Cyan Banister: Turing Helps the World Level Up
- Jonathan Siddharth (Turing CEO): The Future of Work is Remote.
Turing is led by successful repeat founders Jonathan Siddharth and Vijay Krishnan, whose last A.I. company leveraged elite remote talent and was successfully acquired (TechCrunch story). Turing's leadership team includes former engineering and sales leaders from Facebook, Google, Uber, and Capgemini.
About the role:
Software developers from all over the world have taken 200,000+ tests and interviews on Turing. Turing has also recommended thousands of developers to its customers and collected feedback on them, both as interview pass/fail outcomes and as data on how successful each collaboration with a U.S. customer turned out to be. This generates a massive proprietary dataset with a rich feature set comprising resume and test/interview features, and labels in the form of actual customer feedback. Continued rapid growth in our business creates an ever-increasing data advantage for us.
We are looking for a Machine Learning Scientist who can help solve a whole range of exciting and valuable machine learning problems at Turing. Turing collects a lot of valuable, heterogeneous signals about software developers: their resumes, their GitHub profiles and associated code, fine-grained signals from Turing's own screening tests and interviews (spanning computer science fundamentals, project ownership and collaboration, communication skills, proactivity, and tech-stack skills), their history of successful collaborations with different companies on Turing, and more.
A Machine Learning Scientist at Turing will help create deep developer profiles that accurately represent a developer's strengths and weaknesses as they relate to the probability of being successfully matched with one of Turing's partner companies and having a fruitful long-term collaboration. The ML Scientist will build models that rank developers for different jobs by their probability of success at each job.
You will also help make Turing's tests more efficient by assessing their ability to predict the probability of a successful match between a developer and at least one company. The prior probability of a registered developer getting matched with a customer is about 1%. We want our tests to adaptively reduce perplexity as steeply as possible and move this probability estimate rapidly toward either 0% or 100%; in other words, to maximize expected information gain per unit time.
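To make the information-gain framing concrete, here is a minimal sketch of how one might score a single test question by the expected reduction in entropy of the match-probability estimate. The pass rate and conditional probabilities below are made-up placeholders (chosen to be Bayes-consistent with a ~1% prior), not Turing's actual numbers.

```python
import math

def entropy(p: float) -> float:
    """Shannon entropy (in bits) of a Bernoulli variable with success probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_information_gain(p_match: float, p_pass: float,
                              p_match_given_pass: float,
                              p_match_given_fail: float) -> float:
    """Expected drop in entropy of the match estimate after observing one test outcome."""
    h_before = entropy(p_match)
    h_after = (p_pass * entropy(p_match_given_pass)
               + (1 - p_pass) * entropy(p_match_given_fail))
    return h_before - h_after

# Illustrative numbers only: a 1% prior; the test moves the estimate
# to 4% on a pass (20% of takers) and 0.25% on a fail.
gain = expected_information_gain(p_match=0.01, p_pass=0.20,
                                 p_match_given_pass=0.04,
                                 p_match_given_fail=0.0025)
print(f"Expected information gain: {gain:.4f} bits")
# Dividing the gain by the test's duration gives the "per unit time" criterion.
```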
As an ML Scientist on the team, you will have a unique opportunity to make an impact by advancing ML models and systems, as well as uncovering new opportunities to apply machine learning concepts to Turing product(s).
This role will directly report to Turing’s founder and CTO, Vijay Krishnan. This is his Google Scholar profile.
Responsibilities:
- Enhance our existing machine learning systems using your core coding skills and ML knowledge.
- Take end-to-end ownership of machine learning systems, from data pipelines, feature engineering, candidate extraction, and model training through to integration into our production systems.
- Utilize state-of-the-art ML modeling techniques to predict user interactions and their direct impact on the company's top-line metrics.
- Design features and build large-scale recommendation systems to improve targeting and engagement (a minimal ranking sketch follows this list).
- Identify new opportunities to apply machine learning to different parts of our product(s) to drive value for our customers.
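To make the ranking work above concrete, here is a minimal, hypothetical sketch of pointwise ranking: a gradient-boosted classifier (scikit-learn, which appears in the requirements below) is trained on historical developer-job outcomes, and candidates are then ordered by predicted probability of a successful match. The features and data are placeholders, not Turing's actual schema.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Placeholder features per (developer, job) pair:
# [test score, years of experience, tech-stack overlap with the job].
X_train = np.array([
    [0.92, 6, 0.8],
    [0.55, 2, 0.3],
    [0.78, 4, 0.9],
    [0.40, 1, 0.2],
])
y_train = np.array([1, 0, 1, 0])  # 1 = the past collaboration succeeded

model = GradientBoostingClassifier().fit(X_train, y_train)

# Rank a new pool of developers for one job by predicted success probability.
X_candidates = np.array([
    [0.88, 5, 0.7],
    [0.60, 3, 0.5],
    [0.95, 8, 0.9],
])
scores = model.predict_proba(X_candidates)[:, 1]
ranking = np.argsort(-scores)  # indices of the best candidates first
print(ranking, scores[ranking])
```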
Minimum Requirements:
- BS, MS, or Ph.D. in Computer Science or a relevant technical field (AI/ML preferred).
- Extensive experience building scalable machine learning systems and data-driven products while working with cross-functional teams.
- Expertise in machine learning fundamentals applicable to search and ranking: learning to rank, deep learning, tree-based models, recommendation systems, relevance, and data mining, along with an understanding of NLP approaches such as word2vec or BERT.
- 2+ years of experience applying machine learning methods in settings such as recommender systems, search, user modeling, graph representation learning, and natural language processing.
- Strong understanding of neural networks/deep learning, feature engineering, feature selection, and optimization algorithms. Proven ability to dig deep into practical problems and choose the right ML method to solve them.
- Strong programming skills in Python and fluency with data manipulation (SQL, Spark, Pandas) and machine learning (scikit-learn, XGBoost, Keras/TensorFlow) tools.
- Good understanding of mathematical foundations of machine learning algorithms.
- Ability to be available for meetings and communication during Turing's "coordination hours" (Mon - Fri: 8 am to 12 pm PST).
Other Nice-to-have Requirements:
- First author publications in ICML, ICLR, NeurIPS, KDD, SIGIR, and related conferences/journals.
- Strong performance in Kaggle competitions.
- 5+ years of industry experience, or a Ph.D. with 3+ years of industry experience, in applied machine learning on similar problems, e.g., ranking, recommendation, ads.
- Strong communication skills.
- Experience leading large-scale, multi-engineer projects.
- Flexible, positive team player with outstanding interpersonal skills.
Similar jobs
ML Engineer
at Inviz Ai Solutions Private Limited
Experience: 2+ years
Responsibilities (but not limited to):
- Create data staging and transformation layers
- Prepare model-ready data
- Create a consumption layer for data/models by exposing them as services (see the sketch after this list)
- Maintain, monitor, and ensure scalability
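As a hypothetical illustration of exposing a model as a service (FastAPI is one of the frameworks named under Preferred Skills below), the sketch loads a placeholder serialized model and serves predictions over HTTP. The model path and feature schema are assumptions made for the example, not part of the actual role.

```python
# Minimal model-serving sketch with FastAPI; run with: uvicorn serve:app --reload
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Placeholder: a pre-trained scikit-learn model serialized to disk.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

class Features(BaseModel):
    # Assumed feature schema, for illustration only.
    feature_1: float
    feature_2: float
    feature_3: float

@app.post("/predict")
def predict(features: Features):
    x = [[features.feature_1, features.feature_2, features.feature_3]]
    return {"prediction": float(model.predict(x)[0])}
```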
Preferred Skills (but not limited to):
- Strong background in handling data: writing efficient SQL and Python scripts, optimizing queries and loops, designing dataflow jobs, identifying and removing bottlenecks in code, data structures, and design
- Strong background in deploying ML/data as a service: writing APIs, monitoring, error handling, load balancing, access control, and authentication
- Conversant with API development tools (e.g., GCP Apigee, FastAPI, Spring Boot)
- Understanding of Apache Airflow, Spark Streaming, and Spark ML
- Familiarity with JavaScript and jQuery UI development and UX design, keeping load balancing and other front-end aspects in mind
- 5+ years of industry experience administering (including setting up, managing, and monitoring) data processing pipelines (both streaming and batch) using frameworks such as Kafka Streams and PySpark, and streaming databases like Druid or equivalents such as Hive
- Strong industry expertise with containerization technologies, including Kubernetes (EKS/AKS) and Kubeflow
- Experience with cloud platform services such as AWS, Azure, or GCP, especially EKS and managed Kafka
- 5+ years of industry experience in Python
- Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
- Experience with scripting languages (Python highly desirable) and with API development using Swagger
- Implementing automated testing platforms and unit tests
- Proficient understanding of code versioning tools, such as Git
- Familiarity with continuous integration (e.g., Jenkins)
Responsibilities
- Architect, design, and implement large-scale data processing pipelines using Kafka Streams, PySpark, Fluentd, and Druid (see the streaming sketch after this list)
- Create custom operators for Kubernetes and Kubeflow
- Develop data ingestion processes and ETLs
- Assist with DevOps operations
- Design and implement APIs
- Identify performance bottlenecks and bugs, and devise solutions to these problems
- Help maintain code quality, organization, and documentation
- Communicate with stakeholders regarding various aspects of the solution.
- Mentor team members on best practices
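As a hypothetical illustration of the Kafka-to-Spark pipelines listed above, here is a minimal PySpark Structured Streaming sketch. The broker address, topic name, and output paths are placeholders, and the spark-sql-kafka connector package must be available on the Spark classpath.

```python
# Minimal Kafka -> Spark Structured Streaming sketch (placeholder broker, topic, paths).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("event-pipeline").getOrCreate()

# Read a stream of raw events from Kafka.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "raw-events")                  # placeholder topic
    .load()
)

# Kafka delivers key/value as binary; cast the value to a string for downstream processing.
parsed = events.select(col("value").cast("string").alias("payload"))

# Write the parsed stream out as Parquet with checkpointing for fault tolerance.
query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "/data/parsed-events")              # placeholder output path
    .option("checkpointLocation", "/data/checkpoints")  # placeholder checkpoint path
    .start()
)
query.awaitTermination()
```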
Data Scientist - Wholesale Banking
- The Data & Analytics team is responsible for integrating new data sources and building data models, data dictionaries, and machine learning models for the Wholesale Bank.
- The goal is to design and build data products that support squads in the Wholesale Bank with business outcomes and the development of business insights. In this job family we distinguish between Data Analysts and Data Scientists. Both scientists and analysts work with data and are expected to write queries, work with engineering teams to source the right data, perform data munging (getting data into the correct format, convenient for analysis/interpretation), and derive information from data.
- The Data Analyst typically works on simpler, structured SQL or similar databases or with other BI tools/packages. The Data Scientist is expected to build statistical models and be hands-on in machine learning and advanced programming.
- The Data Scientist's role is to support our Corporate Banking teams with insights gained from analyzing company data. The ideal candidate is adept at using large data sets to find opportunities for product and process optimization and at using models to test the effectiveness of different courses of action. They must have strong experience using a variety of data mining/data analysis methods and data tools, building and implementing models, using/creating algorithms, and creating/running simulations. They must have banking or corporate banking experience.
Experience: 6 - 10 years
Analytics
- Comfortable building analytical solutions for the Wholesale Banking domain within an AI/ML platform
- Identifying valuable data sources and automating collection processes
- Undertaking preprocessing of structured and unstructured data
- Analyzing large amounts of information to discover trends and patterns
- Building predictive models and machine-learning algorithms
- Combining models through ensemble modeling (see the sketch after this list)
- Presenting information using data visualization techniques
- Proposing solutions and strategies to business challenges
- Collaborating with engineering and product development teams
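As a hypothetical illustration of combining models through ensemble modeling (on synthetic data, not this role's banking data), the sketch below blends a logistic regression and a random forest with scikit-learn's soft-voting ensemble and evaluates the result with cross-validation.

```python
# Minimal ensemble-modeling sketch on a synthetic dataset (illustration only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",  # average the predicted probabilities of the base models
)

scores = cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc")
print(f"Mean ROC AUC across folds: {scores.mean():.3f}")
```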
What you need to have:
- Data Scientist with a minimum of 3 years of experience in Analytics or Data Science, preferably in Pricing or the Polymer Market
- Experience using scripting languages like Python (preferred) or R is a must.
- Experience with SQL and Tableau is good to have
- Strong numerical, problem-solving, and analytical aptitude
- Ability to make data-based decisions
- Ability to present/communicate analytics-driven insights.
- Critical and analytical thinking skills
Glance – An InMobi Group Company:
Glance is an AI-first Screen Zero content discovery platform, and it has scaled massively in the last few months to become one of the largest platforms in India. Glance is a lock-screen-first mobile content platform set up within InMobi. The average mobile phone user unlocks their phone >150 times a day. Glance aims to be there, providing visually rich, easy-to-consume content to entertain and inform mobile users - one unlock at a time. Glance is already live on more than 80 million mobile phones in India, and we are only getting started on this journey! We are now in phase 2 of the Glance story - we are going global!
Roposo is part of the Glance family. It is a short-video entertainment platform. All the videos created here are user-generated (via upload or the Roposo creation tools in the camera), and many communities create these videos on various themes we call channels. Around 4 million videos are created every month on Roposo and power Roposo channels; some of the channels are HaHa TV (comedy videos), News, and Beats (singing/dance performances), along with For You (personalized for the user) and Your Feed (videos from the people a user follows).
What’s the Glance family like?
Consistently featured among the “Great Places to Work” in India since 2017, our culture is our true north, enabling us to think big, solve complex challenges and grow with new opportunities. Glanciers are passionate and driven, creative and fun-loving, take ownership and are results-focused. We invite you to free yourself, dream big and chase your passion.
What can we promise?
We offer an opportunity to have an immediate impact on the company and our products. The work you do will be mission-critical for Glance and key to optimizing tech operations, working with highly capable and ambitious peer groups. At Glance, you get food for your body, soul, and mind with daily meals, gym, and yoga classes, cutting-edge training and tools, cocktails at drink cart Thursdays, and fun at work on Funky Fridays. We even promise to let you bring your kids and pets to work.
What will you be doing?
Glance is looking for a Data Scientist who will design and develop processes and systems to analyze high-volume, diverse "big data" sources using advanced mathematical, statistical, querying, and reporting methods. You will use machine learning techniques and statistical analysis to predict outcomes and behaviors, interact with business partners to identify questions for data analysis and experiments, identify meaningful insights from large data and metadata sources, and interpret and communicate those insights or prepare output from analyses and experiments for business partners.
You will be working with Product leadership, taking high-level objectives and developing solutions that fulfil these requirements. Stakeholder management across Eng, Product and Business teams will be required.
Basic Qualifications:
- Five+ years of experience working in a Data Science role
- Extensive experience developing and deploying ML models in real world environments
- Bachelor's degree in Computer Science, Mathematics, Statistics, or other analytical fields
- Exceptional familiarity with Python, Java, Spark or other open-source software with data science libraries
- Experience in advanced math and statistics
- Excellent familiarity with the command-line Linux environment
- Able to understand various data structures and common methods in data transformation
- Experience deploying machine learning models and measuring their impact
- Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
Preferred Qualifications
- Experience developing recommendation systems
- Experience developing and deploying deep learning models
- Bachelor’s or Master's Degree or PhD that included coursework in statistics, machine learning or data analysis
- Five+ years of experience working with Hadoop, a NoSQL database, or other big data infrastructure
- Experience being actively engaged in data science or another research-oriented position
- Comfortable collaborating with cross-functional teams.
- Active personal GitHub account.
ML Engineer-Analyst/ Senior Analyst
Job purpose:
To design and develop machine learning and deep learning systems; run machine learning tests and experiments and implement appropriate ML algorithms. Works cross-functionally with Data Scientists, software application developers, and business groups on the development of innovative ML models. Uses Agile experience to work collaboratively with other managers/owners in geographically distributed teams.
Accountabilities:
- Work with Data Scientists and Business Analysts to frame problems in a business context. Assist with all processes from data collection, cleaning, and preprocessing to training models and deploying them to production.
- Understand business objectives and develop models that help achieve them, along with metrics to track their progress.
- Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world.
- Define validation strategies, the preprocessing or feature engineering to be done on a given dataset, and data augmentation pipelines (see the sketch after this list).
- Analyze the errors of the model and design strategies to overcome them.
- Collaborate with data engineers to build data and model pipelines, manage the infrastructure and data pipelines needed to bring code to production and demonstrate end-to-end understanding of applications (including, but not limited to, the machine learning algorithms) being created.
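To make the validation and preprocessing accountabilities above concrete, here is a minimal, hypothetical scikit-learn sketch that wires preprocessing and a model into a single pipeline and evaluates it with a stratified cross-validation strategy. The columns and data are placeholders.

```python
# Minimal preprocessing + validation-strategy sketch (placeholder columns, toy data).
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "age": [25, 40, 31, 52, 46, 29, 38, 60],
    "income": [30e3, 80e3, 50e3, 120e3, 90e3, 45e3, 70e3, 150e3],
    "segment": ["a", "b", "a", "c", "b", "a", "c", "b"],
    "churned": [0, 1, 0, 1, 1, 0, 0, 1],
})

# Scale numeric columns and one-hot encode the categorical one.
preprocess = ColumnTransformer([
    ("numeric", StandardScaler(), ["age", "income"]),
    ("categorical", OneHotEncoder(handle_unknown="ignore"), ["segment"]),
])

pipeline = Pipeline([
    ("preprocess", preprocess),
    ("model", RandomForestClassifier(n_estimators=100, random_state=0)),
])

# Stratified folds keep the class balance comparable across splits.
cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
scores = cross_val_score(pipeline, df.drop(columns="churned"), df["churned"], cv=cv)
print("Fold accuracies:", scores)
```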
Qualifications & Specifications
- Bachelor's degree in Engineering, Computer Science, Math, Statistics, or equivalent. A Master's degree in a relevant specialization is preferred.
- Experience with machine learning algorithms and libraries
- Understanding of data structures, data modeling and software architecture.
- Deep knowledge of math, probability, statistics and algorithms
- Experience with machine learning platforms such as Microsoft Azure, Google Cloud, IBM Watson, and Amazon
- Big data environment: Hadoop, Spark
- Programming languages: Python, R, PySpark
- Supervised & unsupervised machine learning: linear regression, logistic regression, k-means clustering, ensemble models, random forest, SVM, gradient boosting (see the sketch after this list)
- Sampling data: bagging & boosting, bootstrapping
- Neural networks: ANN, CNN, RNN, and related topics
- Deep learning: Keras, TensorFlow
- Experience with AWS SageMaker deployment and agile methodology
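As a hypothetical illustration of a couple of the algorithms named above (on synthetic data, not tied to this role), the sketch below runs k-means clustering and a random-forest classifier with scikit-learn.

```python
# Small illustration of k-means clustering and a random-forest classifier.
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=42)

# Unsupervised: group the samples into 3 clusters.
clusters = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
print("Cluster sizes:", [(clusters == k).sum() for k in range(3)])

# Supervised: hold out a test set and fit a random-forest classifier.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
forest = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
print("Test accuracy:", forest.score(X_test, y_test))
```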
Researcher - Deep Learning & AI
Data Engineer
- 5+ years of experience in a Data Engineer role
- Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases such as Cassandra.
- Experience with AWS cloud services: EC2, EMR, Athena
- Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
- Advanced SQL knowledge and experience working with relational databases, query authoring (SQL) as well as familiarity with unstructured datasets.
- Deep problem-solving skills to perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
1) Machine learning development using Python or Scala Spark (see the sketch after this list)
2) Knowledge of multiple ML algorithms like random forest, XGBoost, RNN, CNN, transfer learning, etc.
3) Awareness of typical challenges in machine learning implementation and their respective applications
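As a hypothetical illustration of machine learning development on Spark from Python (PySpark rather than Scala, purely as an example), the sketch below assembles a feature vector and fits a random-forest model with pyspark.ml; the columns and data are placeholders.

```python
# Minimal Spark ML sketch: feature assembly + random forest (placeholder data).
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier

spark = SparkSession.builder.appName("ml-example").getOrCreate()

df = spark.createDataFrame(
    [(0.9, 12.0, 1.0), (0.2, 3.0, 0.0), (0.7, 8.0, 1.0), (0.1, 1.0, 0.0)],
    ["score", "tenure", "label"],
)

# Combine the raw columns into the single feature vector Spark ML expects.
assembler = VectorAssembler(inputCols=["score", "tenure"], outputCol="features")
train = assembler.transform(df)

model = RandomForestClassifier(featuresCol="features", labelCol="label",
                               numTrees=20).fit(train)
model.transform(train).select("label", "prediction").show()
```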
Good to have
1) Stack development or DevOps team experience
2) Cloud services (AWS, Cloudera), SaaS, PaaS
3) Big data tools and frameworks
4) SQL experience