Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. We’re looking for a superstar engineer who is well versed in AI and audio technologies: audio detection, speech-to-text, interpretation, and sentiment analysis.
You will be responsible for:
Developing audio algorithms to detect key moments within popular online games, such as:
Streamer speaking, shouting, etc.
Gunfire, explosions, and other in-game audio events
Speech-to-text and sentiment analysis of the streamer’s narration
Leveraging baseline technologies such as TensorFlow, and building models on top of them
Building neural network architectures for audio analysis as it pertains to popular games
Specifying exact requirements for training data sets, and working with analysts to create the data sets
Training final models, including techniques such as transfer learning, data augmentation, etc. to optimize models for use in a production environment
Working with back-end engineers to get all of the detection algorithms into production, to automate the highlight creation
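The posting describes detecting key audio moments (shouting, gunfire, explosions) with neural networks in TensorFlow. As a toy illustration of the underlying idea only, here is a minimal short-time-energy sketch in pure Python that flags loud frames against a median baseline. All function names, frame sizes, and thresholds are illustrative assumptions, not Sizzle's actual pipeline.

```python
import math

def short_time_energy(samples, frame_size=1024, hop=512):
    """Per-frame mean energy of a mono audio signal (list of floats)."""
    return [
        sum(s * s for s in samples[start:start + frame_size]) / frame_size
        for start in range(0, len(samples) - frame_size + 1, hop)
    ]

def detect_loud_events(samples, threshold_ratio=4.0, frame_size=1024, hop=512):
    """Return indices of frames whose energy exceeds threshold_ratio times
    the median energy -- a crude stand-in for shout/gunfire detection."""
    energies = short_time_energy(samples, frame_size, hop)
    baseline = sorted(energies)[len(energies) // 2]  # median frame energy
    return [i for i, e in enumerate(energies)
            if e > threshold_ratio * max(baseline, 1e-12)]

# A quiet tone with a loud burst in the middle is flagged; the quiet tone alone is not.
quiet = [0.01 * math.sin(0.1 * t) for t in range(8192)]
burst = quiet[:]
for t in range(4096, 5120):
    burst[t] = 0.8 * math.sin(0.3 * t)
```

A production detector would more likely compute log-mel spectrograms and feed them to a CNN, but the thresholding above captures the same "key moment vs. baseline" framing.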
You should have the following qualities:
Solid understanding of AI frameworks and algorithms, especially pertaining to audio analysis, speech-to-text, sentiment analysis, and natural language processing
Experience using Python, TensorFlow and other AI tools
Demonstrated understanding of various algorithms for audio analysis, such as CNNs, LSTM for natural language processing, and others
Nice to have: some familiarity with AI-based audio analysis including sentiment analysis
Familiarity with AWS environments
Excited about working in a fast-changing startup environment
Willingness to learn rapidly on the job, try different things, and deliver results
Ideally a gamer or someone interested in watching gaming content online
Machine Learning, Audio Analysis, Sentiment Analysis, Speech-To-Text, Natural Language Processing, Neural Networks, TensorFlow, OpenCV, AWS, Python
Work Experience: 2 years to 10 years
Sizzle is building AI to automate gaming highlights, directly from Twitch and YouTube videos. Presently, there are over 700 million fans around the world who watch gaming videos on Twitch and YouTube. Sizzle is creating a new highlights experience for these fans, so they can catch up on their favorite streamers and esports leagues. Sizzle is available at www.sizzle.gg.
o 3+ years of software engineering experience.
o Advanced knowledge of Python, with 2+ years in a production environment.
o Experience with practical applications of deep learning.
o Experience with agile, test-driven development, continuous integration, and automated testing.
o Experience with productionizing machine learning models and integrating them into web services.
o Experience with the full software development life cycle, including requirements collection, design, implementation, testing, and operational support.
o Excellent verbal and written communication, teamwork, decision-making, and influencing skills.
o Hustle. Thrives in an evolving, fast-paced, ambiguous work environment.
We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights.
In this role, you should be highly analytical, with a knack for math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research.
Your goal will be to help our company analyze trends to make better decisions.
1. 2 to 4 years of relevant industry experience
2. Experience with linear algebra, statistics, and probability (e.g., distributions), as well as machine learning and deep learning
3. Strong mathematical and statistics background is a must
4. Experience with machine learning frameworks such as TensorFlow, Caffe, PyTorch, or MXNet
5. Strong industry experience in using design patterns, algorithms and data structures
6. Industry experience in using feature engineering, model performance tuning, and optimizing machine learning models
7. Hands-on development experience in Python and packages such as NumPy, scikit-learn, and Matplotlib
8. Experience in model building and hyperparameter tuning
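The model building and hyperparameter tuning mentioned above can be sketched with the simplest tuning strategy, an exhaustive grid search (libraries such as scikit-learn offer `GridSearchCV` for the same idea). Everything here is illustrative: the scoring function is a toy stand-in for training and validating a real model.

```python
import itertools

def grid_search(train_and_score, param_grid):
    """Try every hyperparameter combination and keep the best scorer.

    train_and_score: callable mapping a dict of hyperparameters to a
    validation score (higher is better).
    """
    best_params, best_score = None, float("-inf")
    keys = sorted(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_and_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective with its peak at lr=0.1, depth=4.
def toy_score(p):
    return -(p["lr"] - 0.1) ** 2 - (p["depth"] - 4) ** 2

best, score = grid_search(toy_score, {"lr": [0.01, 0.1, 1.0], "depth": [2, 4, 8]})
```

In practice, random search or Bayesian optimization scales better than a full grid once the number of hyperparameters grows.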
Summary of the Position:
Genesys is the global omnichannel customer experience and contact center solution leader. With over 4,500 successful customers, our customer experience platform and solutions help companies engage effortlessly with their customers, across all touchpoints, channels and interactions to deliver differentiated customer journeys, while maximizing revenue and loyalty.
Genesys is embracing AI and Machine Learning in delivering AI-powered products like Predictive Routing, Chatbots, and Customer Journey management. PS Data Scientists share a passion for AI & data, working with global project teams and customers to extend data science services across Genesys solutions, both on premise and in the cloud.
Join us and be a part of this journey as we write Customer Success stories with these products.
WHAT YOU DO:
- Interface with business customers, gathering and understanding requirements
- Interface with customer and Genesys data science teams in discovery, extraction, loading, data transformation, and analysis of results
- Define and utilize data intuition process to cleanse and verify the integrity of customer & Genesys data to be used for analysis
- Implement, own, and improve data pipelines using best practices in data modeling, ETL/ELT processes
- Build, improve, and provide ongoing optimization of high quality models
- Work with PS & Engineering to deliver specific customer requirements and report back customer feedback, issues and feature requests. Continuous improvement in reporting, analysis, overall process.
- Visualize, present and demonstrate findings as required. Perform knowledge transfer to customer and internal teams.
- Communicate within the global community respecting cultural, language and time zone variations
- Demonstrate flexibility to adjust working hours to match customer and team interactions
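The "cleanse and verify the integrity" step in the responsibilities above can be illustrated with a minimal record-validation sketch. The field names and reject format are hypothetical, not part of any Genesys product; the point is that rejected rows are surfaced with a reason rather than silently dropped.

```python
def cleanse_records(records, required_fields):
    """Split raw records into clean rows and rejects (with a reason),
    so integrity problems are surfaced rather than silently dropped."""
    clean, rejects = [], []
    for rec in records:
        missing = [f for f in required_fields if not str(rec.get(f, "")).strip()]
        if missing:
            rejects.append((rec, "missing: " + ", ".join(missing)))
        else:
            clean.append({f: str(rec[f]).strip() for f in required_fields})
    return clean, rejects

raw = [
    {"id": "1", "channel": "chat"},    # valid
    {"id": "  ", "channel": "voice"},  # blank id
    {"id": "3"},                       # channel missing entirely
]
clean, rejects = cleanse_records(raw, ["id", "channel"])
```

A real ETL/ELT pipeline would add type checks, deduplication, and referential-integrity checks on top of this shape.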
- Bachelor’s / Master’s degree in quantitative field (e.g. Computer Science, Statistics, Engineering)
- 2-4 years of relevant experience in Data Science or Data Engineering
- 2+ years of hands-on experience performing statistical data analysis across large datasets, writing highly optimized SQL queries, and utilizing Python (NumPy, Pandas) or similar software (Primary)
- Expertise with major statistical & analytical software such as Python, R, or SAS (Primary)
- Experience in Snowflake, Tableau, Elasticsearch, Kibana, and real-time analytics solution development will be a major plus (Secondary)
- Application development background using a contact center product suite such as Genesys, Avaya, or Cisco
- Expertise with data modelling, data warehousing and ETL/ELT development
- Expertise with database solutions such as SQL, MongoDB, Redshift, Hadoop, Hive
- Proficiency with REST API, JSON, (AWS/AZURE/GCP) - Primary
- Experience in working and delivering projects independently. Ability to multi-task and context switch between projects and tasks
- Curiosity, passion, and drive for data queries, analysis, quality, models
- Excellent communication, initiative, and coordination skills with great attention to detail. Ability to explain and discuss complex topics with both experts and business leaders
How You Do It:
How You Think: Understands the business and takes a non-traditional approach to solving common problems. Willing to draw outside the lines and find new ways to make an impact on old problems.
How You Interact: Can easily build collaborative relationships that energize individuals, teams, and the company into action. You are a global thinker and can work across locations and time zones. You are an excellent communicator and listener, and can easily persuade to drive a vision and purpose.
How You Own It: You are a hands-on executor who can drive change and clearly communicate across all stakeholders.
How You Show Up:
Embodies Genesys core cultural values and pushes to create an authentic employee experience. You are the type of person who can succeed through ambiguity, bringing clarity where there is no roadmap, who can reset when a change in direction is needed without getting derailed or frustrated. You are authentic and instill trust in others.
Genesys® powers more than 25 billion of the world’s best customer experiences each year. We put the customer at the center of everything we do and passionately believe that great customer engagement drives great business outcomes. More than 10,000 companies in more than 100 countries trust the industry’s #1 customer experience platform to orchestrate omnichannel customer journeys that eliminate silos and build lasting relationships. With a strong track record of innovation and a never-ending desire to be first, Genesys is the only company recognized by top industry analysts as a leader in both cloud and on-premise customer engagement solutions. Connect with Genesys via www.genesys.com, Twitter, Facebook, YouTube, LinkedIn, and the Genesys blog.
Genesys is an equal opportunity employer committed to diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, and other protected characteristics.
We are looking for an exceptional Data Scientist Lead / Manager who is passionate about data and motivated to build large-scale machine learning solutions that make our data products shine. This person will contribute to data analytics for insight discovery and to the development of machine learning pipelines that support modeling of terabytes of daily data for various use cases.
Location: Pune (Initially remote due to COVID 19)
Looking for someone who can start immediately or within a month. Hands-on experience in Python programming (minimum 5 years) is a must.
About the Organisation :
- It provides a dynamic, fun workplace filled with passionate individuals. We are at the cutting edge of advertising technology and there is never a dull moment at work.
- We have a truly global footprint, with our headquarters in Singapore and offices in Australia, United States, Germany, United Kingdom and India.
- You will gain work experience in a global environment. We speak over 20 different languages, from more than 16 different nationalities and over 42% of our staff are multilingual.
• 8+ years relevant working experience
• Master / Bachelors in computer science or engineering
• Working knowledge of Python and SQL
• Experience in time series data, data manipulation, analytics, and visualization
• Experience working with large-scale data
• Proficiency of various ML algorithms for supervised and unsupervised learning
• Experience working in Agile/Lean model
• Experience with Java and Golang is a plus
• Experience with BI toolkit such as Tableau, Superset, Quicksight, etc is a plus
• Exposure to building large-scale ML models using one or more modern tools and libraries such as AWS SageMaker, Spark MLlib, Dask, TensorFlow, PyTorch, Keras, GCP ML Stack
• Exposure to modern Big Data tech such as Cassandra/Scylla, Kafka, Ceph, Hadoop, Spark
• Exposure to IAAS platforms such as AWS, GCP, Azure
Typical persona: Data Science Manager/Architect
Experience: 8+ years programming/engineering experience (with at least last 4 years in Data science in a Product development company)
Type: Hands-on candidate only
a. Hands-on Python: pandas, scikit-learn
b. Working knowledge of Kafka
c. Able to carry out own tasks and help the team in resolving problems - logical or technical (25% of job)
d. Good analytical and debugging skills
e. Strong communication skills
Desired (in order of priorities)
a. Go (Strong advantage)
b. Airflow (Strong advantage)
c. Familiarity and working experience with more than one type of database: relational, object, columnar, graph, and other unstructured databases
d. Data structures, Algorithms
e. Experience with multi-threaded and thread sync concepts
f. AWS SageMaker
Our Engineering team values productivity, integrity, and pragmatism.
We are looking for a seasoned ML engineer who can understand customer needs along with any technical constraints, take responsibility for conceiving product ideas based on the variety and amount of data available to us (in the Big Data ecosystem), and then translate them into product features that lead to the success of our customers. The ML engineer will also be expected to design the overall functionality of the product, code ML applications in distributed environments, and see them through to production.
● Degree in Computer Science, Mathematics, Data Science, Statistics or equivalent proven experience.
● Experience as an ML Engineer or Data scientist, ideally from a data or software engineering background.
● 4+ years of hands-on experience developing and deploying (Docker, Kubernetes, etc.) machine learning solutions into production environments.
● Excellent programming skills in Python and Scala and hands-on with Spark.
● Proficient with Machine Learning and Deep Learning frameworks such as scikit-learn, Spark ML, TensorFlow/Keras, or PyTorch
● Strong system design and architecture skills, having individually led a project from design to delivery
● Exposure to container orchestration of production environments, DevOps and CI/CD
● Experience with a JVM based language (Java, Scala, Kotlin) is a plus.
● Experience with time series data analysis and models
● Familiarity with cloud environments (GCP, AWS, Azure).
Understanding business objectives and developing models that help to achieve them, along with metrics to track their progress
Managing available resources such as hardware, data, and personnel so that deadlines are met
Analysing the ML algorithms that could be used to solve a given problem and ranking them by their success probability
Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world
Verifying data quality, and/or ensuring it via data cleaning
Supervising the data acquisition process if more data is needed
Defining validation strategies
Defining the pre-processing or feature engineering to be done on a given dataset
Defining data augmentation pipelines
Training models and tuning their hyperparameters
Analysing the errors of the model and designing strategies to overcome them
Deploying models to production
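One item above is "defining validation strategies"; the most common baseline is k-fold cross-validation. Here is a minimal, framework-free sketch of the index bookkeeping involved. The function name is illustrative; libraries such as scikit-learn provide `KFold` for the same job.

```python
def k_fold_indices(n_samples, k):
    """Yield (train_indices, val_indices) pairs for k-fold cross-validation;
    fold sizes differ by at most one sample."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, val
        start += size

# 10 samples, 3 folds: every sample appears in exactly one validation fold.
folds = list(k_fold_indices(10, 3))
```

For time series or grouped data, a plain k-fold split leaks information; a validation strategy then has to respect temporal order or group boundaries instead.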
DataWeave provides Retailers and Brands with “Competitive Intelligence as a Service” that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.
Data Science@DataWeave
We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org, applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.
How do we work?
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web. At serious scale!
What do we offer?
- Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
- Ability to see the impact of your work and the value you're adding to our customers almost immediately.
- Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
- A culture of openness. Fun work environment. A flat hierarchy. Organization wide visibility. Flexible working hours.
- Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
- Last but not the least, competitive salary packages and fast paced growth opportunities.
Who are we looking for?
The ideal candidate is a strong software developer or a researcher with experience building and shipping production-grade data science applications at scale. Such a candidate has a keen interest in liaising with the business and product teams to understand a business problem, and in translating it into a data science problem. You are also expected to develop capabilities that open up new business productization opportunities.
We are looking for someone with 6+ years of relevant experience working on problems in NLP or Computer Vision with a Master's degree (PhD preferred).
Key problem areas
- Preprocessing and feature extraction on noisy and unstructured data -- both text and images.
- Keyphrase extraction, sequence labeling, entity relationship mining from texts in different domains.
- Document clustering, attribute tagging, data normalization, classification, summarization, sentiment analysis.
- Image based clustering and classification, segmentation, object detection, extracting text from images, generative models, recommender systems.
- Ensemble approaches for all the above problems using multiple text and image based techniques.
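The last problem area above mentions ensemble approaches that combine multiple text- and image-based models. The simplest combiner is majority voting over per-model label predictions; here is a pure-Python sketch with hypothetical model outputs (the labels and models are invented for illustration).

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-sample label predictions from several models by
    majority vote; ties go to the first-seen label."""
    n_samples = len(predictions[0])
    return [
        Counter(model[i] for model in predictions).most_common(1)[0][0]
        for i in range(n_samples)
    ]

# Three hypothetical models voting on three samples.
preds = [
    ["cat", "dog", "dog"],
    ["cat", "cat", "dog"],
    ["dog", "cat", "dog"],
]
combined = majority_vote(preds)
```

Weighted voting or stacking (training a meta-model on the base models' outputs) are the usual next steps when the base models differ in reliability.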
Relevant set of skills
- Have a strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms and complexity.
- Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
- Excellent coding skills on multiple programming languages with experience building production grade systems. Prior experience with Python is a bonus.
- Experience building and shipping machine learning models that solve real world engineering problems. Prior experience with deep learning is a bonus.
- Experience building robust clustering and classification models on unstructured data (text, images, etc). Experience working with Retail domain data is a bonus.
- Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
- Experience working with a variety of tools and libraries for machine learning and visualization, including numpy, matplotlib, scikit-learn, Keras, PyTorch, Tensorflow.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Be a self-starter: someone who thrives in fast-paced environments with minimal ‘management’.
- It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show off some of your projects you have hosted on GitHub.
Role and responsibilities
- Understand the business problems we are solving. Build data science capabilities that align with our product strategy.
- Conduct research. Do experiments. Quickly build throw away prototypes to solve problems pertaining to the Retail domain.
- Build robust clustering and classification models in an iterative manner that can be used in production.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Take end to end ownership of the projects you are working on. Work with minimal supervision.
- Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
- Take initiatives to build new capabilities. Develop business awareness. Explore productization opportunities.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
- Stay on top of latest research in deep learning, NLP, Computer Vision, and other relevant areas.