Job Title – Data Scientist (Forecasting)
Anicca Data is seeking a Data Scientist (Forecasting) who is motivated to apply their skill set to solve complex and challenging problems. The role centers on applying deep learning models to real-world applications. The candidate should have experience in training and testing deep learning architectures, and is expected to work on existing codebases or write optimized code at Anicca Data. The ideal addition to our team is self-motivated, highly organized, and a team player who thrives in a fast-paced environment, with the ability to learn quickly and work independently.
Job Location: Remote (for the time being) and Bangalore, India (post-COVID crisis)
Required Skills:
- At least 3 years of experience in a Data Scientist role
- Bachelor's/Master's degree in Computer Science, Engineering, Statistics, Mathematics, or a similar quantitative discipline; a Ph.D. will add merit to the application process
- Experience with large data sets, big data, and analytics
- Exposure to statistical modeling, forecasting, and machine learning; deep theoretical and practical knowledge of deep learning, machine learning, statistics, probability, and time series forecasting
- Experience training Machine Learning (ML) algorithms for forecasting and prediction
- Experience in developing and deploying machine learning solutions in a cloud environment (AWS, Azure, Google Cloud) for production systems
- Research and enhance existing in-house, open-source models, integrate innovative techniques, or create new algorithms to solve complex business problems
- Experience in translating business needs into problem statements, prototypes, and minimum viable products
- Experience managing complex projects including scoping, requirements gathering, resource estimations, sprint planning, and management of internal and external communication and resources
- Write C++ and Python code, along with TensorFlow and PyTorch, to build and enhance the platform used for training ML models
Preferred Experience
- Worked on forecasting projects – both classical and ML models
- Experience training time series forecasting methods such as Moving Average (MA) and Autoregressive Integrated Moving Average (ARIMA), alongside neural network (NN) models such as feed-forward NN and nonlinear autoregressive networks (a minimal illustrative sketch follows this list)
- Strong background in forecasting accuracy drivers
- Experience in Advanced Analytics techniques such as regression, classification, and clustering
- Ability to explain complex topics in simple terms, ability to explain use cases and tell stories
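As a purely illustrative aside for the classical-versus-neural comparison mentioned above, the sketch below fits an ARIMA model and a small feed-forward network (scikit-learn's MLPRegressor standing in for a neural net) on a synthetic series. The series, the (2, 1, 1) order, the 12-step horizon, and the lag window are assumptions for the example, not part of the role.

```python
# Illustrative sketch only: classical ARIMA vs. a feed-forward (MLP)
# autoregressive model on a synthetic monthly-like series.
# The series, model orders, lag window, and horizon are assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(200)
series = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 200)
train, test = series[:-12], series[-12:]

# Classical model: ARIMA(2, 1, 1) fitted on the training window.
arima_fc = ARIMA(train, order=(2, 1, 1)).fit().forecast(steps=12)

# Neural model: feed-forward net on lagged values (nonlinear autoregression).
lags = 12
X = np.array([train[i:i + lags] for i in range(len(train) - lags)])
y = train[lags:]
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)

# Recursive multi-step forecast: feed each prediction back in as a lag.
history = list(train[-lags:])
mlp_fc = []
for _ in range(12):
    pred = mlp.predict(np.array(history[-lags:]).reshape(1, -1))[0]
    mlp_fc.append(pred)
    history.append(pred)

print("ARIMA MAE:", np.mean(np.abs(arima_fc - test)))
print("MLP   MAE:", np.mean(np.abs(np.array(mlp_fc) - test)))
```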
Technical must-haves:
● Extensive exposure to at least one Business Intelligence platform (preferably QlikView/Qlik Sense); if not Qlik, then ETL tool knowledge, e.g. Informatica/Talend
● At least 1 Data Query language – SQL/Python
● Experience in creating breakthrough visualizations
● Understanding of RDBMS, Data Architecture/Schemas, Data Integrations, Data Models and Data Flows is a must
● A technical degree like BE/B.Tech is a must
Technical Ideal to have:
● Exposure to our tech stack – PHP
● Microsoft workflows knowledge
Behavioural Pen Portrait:
● Must Have: Enthusiastic, aggressive, vigorous, high achievement orientation, strong command over spoken and written English
● Ideal: Ability to Collaborate
The preferred location is Ahmedabad; however, if we find exemplary talent, we are open to a remote working model (can be discussed).
Requirements-
● B.Tech/Master's in Mathematics, Statistics, Computer Science, or another quantitative field
● 2-3+ years of work experience in the ML domain (2-5 years of overall experience)
● Hands-on coding experience in Python
● Experience in machine learning techniques such as Regression, Classification, Predictive Modeling, Clustering, the Deep Learning stack, and NLP
● Working knowledge of TensorFlow/PyTorch (see the minimal sketch below)
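For context on the TensorFlow/PyTorch requirement, here is a minimal PyTorch training-loop sketch on synthetic data; the toy model, labels, and hyperparameters are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: a minimal PyTorch training loop on synthetic data.
# The toy model, labels, and hyperparameters are assumptions for the example.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)
y = (X.sum(dim=1, keepdim=True) > 0).float()  # toy binary labels

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    accuracy = ((model(X) > 0).float() == y).float().mean().item()
print(f"final loss {loss.item():.4f}, train accuracy {accuracy:.2f}")
```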
Optional Add-ons-
● Experience with distributed computing frameworks: Map/Reduce, Hadoop, Spark etc.
● Experience with databases: MongoDB
Kwalee is one of the world’s leading multiplatform game publishers and developers, with well over 750 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Traffic Cop 3D and Makeover Studio 3D. Alongside this, we also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope and Die by the Blade.
With a team of talented people collaborating daily between our studios in Leamington Spa, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, the Philippines and many more places, we have a truly global team making games for a global audience. And it’s paying off: Kwalee games have been downloaded in every country on earth! If you think you’re a good fit for one of our remote vacancies, we want to hear from you wherever you are based.
Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters for many years, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts. Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle. Could your idea be the next global hit?
What’s the job?
As an Analyst (Research) in the Mobile Publishing division, you'll be using your previous experience in analysing market trends to pull usable insights from numerous sources and to find trends others might miss.
What you tell your friends you do
“I provide insights that help guide the direction of Kwalee’s mobile publishing team as they expand their operation”
What you will really be doing
- Use our internal and external data sources to generate insights
- Assess market trends and make recommendations to our publishing team on which opportunities to pursue and which to decline
- Evaluate market movements and use data to assess new opportunities
- Create frameworks to predict how successful new content can be and the metrics games are likely to achieve
- Evaluate business opportunities and conduct due diligence on potential business partners we are planning to work with
- Be an expert on industry data sets and how we can best use them
How you will be doing this
- You’ll be part of an agile, multidisciplinary and creative team and work closely with them to ensure the best results.
- You'll think creatively, be motivated by challenges, and constantly strive for the best.
- You'll work with cutting-edge technology; if you need software or hardware to get the job done efficiently, you will get it. We even have a robot!
Team
Our talented team is our signature. We have a highly creative atmosphere with more than 200 staff where you’ll have the opportunity to contribute daily to important decisions. You’ll work within an extremely experienced, passionate and diverse team, including David Darling and the creator of the Micro Machines video games.
Skills and Requirements
- Previous experience of working with big data sets, preferably in a gaming or tech environment
- An advanced degree in a related field
- A keen interest in video games and the market, particularly in the mobile space
- Familiarity with industry tools and data providers
- A can-do attitude and ability to move projects forward even when outcomes may not be clear
We offer
- We want everyone involved in our games to share our success; that's why we have a generous team profit-sharing scheme from day 1 of employment
- In addition to a competitive salary we also offer private medical cover and life assurance
- Creative Wednesdays! (Design and make your own games every Wednesday)
- 20 days of paid holidays plus bank holidays
- Hybrid model available depending on the department and the role
- Relocation support available
- Great work-life balance with flexible working hours
- Quarterly team building days - work hard, play hard!
- Monthly employee awards
- Free snacks, fruit and drinks
Our philosophy
We firmly believe in creativity and innovation and that a fundamental requirement for a successful and happy company is having the right mix of individuals. With the right people in the right environment anything and everything is possible.
Kwalee makes games to bring people, their stories, and their interests together. As an employer, we’re dedicated to making sure that everyone can thrive within our team by welcoming and supporting people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances. With the inclusion of diverse voices in our teams, we bring plenty to the table that’s fresh, fun and exciting; it makes for a better environment and helps us to create better games for everyone! This is how we move forward as a company – because these voices are the difference that make all the difference.
Senior Data Engineer
Data Analyst
Role Description:
- You will be part of the data delivery team and will have the opportunity to develop a deep understanding of the domain/function.
- You will design and drive the work plan for the optimization/automation and standardization of the processes incorporating best practices to achieve efficiency gains.
- You will run data engineering pipelines, link raw client data with data model, conduct data assessment, perform data quality checks, and transform data using ETL tools.
- You will perform data transformations, modeling, and validation activities, as well as configure applications to the client context. You will also develop scripts to validate, transform, and load raw data using programming languages such as Python and/or PySpark.
- In this role, you will determine database structural requirements by analyzing client operations, applications, and programming.
- You will develop cross-site relationships to enhance idea generation, and manage stakeholders.
- Lastly, you will collaborate with the team to support ongoing business processes by delivering high-quality end products on-time and perform quality checks wherever required.
Job Requirement:
- Bachelor’s degree in Engineering or Computer Science; Master’s degree is a plus
- 3+ years of professional work experience with a reputed analytics firm
- Expertise in handling large amounts of data through Python or PySpark (a minimal illustrative sketch follows this list)
- Ability to conduct data assessments, perform data quality checks, and transform data using SQL and ETL tools
- Experience deploying ETL/data pipelines and workflows in cloud technologies and architectures such as Azure and Amazon Web Services will be valued
- Comfort with data modelling principles (e.g. database structure, entity relationships, UID etc.) and software development principles (e.g. modularization, testing, refactoring, etc.)
- A thoughtful and comfortable communicator (verbal and written) with the ability to facilitate discussions and conduct training
- Strong problem-solving, requirements-gathering, and leadership skills
- Track record of completing projects successfully on time, within budget, and as per scope
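As a rough, hypothetical illustration of the PySpark-based validation and transformation work described above, the sketch below reads a raw file, runs a few basic quality checks, and writes a curated output. The file paths, the claim_id key, and the amount column are invented for the example.

```python
# Illustrative sketch only: basic PySpark data quality checks and a simple
# transform-and-load step. The paths, the claim_id key, and the amount
# column are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("client-data-quality").getOrCreate()
raw = spark.read.option("header", True).csv("raw/claims.csv")  # hypothetical input

# Quality checks: row count, per-column null rate, duplicate keys.
total = raw.count()
null_rates = {c: raw.filter(F.col(c).isNull()).count() / total for c in raw.columns}
dupes = raw.groupBy("claim_id").count().filter(F.col("count") > 1).count()

# Transform: de-duplicate, cast types, add a load date, write curated parquet.
clean = (
    raw.dropDuplicates(["claim_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("load_date", F.current_date())
)
clean.write.mode("overwrite").parquet("curated/claims/")  # hypothetical output

print(f"rows={total}, duplicate_keys={dupes}, null_rates={null_rates}")
```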
• Help build a Data Science team engaged in researching, designing, implementing, and deploying full-stack, scalable data analytics and machine learning solutions to address various business issues.
• Model complex algorithms, discover insights, and identify business opportunities through the use of algorithmic, statistical, visualization, and mining techniques.
• Translate business requirements into quick prototypes and enable the development of big data capabilities driving business outcomes.
• Responsible for data governance and defining data collection and collation guidelines.
• Must be able to advise, guide, and train junior data engineers in their jobs.
Must Have:
• 4+ years of experience in a leadership role as a Data Scientist
• Preferably from the retail, manufacturing, or healthcare industry (not mandatory)
• Willing to start from scratch and build up a team of Data Scientists
• Open to taking up challenges with end-to-end ownership
• Confident, with excellent communication skills, and a good decision maker
- ORI is an end-to-end provider of AI-powered conversational tools that help enterprises simplify their customer experience, improve conversions, and get better ROI on their marketing spend. ORI is focused on automating the customer journey through its AI-powered self-service SaaS platform, built by applying design thinking principles and Machine Learning.
- ORI's cognitive solutions provide non-intrusive customer experience for Sales, Marketing, Support & Engagement across IoT devices, sensors, web, app, social media & messaging platforms as well as AR and VR platforms.
- Founded in 2017, we've changed the way AI conversational tools are built and trained, providing a revolutionary experience. Clients who have bet on us include Tata Motors, Dishtv, Vodafone, Idea, Lenkart.com, Royal Enfield, IKEA, and many more.
- At ORI, you’ll be a part of an environment that’s fast-paced, nurturing, collaborative, and challenging. We believe in 100% ownership & flexibility of how & where you work. You’ll be given complete freedom to get your creative juices flowing and implement your ideas to deliver solutions that bring about revolutionary change. We are a team that believes in working smarter and partying hard and are looking for A-players to hop on-board a rocket-ship that’s locked, loaded & ready to blast off!
Job Profile:-
We are looking for applicants who have a demonstrated research background in AI, Deep Learning, and NLP, a passion for independent research and technical problem-solving, and a proven ability to develop and implement ideas from research. The candidate will collaborate with researchers and engineers of multiple disciplines within Ori, in particular with researchers in data science and development teams, to develop advanced NLP and AI solutions, and will work with massive amounts of data collected from various sources.
Key Attributes you need to possess:-
- Communication Skills - Strong written and verbal communication is a must-have. You will be required to explain advanced statistical content to clients and relevant stakeholders, so you must be able to translate and tailor this technical content into business-applicable material with clear recommendations and insights relevant to the audience at hand.
- Technological Savvy/Analytical Skills - Must be technologically adept, with exceptionally good computer skills, a passion for research, statistics, and data analysis, and a demonstrated ability to design and implement successful data analysis solutions within a business.
- Business Understanding - Someone who can understand the business's needs and develop analytics that meet those objectives, whether through enhanced customer engagement, automation that optimizes cost, or business process optimization that saves time and labor. Real value comes from delivering results that match the actual business need.
- Innovation - Someone who is always looking for the next big thing that will distinguish the offering from others already in the market, and who can differentiate great analytics from not-so-great analytics.
What a typical work week looks like:-
1. Work with product/business owners to map business requirements into products / productized solutions and/or working prototypes of NLP & ML algorithms.
2. Evaluate and compare algorithm performance based on large, real-world data sets.
3. Mine massive amounts of data from various sources to gain insights and identify patterns using machine learning techniques and complex network analysis methods.
4. Design and implement ML algorithms and models through in-depth research and experiment with neural network models, parameter optimization, and optimization algorithms.
5. Work to accelerate the distributed implementation of existing algorithms and models.
6. Conduct research to advance the state of the art in deep learning and provide technical solutions at scale for real world challenges in various scenarios.
7. Establish scalable, efficient, automated processes for model development, model validation, model implementation and large scale data analysis.
8. Optimize pre-existing algorithms for accuracy and speed.
Our ideal candidate should have:-
- Ph.D. / Master's degree / B.Tech / B.E. from an accredited college/university in Computer Science, Statistics, Mathematics, Engineering, or related fields (strong mathematical/statistics background with the ability to understand algorithms and methods from a mathematical and intuitive viewpoint)
- 4+ years of professional experience in Artificial Intelligence, Machine Learning, Deep Learning, Natural Language Processing/Text mining or related fields.
- Technical ability and hands on expertise in Python, R, XML parsing, Big Data, NoSQL and SQL
- Preference for candidates with prior experience in deep learning and NLP tools and techniques such as Keras, TensorFlow, BERT, Transformers, LSTM, Python, topic modeling, text classification, NER, SVM, KNN, reinforcement learning, summarisation, etc. (a small text-classification sketch follows this list)
- Self-starter and able to manage multiple research projects with a flexible approach and ability to develop new skills.
- Strong knowledge/experience of data extraction and data processing in a distributed cloud environment.
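To ground the text-classification and SVM items above, here is a minimal, hypothetical sketch using scikit-learn's TF-IDF vectorizer with a linear SVM; the example utterances, labels, and intent names are made up for illustration.

```python
# Illustrative sketch only: TF-IDF + linear SVM text classification.
# The utterances, labels, and intent names are made up for the example.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = [
    "I want to check my order status",
    "How do I reset my password?",
    "Cancel my subscription please",
    "Where is my delivery?",
]
labels = ["order", "account", "account", "order"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)

print(clf.predict(["where is my order"]))  # likely ['order'] given the token overlap
```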
What you can expect from ORI:-
- Passion & happiness in the workplace with great people & open culture with amazing growth opportunities.
- An ecosystem where leadership is fostered, building an environment where everyone is free to take the actions needed to learn from real experiences.
- Chance to work on the cutting edge of technology.
- Freedom to pursue your ideas and tinker with multiple technologies, which a techie would definitely enjoy!
If you have outstanding programming skills and a great passion for developing beautiful, innovative applications, then you will love this job!!
ABOUT EPISOURCE:
Episource has devoted more than a decade to building solutions for risk adjustment and measuring healthcare outcomes. As one of the leading companies in healthcare, we have helped numerous clients optimize their medical records, data, and analytics to enable better documentation of care for patients with chronic diseases.
The backbone of our consistent success has been our obsession with data and technology. At Episource, all of our strategic initiatives start with the question - how can data be “deployed”? Our analytics platforms and datalakes ingest huge quantities of data daily, to help our clients deliver services. We have also built our own machine learning and NLP platform to infuse added productivity and efficiency into our workflow. Combined, these build a foundation of tools and practices used by quantitative staff across the company.
What’s our poison you ask? We work with most of the popular frameworks and technologies like Spark, Airflow, Ansible, Terraform, Docker, ELK. For machine learning and NLP, we are big fans of keras, spacy, scikit-learn, pandas and numpy. AWS and serverless platforms help us stitch these together to stay ahead of the curve.
ABOUT THE ROLE:
We’re looking to hire someone to help scale Machine Learning and NLP efforts at Episource. You’ll work with the team that develops the models powering Episource’s product focused on NLP driven medical coding. Some of the problems include improving our ICD code recommendations, clinical named entity recognition, improving patient health, clinical suspecting and information extraction from clinical notes.
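Since clinical named entity recognition is one of the problems mentioned and spaCy is part of the stack described above, here is a minimal NER sketch for orientation only; the en_core_web_sm model and the example sentence are assumptions, and a production clinical pipeline would rely on a domain-specific model rather than this general-purpose one.

```python
# Illustrative sketch only: general-purpose spaCy NER. A real clinical
# pipeline would use a domain-specific model, not en_core_web_sm.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm
doc = nlp("The patient was prescribed 20 mg of lisinopril on March 3rd.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. QUANTITY and DATE spans from the general model
```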
This is a role for highly technical data engineers who combine outstanding oral and written communication skills with the ability to code up prototypes and productionalize them using a large range of tools, algorithms, and languages. Most importantly, they need to be able to autonomously plan and organize their work assignments based on high-level team goals.
You will be responsible for setting an agenda to develop and ship data-driven architectures that positively impact the business, working with partners across the company including operations and engineering. You will use research results to shape strategy for the company and help build a foundation of tools and practices used by quantitative staff across the company.
During the course of a typical day with our team, expect to work on one or more projects around the following:
1. Creating and maintaining optimal data pipeline architectures for ML
2. Developing a strong API ecosystem for ML pipelines
3. Building CI/CD pipelines for ML deployments using GitHub Actions, Travis, Terraform, and Ansible
4. Designing and developing distributed, high-volume, high-velocity, multi-threaded event processing systems
5. Applying software engineering best practices across the development lifecycle: coding standards, code reviews, source management, build processes, testing, and operations
6. Deploying data pipelines in production using Infrastructure-as-Code platforms
7. Designing scalable implementations of the models developed by our Data Science teams
8. Big data and distributed ML with PySpark on AWS EMR, and more! (A minimal sketch follows this list.)
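As a rough sketch of the distributed ML work item 8 refers to, the snippet below trains a logistic regression with PySpark MLlib on a tiny in-memory DataFrame; the feature names, values, and labels are invented, and on AWS EMR the same code would simply run against a cluster-backed SparkSession.

```python
# Illustrative sketch only: distributed training with PySpark MLlib.
# The feature names, values, and labels are invented for the example.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("distributed-ml-sketch").getOrCreate()

df = spark.createDataFrame(
    [(0.0, 1.2, 3.4, 0.0), (1.0, 0.3, 2.1, 1.0), (0.5, 2.2, 1.0, 0.0), (1.5, 0.1, 4.2, 1.0)],
    ["f1", "f2", "f3", "label"],
)

# Assemble feature columns into a single vector column, then fit the model;
# on EMR the same code runs against the cluster-backed SparkSession.
assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
features = assembler.transform(df)
model = LogisticRegression(featuresCol="features", labelCol="label").fit(features)

model.transform(features).select("label", "prediction").show()
```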
BASIC REQUIREMENTS
- Bachelor's degree or greater in Computer Science, IT, or related fields
- Minimum of 5 years of experience in cloud, DevOps, MLOps & data projects
- Strong experience with bash scripting, Unix environments, and building scalable/distributed systems
- Experience with automation/configuration management using Ansible, Terraform, or equivalent
- Very strong experience with AWS and Python
- Experience building CI/CD systems
- Experience with containerization technologies like Docker, Kubernetes, ECS, EKS, or equivalent
- Ability to build and manage application and performance monitoring processes