Roles and Responsibilities
- Managing available resources such as hardware, data, and personnel so that deadlines are met.
- Analyzing the ML and Deep Learning algorithms that could be used to solve a given problem and ranking them by their success probabilities
- Exploring data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world
- Defining a validation framework and establishing a process to ensure acceptable data quality criteria are met
- Supervising the data acquisition and partnership roadmaps to create stronger products for our customers.
- Defining the feature engineering process to ensure use of meaningful features given business constraints, which may vary by market
- Devising self-learning strategies through analysis of model errors
- Understand business issues and context, devise a framework for solving unstructured problems and articulate clear and actionable solutions underpinned by analytics.
- Manage multiple projects simultaneously while demonstrating business leadership to collaborate & coordinate with different functions to deliver the solutions in a timely, efficient and effective manner.
- Manage project resources optimally to deliver projects on time; drive innovation using residual resources to create a strong solution pipeline; provide direction, coaching, training and feedback to project team members to enhance performance, support development and encourage value-aligned behaviour; provide inputs for periodic performance appraisals of project team members.
Preferred Technical & Professional expertise
- Undergraduate degree in Computer Science / Engineering / Mathematics / Statistics / Economics or other quantitative fields
- 2+ years of experience managing Data Science projects, with a specialization in Machine Learning
- In-depth knowledge of cloud analytics tools.
- Able to drive Python code optimization; ability to review code and provide inputs to improve code quality
- Ability to evaluate hardware selection for running ML models for optimal performance
- Up to date with Python libraries and versions for machine learning; extensive hands-on experience with regressors; experience working with data pipelines.
- Deep knowledge of math, probability, statistics and algorithms; Working knowledge of Supervised Learning, Adversarial Learning and Unsupervised learning
- Deep analytical thinking with excellent problem-solving abilities
- Strong verbal and written communication skills with a proven ability to work with all levels of management; effective interpersonal and influencing skills.
- Ability to manage a project team through effective allocation of tasks, anticipating risks and setting realistic timelines to manage the expectations of key stakeholders
- Strong organizational skills and the ability to balance multiple concurrent tasks and issues.
- Ensure that the project team understands and abides by the compliance framework for policies, data, systems, etc. as per group, regional and local standards
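To make the "regressors" expectation above concrete, here is a minimal illustrative sketch (not part of the role description) of an ordinary-least-squares regressor in plain NumPy; the function name and toy data are invented:

```python
import numpy as np

def fit_linear_regressor(X, y):
    """Fit y ~ X @ w + b by ordinary least squares."""
    # Append a bias column of ones so the intercept is learned jointly.
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return coef[:-1], coef[-1]  # weights, intercept

# Toy data generated from y = 3x + 1 with no noise, so the fit is exact.
X = np.arange(10, dtype=float).reshape(-1, 1)
y = 3.0 * X[:, 0] + 1.0
w, b = fit_linear_regressor(X, y)
```

In practice a production pipeline would wrap this kind of estimator with feature engineering and validation steps, but the underlying fit is this simple.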
Job Description
Job Responsibilities
- Design and implement robust database solutions, including:
  - Security, backup and recovery
  - Performance, scalability, monitoring and tuning
  - Data management and capacity planning
  - Planning and implementing failover between database instances
- Create data architecture strategies for each subject area of the enterprise data model.
- Communicate plans, status and issues to higher management levels.
- Collaborate with the business, architects and other IT organizations to plan a data strategy, sharing important information related to database concerns and constraints.
- Produce all project data architecture deliverables.
- Create and maintain a corporate repository of all data architecture artifacts.
Skills Required:
- Understanding of data analysis, business principles, and operations
- Software architecture and design; network design and implementation
- Data visualization, data migration and data modelling
- Relational database management systems
- DBMS software, including SQL Server
- Database and cloud computing design, architectures and data lakes
- Information management and data processing on multiple platforms
- Agile methodologies and enterprise resource planning implementation
- Demonstrated database technical functionality, such as performance tuning, backup and recovery, and monitoring
- Excellent skills with advanced features such as database encryption, replication and partitioning
- Strong problem-solving, organizational and communication skills
- Job Title - Backend Software Engineer - Data Team
- Reports Into - Senior Data Science Core Developer
- Location - Hybrid / Leamington Spa
- Application Close Date - 30/03/2023
A Little Bit about Kwalee….
Kwalee is one of the world’s leading multiplatform game developers and publishers, with well over 900 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Airport Security and Makeover Studio 3D. We also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope, Die by the Blade and Scathe.
What’s In It For You?
- Hybrid working - 3 days in the office, 2 days remote/ WFH is the norm
- Flexible working hours - we trust you to choose how and when you work best
- Profit sharing scheme - we win, you win
- Private medical cover - delivered through BUPA
- Life Assurance - for long term peace of mind
- On site gym - take care of yourself
- Relocation support - available
- Quarterly Team Building days - we’ve done Paintballing, Go Karting & even Robot Wars
- Pitch and make your own games on Creative Wednesdays!
Are You Up To The Challenge?
As a Software Engineer in the Data Science Team, you will build tools and develop technology that deliver data products to a team of strategists, marketing experts and game developers. You will design, build, test and deploy products which serve a company fuelled by data.
Your Team Mates
The Data Science team is central in developing the technology behind the growth and monetisation of our games. We are a cross functional team that consists of analysts, engineers and data scientists, and work closely with the larger engineering team to deliver products spanning our modern, cloud first, tech stack.
What Does The Job Actually Involve?
- Develop the essential components of Kwalee data products, including automated bidding software and our gameplay optimisation engine
- Work at the core of a talented and growing Data and Engineering department
- Unlock value by collaborating with marketing and development teams across the company
- Manipulate vast databases of player interaction and performance marketing data to power new products
Your Hard Skills
- A proven track record of writing high quality program code in Python
- Experience dealing with data pipelines and ETL processes
- Desire to learn new technology and solve hard problems
- Ability to work in a fast paced environment
- An avid interest in the development, marketing and monetisation of mobile games
Your Soft Skills
Kwalee has grown fast in recent years but we’re very much a family of colleagues. We welcome people of all genders, races and backgrounds - and all we ask is that you collaborate, work hard, ask questions and have fun with your team and colleagues. We don’t like egos or arrogance and we love playing games and celebrating success together.
If that sounds like you, then please apply.
A Little More About Kwalee
Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts.
Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle.
We have an amazing team of experts collaborating daily between our studios in Leamington Spa, Lisbon, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, Cyprus, the Philippines and many more places around the world. We’ve recently acquired our first external studio, TicTales, which is based in France.
We have a truly global team making games for a global audience, and it’s paying off: - Kwalee has been voted the Best Large Studio and Best Leadership Team at the TIGA Awards (Independent Game Developers’ Association) and our games have been downloaded in every country on earth - including Antarctica!
Desired Skills & Mindset:
We are looking for candidates who have demonstrated both a strong business sense and deep understanding of the quantitative foundations of modelling.
• Excellent analytical and problem-solving skills, including the ability to disaggregate issues, identify root causes and recommend solutions
• Experience with statistical programming software such as SPSS, and comfort working with large data sets.
• R, Python, SAS & SQL are preferred but not mandatory
• Excellent time management skills
• Good written and verbal communication skills; understanding of both written and spoken English
• Strong interpersonal skills
• Ability to act autonomously, bringing structure and organization to work
• Creative and action-oriented mindset
• Ability to interact in a fluid, demanding and unstructured environment where priorities evolve constantly, and methodologies are regularly challenged
• Ability to work under pressure and deliver on tight deadlines
Qualifications and Experience:
• Graduate degree in Statistics/Economics/Econometrics/Computer Science/Engineering/Mathematics/MBA (with a strong quantitative background) or equivalent
• Strong track record of work experience in the field of business intelligence, market research, and/or advanced analytics
• Knowledge of data collection methods (focus groups, surveys, etc.)
• Knowledge of statistical packages (SPSS, SAS, R, Python, or similar), databases, and MS Office (Excel, PowerPoint, Word)
• Strong analytical and critical thinking skills
• Industry experience in Consumer Experience/Healthcare a plus, with proven ability to process large datasets and create new data models.
This is an exciting opportunity to join the explosively growing Blockchain fraternity
and work constructively with our in-house industry experts.
We expect you to have hands-on working experience with:
- Machine Learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, etc.
- The Python data science toolkit, with Scikit-Learn and Keras/PyTorch
- Prowess with both SQL and NoSQL databases
- Good applied statistics skills, such as distributions, statistical testing, regression, etc.
- Excellent scripting and programming skills
- Fundamental principles of CCAR, CECL, the Basel II Accord or IFRS
- Docker and other PaaS (Platform-as-a-Service) products
- MUST specialize in Natural Language Processing
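As an illustration of the first bullet above, a k-NN classifier can be sketched in a few lines of standard-library Python; the toy points and labels here are invented:

```python
from collections import Counter
import math

def knn_predict(train, labels, point, k=3):
    """Classify `point` by majority vote among its k nearest training points."""
    # Sort all training points by Euclidean distance to the query point.
    dists = sorted((math.dist(x, point), lbl) for x, lbl in zip(train, labels))
    # Vote among the k closest.
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]

# Two tight clusters: class "a" near the origin, class "b" near (5, 5).
train = [(0, 0), (0, 1), (5, 5), (6, 5)]
labels = ["a", "a", "b", "b"]
pred = knn_predict(train, labels, (5.5, 5.0), k=3)
```

A production version would add distance weighting and an efficient index (e.g. a k-d tree), but the voting logic is the same.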
Key Roles & Responsibilities
- Develop predictive models, statistical analyses, optimization procedures, monitoring processes, data quality analyses, and score implementations supporting impairment and collections.
- Produce robust documentation to ensure replicability of results and fulfill Antier's governance requirements.
- Superlative communication skills to represent us before stakeholders
- Leadership skills to build, expand and drive a new process
Academic & other Qualifications
- Postgraduate university degree in a quantitative discipline required (e.g. Statistics, Operations Research, Economics, Computer Science).
- An ability to produce reports and interrogate systems to produce analysis and resolve discrepancies/queries.
- Proficiency with analytical software (Python, SAS or R) and SQL tools.
- Prior experience with banking products such as Mutual Funds will add value
Roles and Responsibilities
We are seeking an AWS Cloud Engineer / Data Warehouse Developer for our Data CoE team to help us configure and develop new AWS environments for our Enterprise Data Lake and migrate on-premise traditional workloads to the cloud. You must have a sound understanding of BI best practices, relational structures, dimensional data modelling, structured query language (SQL) skills, data warehouse and reporting techniques.
- Extensive experience in providing AWS Cloud solutions to various business use cases
- Creating star schema data models, performing ETL and validating results with business representatives
- Supporting implemented BI solutions by monitoring and tuning queries and data loads, addressing user questions concerning data integrity, monitoring performance, and communicating functional and technical issues
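As a small illustration of the star-schema work mentioned above, the following sketch builds a one-fact, one-dimension schema in SQLite (table names, columns and rows are invented for the example):

```python
import sqlite3

# A minimal star schema: one fact table keyed to one dimension table.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES dim_product(product_id),
        amount REAL
    );
    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO fact_sales VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

# Typical star-schema query: aggregate facts grouped by a dimension attribute.
rows = con.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.name ORDER BY p.name
""").fetchall()
```

A real warehouse would have many dimensions (date, customer, geography) around each fact table, but the join-and-aggregate pattern is the same.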
Job Description
This position is responsible for the successful delivery of business intelligence information to the entire organization, and requires experience in BI development and implementations, data architecture and data warehousing.
Requisite Qualification
Essential
- AWS Certified Database Specialty or AWS Certified Data Analytics
Preferred
Any other Data Engineer Certification
Requisite Experience
Essential: 4-7 yrs of experience
Preferred: 2+ yrs of experience in ETL & data pipelines
Skills Required
- AWS: S3, DMS, Redshift, EC2, VPC, Lambda, Delta Lake, CloudWatch, etc.
- Big data: Databricks, Spark, Glue and Athena
- Expertise in Lake Formation, Python programming, Spark and shell scripting
- Minimum Bachelor's degree with 5+ years of experience in designing, building, and maintaining AWS data components
- 3+ years of experience in data component configuration, related roles and access setup
- Knowledge of all aspects of DevOps (source control, continuous integration, deployments, etc.)
- Comfortable working with DevOps tooling: Jenkins, Bitbucket, CI/CD
- Hands-on ETL development experience, preferably using SSIS
- SQL Server experience required
- Strong analytical skills to solve and model complex business requirements
- Sound understanding of BI best practices/methodologies, relational structures, dimensional data modelling, structured query language (SQL) skills, data warehouse and reporting techniques
Preferred Skills
- Experience working in a SCRUM environment
- Experience in Administration (Windows/Unix/Network) a plus
- Experience in SQL Server, SSIS, SSAS, SSRS
- Comfortable creating data models and visualizations using Power BI
- Hands-on experience in relational and multi-dimensional data modelling, including multiple source systems from databases and flat files, and the use of standard data modelling tools
- Ability to collaborate on a team with infrastructure, BI report development and business analyst resources, and clearly communicate solutions to both technical and non-technical team members
Do you want to help build real technology for a meaningful purpose? Do you want to contribute to making the world more sustainable and advanced, and achieve extraordinary precision in analytics?
What is your role?
As a Computer Vision & Machine Learning Engineer at Datasee.AI, you’ll be core to the development of our robotic harvesting system’s visual intelligence. You’ll bring deep computer vision, machine learning, and software expertise while also thriving in a fast-paced, flexible, and energized startup environment. As an early team member, you’ll directly build our success, growth, and culture. You’ll hold a significant role and are excited to grow your role as Datasee.AI grows.
What you’ll do
- You will be working with the core R&D team which drives the computer vision and image processing development.
- Build deep learning models for object detection on large-scale images.
- Design and implement real-time algorithms for object detection, classification, tracking, and segmentation
- Coordinate and communicate within computer vision, software, and hardware teams to design and execute commercial engineering solutions.
- Automate the workflow process between the fast-paced data delivery systems.
What we are looking for
- 1 to 3+ years of professional experience in computer vision and machine learning.
- Extensive use of Python
- Experience with Python libraries such as OpenCV, TensorFlow and NumPy
- Familiarity with a deep learning library such as Keras or PyTorch
- Experience with different CNN architectures such as FCN, R-CNN, Fast R-CNN and YOLO
- Experienced in hyperparameter tuning, data augmentation, data wrangling, model optimization and model deployment
- B.E./M.E/M.Sc. Computer Science/Engineering or relevant degree
- Dockerization, AWS modules and production-level modelling
- Basic knowledge of the fundamentals of GIS would be an added advantage
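The CNN architectures listed above all build on 2D convolution; here is a minimal NumPy sketch of that core operation (the toy image and edge kernel are invented for illustration):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output pixel is the sum of an elementwise product
            # between the kernel and the patch it covers.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel applied to a half-dark, half-bright image:
# the response peaks exactly at the dark-to-bright boundary.
img = np.array([[0, 0, 1, 1]] * 4, dtype=float)
edge = np.array([[-1.0, 1.0]])
resp = conv2d(img, edge)
```

Frameworks like TensorFlow and PyTorch implement the same operation with learned kernels, batching and GPU acceleration.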
Preferred Requirements
- Experience with Qt, Desktop application development, Desktop Automation
- Knowledge of satellite image processing, Geographic Information Systems, GDAL, QGIS and ArcGIS
About Datasee.AI:
Datasee.AI, Inc. is an AI-driven image analytics company offering asset management solutions for industries in the Renewable Energy, Infrastructure, Utilities & Agriculture sectors. With core expertise in image processing, Computer Vision & Machine Learning, Datasee.AI's solution provides value across the enterprise for all stakeholders through a data-driven approach.
With Sales & Operations based out of US, Europe & India, Datasee.AI is a team of 32 people located across different geographies and with varied domain expertise and interests.
A focused and happy bunch of people who take tasks head-on and build scalable platforms and products.
About You
As a Senior Data Engineer on the data team, you will be responsible for running the data systems and services that monitor and report on the end-to-end data infrastructure. We rely heavily on Snowflake, Airflow, Fivetran and Looker for our business intelligence, and embrace AWS as a key partner across our engineering teams. You will report directly to the Head of Data Engineering and work closely with our ML Engineering and Data Science teams.
Some of the things you’ll be doing:
- Integration of additional data sources into our Snowflake Data Warehouse using Fivetran or custom code
- Building infrastructure that helps our analysts to move faster, such as adding tests to our CI/CD systems
- Designing, developing, and implementing scalable, automated processes for data extraction, processing, and analysis
- Maintaining an accurate log of the technical documentation for the warehouse
- Troubleshooting and resolving technical issues as they arise
- Ensuring all servers and applications are patched and upgraded in a timely manner
- Looking for ways of improving both what and how services are delivered by the department
- Building data loading services for the purpose of importing data from numerous, disparate data sources, inclusive of APIs, logs, relational, and non-relational databases
- Working with the BI Developer to ensure that all data feeds are optimised and available at the required times. This can include Change Data Capture (CDC) and other “delta loading” approaches
- Discovering, transforming, testing, deploying and documenting data sources
- Applying, helping define, and championing data warehouse governance: data quality, testing, coding best practices, and peer review
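As one illustrative sketch of the data-quality and testing work described above (the rules and rows are invented, not actual production checks), a CI pipeline might run assertions like these over incoming batches:

```python
# Toy batch of rows to validate; a real pipeline would pull these
# from a warehouse table or staging file.
rows = [
    {"id": 1, "email": "a@example.com", "amount": 10.0},
    {"id": 2, "email": "b@example.com", "amount": 0.5},
]

def check_data_quality(rows):
    """Return a list of human-readable violations found in `rows`."""
    errors = []
    seen_ids = set()
    for row in rows:
        if row["id"] in seen_ids:                      # uniqueness rule
            errors.append(f"duplicate id {row['id']}")
        seen_ids.add(row["id"])
        if "@" not in row["email"]:                    # format rule
            errors.append(f"bad email in row {row['id']}")
        if row["amount"] <= 0:                         # range rule
            errors.append(f"non-positive amount in row {row['id']}")
    return errors

errors = check_data_quality(rows)
```

Tools such as dbt tests encode the same uniqueness, not-null and accepted-range rules declaratively, but the underlying checks look like this.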
What you’ll get in return:
- Competitive Salary
- Family & Self Health Insurance
- Life & Accidental Insurance
- 25 days annual leave
- We invest in your development with a professional L&D budget (a fixed amount of 40,000 per year)
- Flexible working options
- Share Options
You’ll be a great fit if:
- You have 4+ years of experience in Data Engineering
- You have extensive experience building ELT pipelines (Snowflake an added advantage)
- You have experience in building data solutions, both batch processes and streaming applications
- You have extensive experience in designing, architecting and implementing best Data Engineering practices
- You have good experience in Data Modelling
- You have extensive experience in writing SQL statements and performance tuning them
- You have experience in data mining, data warehouse solutions, and ETL, and using databases in a business environment with large-scale, complex datasets
- You have experience architecting analytical databases
- You have experience working in a data engineering or data warehousing team
- You have high development standards, especially for code quality, code reviews, unit testing, continuous integration and deployment
- You have strong technical documentation skills and the ability to be clear and precise with business users
- You have business-level English and good communication skills
- You have knowledge of various systems across the AWS platform and the role they play e.g. Lambda, DynamoDB, CloudFormation, Glue
- You have experience with Git and Docker
- You have experience with Snowflake, dbt, Apache Airflow, Python, Fivetran, AWS, Git and Looker
Who are Tide?
We’re the UK’s leading provider of smart current accounts for sole traders and small companies. We’re also on a mission to save business owners time and money on their banking and finance admin so they can get back to doing what they love - for too long, these customers have been under-served by the big banks.
Our offices are in London, UK, Sofia, Bulgaria and Hyderabad, India, where our teams are dedicated to our small business members, revolutionising business banking for SMEs. We are also the leading provider of UK SME business accounts and one of the fastest-growing fintechs in the UK.
We’re scaling at speed with a focus on hiring talented individuals with a growth mindset and ownership mentality, who are able to juggle multiple and sometimes changing priorities. Our values show our commitment to working as one team, working collaboratively to take action and deliver results. Member first, we are passionate about our members and put them first. We are data-driven, we make decisions, creating insight using data.
We’re also one of LinkedIn’s top 10 hottest UK companies to work for.
Here’s what we think about diversity and inclusion…
We build our services for all types of small business owners. We aim to be as diverse as our members so we hire people from a variety of backgrounds. We’re proud that our diversity not only reflects our multicultural society but that this breadth of experience makes us awesome at solving problems. Everyone here has a voice and you’ll be able to make a difference. If you share our values and want to help small businesses, you’ll make an amazing Tidean.
A note on the future of work at Tide:
Tide’s offices are beginning to open for Tideans to return on a voluntary basis. Timelines for reopening will be unique for each region and will be based on region-specific guidelines. The health and well-being of Tideans and candidates is our primary concern, therefore, for the foreseeable future, we have transitioned all interviews and onboarding to be conducted via Zoom.
Once offices are fully open, Tideans will be able to choose to work from the office or remotely, with the requirement that they visit the office or participate in face-to-face team activities several times per month.
DataWeave provides Retailers and Brands with “Competitive Intelligence as a Service” that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.
Data Science @ DataWeave
We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org, applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.
How do we work?
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web. At serious scale!
What do we offer?
- Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
- Ability to see the impact of your work and the value you're adding to our customers almost immediately.
- Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
- A culture of openness. Fun work environment. A flat hierarchy. Organization wide visibility. Flexible working hours.
- Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
- Last but not the least, competitive salary packages and fast paced growth opportunities.
Who are we looking for?
The ideal candidate is a strong software developer or a researcher with experience building and shipping production grade data science applications at scale. Such a candidate has keen interest in liaising with the business and product teams to understand a business problem, and translate that into a data science problem. You are also expected to develop capabilities that open up new business productization opportunities.
We are looking for someone with 6+ years of relevant experience working on problems in NLP or Computer Vision with a Master's degree (PhD preferred).
Key problem areas
- Preprocessing of, and feature extraction from, noisy and unstructured data -- both text and images.
- Keyphrase extraction, sequence labeling, entity relationship mining from texts in different domains.
- Document clustering, attribute tagging, data normalization, classification, summarization, sentiment analysis.
- Image based clustering and classification, segmentation, object detection, extracting text from images, generative models, recommender systems.
- Ensemble approaches for all the above problems using multiple text and image based techniques.
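Many of the clustering and attribute-tagging problems above start from a document-similarity measure; here is a minimal bag-of-words cosine similarity in standard-library Python (the toy product titles are invented):

```python
import math
from collections import Counter

def cosine_sim(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)   # Counter returns 0 for missing terms
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy product titles of the kind found in messy public web data.
docs = ["red shirt cotton", "cotton red shirt slim", "steel water bottle"]
bags = [Counter(d.split()) for d in docs]
sim01 = cosine_sim(bags[0], bags[1])  # near-duplicate listings
sim02 = cosine_sim(bags[0], bags[2])  # unrelated products
```

Production systems replace raw counts with TF-IDF weights or learned embeddings, but the same similarity drives clustering and deduplication.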
Relevant set of skills
- Have a strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms and complexity.
- Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
- Excellent coding skills on multiple programming languages with experience building production grade systems. Prior experience with Python is a bonus.
- Experience building and shipping machine learning models that solve real world engineering problems. Prior experience with deep learning is a bonus.
- Experience building robust clustering and classification models on unstructured data (text, images, etc). Experience working with Retail domain data is a bonus.
- Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
- Experience working with a variety of tools and libraries for machine learning and visualization, including numpy, matplotlib, scikit-learn, Keras, PyTorch, Tensorflow.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Be a self-starter—someone who thrives in fast paced environments with minimal ‘management’.
- It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show off some of your projects you have hosted on GitHub.
Role and responsibilities
- Understand the business problems we are solving. Build data science capability that align with our product strategy.
- Conduct research. Do experiments. Quickly build throw away prototypes to solve problems pertaining to the Retail domain.
- Build robust clustering and classification models in an iterative manner that can be used in production.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Take end to end ownership of the projects you are working on. Work with minimal supervision.
- Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
- Take initiatives to build new capabilities. Develop business awareness. Explore productization opportunities.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
- Stay on top of latest research in deep learning, NLP, Computer Vision, and other relevant areas.