
Role : Sr Data Scientist / Tech Lead – Data Science
Number of positions : 8
Responsibilities
- Lead a team of data scientists, machine learning engineers and big data specialists
- Be the main point of contact for the customers
- Lead data mining and collection procedures
- Ensure data quality and integrity
- Interpret and analyze data problems
- Conceive, plan and prioritize data projects
- Build analytic systems and predictive models
- Test performance of data-driven products
- Visualize data and create reports
- Experiment with new models and techniques
- Align data projects with organizational goals
Requirements (please read carefully)
- Very strong in statistics fundamentals. Not all data is Big Data. The candidate should be able to derive statistical insights from very few data points if required, using traditional statistical methods.
- Education – no bar, but preferably from a Statistics academic background (e.g. MSc Statistics, PhD Statistics, MSc Econometrics), given the first point
- Strong expertise in Python. Other statistical languages/tools (R, SAS, SPSS, etc.) are optional, but Python is absolutely essential; a candidate who is very strong in Python but has almost no knowledge of the other statistical tools is still a good fit for this role.
- Proven experience (about 7-8 years) as a Data Scientist or in a similar role
- Solid understanding of machine learning and AI concepts, especially with respect to choosing apt candidate algorithms for a use case and evaluating models.
- Good expertise in writing SQL queries (should not be dependent upon anyone else for pulling in data, joining tables, data wrangling, etc.)
- Knowledge of data management and visualization techniques, more from a Data Science perspective.
- Should be able to grasp business problems, ask the right questions to understand the problem breadth-wise and depth-wise, design apt solutions, and explain them to business stakeholders.
- The last point above is extremely important: the candidate should be able to identify solutions that can be explained to stakeholders, and present them in simple, direct language.
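To make the first requirement concrete, here is a minimal sketch (standard library only, measurements invented) of deriving a statistical insight from just five data points: a classical one-sample t confidence interval, with no Big Data tooling involved.

```python
# Five observations, e.g. QC measurements -- values invented for illustration.
import math
import statistics

samples = [4.8, 5.1, 4.9, 5.3, 5.0]

n = len(samples)
mean = statistics.mean(samples)
sd = statistics.stdev(samples)      # sample standard deviation (n - 1 denominator)
se = sd / math.sqrt(n)              # standard error of the mean

# 95% t critical value for 4 degrees of freedom, from standard tables.
t_crit = 2.776
ci = (mean - t_crit * se, mean + t_crit * se)
print(f"mean={mean:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
```

With traditional small-sample methods, five points already support an interval estimate; this is the kind of reasoning the requirement asks for.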
http://www.altimetrik.com
https://www.youtube.com/watch?v=3nUs4YxppNE&feature=emb_rel_end
https://www.youtube.com/watch?v=e40r6kJdC8c

About Product Engineering MNC (FinTech Domain)
We are looking for a technically driven "MLOps Engineer" for one of our premium clients
Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., building APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps toolchain (e.g., experiment tracking, model governance, packaging, deployment, feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or better preferred) and familiarity with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
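As a hedged illustration of the SQL bullet above, the join-and-aggregate below uses sqlite3 as a stand-in for any of the listed RDBMSs; the table and column names are invented.

```python
# Pull, join and aggregate data without depending on anyone else:
# two toy tables, a LEFT JOIN, and per-user totals.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, amount REAL);
    INSERT INTO users  VALUES (1, 'asha'), (2, 'ravi');
    INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 50.0);
""")
cur.execute("""
    SELECT u.name, COUNT(o.id) AS n_orders, SUM(o.amount) AS total
    FROM users u LEFT JOIN orders o ON o.user_id = u.id
    GROUP BY u.name ORDER BY total DESC
""")
rows = cur.fetchall()
print(rows)  # aggregated order count and spend per user
```

The same query shape (join, group, aggregate) carries over unchanged to MySQL, Postgres, SQL Server or Oracle.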
We are looking for a motivated data analyst with sound experience in handling web/digital analytics to join us as part of the Kapiva D2C Business Team. This team is primarily responsible for driving sales and customer engagement on our website. This channel has grown 5x in revenue over the last 12 months and is poised to grow another 5x over the next six months. It represents a high-growth, important part of our overall e-commerce growth strategy.
The mandate here is to run an end-to-end sustainable e-commerce business, boost sales through marketing campaigns, and build a cutting edge product (website) that optimizes the customer’s journey as well as increases customer lifetime value.
The Data Analyst will support the business heads by providing data-backed insights to drive customer growth, retention and engagement. They will be required to set up and manage reports, test various hypotheses and coordinate with various stakeholders on a day-to-day basis.
Job Responsibilities:
Strategy and planning:
● Work with the D2C functional leads and support analytics planning on a quarterly/ annual basis
● Identify reports and analytics needed to be conducted on a daily/ weekly/ monthly frequency
● Drive planning for hypothesis-led testing of key metrics across the customer funnel
Analytics:
● Interpret data, analyze results using statistical techniques and provide ongoing reports
● Analyze large amounts of information to discover trends and patterns
● Work with business teams to prioritize business and information needs
● Collaborate with engineering and product development teams to set up data infrastructure as needed
Reporting and communication:
● Prepare reports / presentations to present actionable insights that can drive business objectives
● Set up live dashboards reporting key cross-functional metrics
● Coordinate with various stakeholders to collect useful and required data
● Present findings to business stakeholders to drive action across the organization
● Propose solutions and strategies to business challenges
Requirements sought:
Must haves:
● Bachelor’s/Master’s in Mathematics, Economics, Computer Science, Information Management, Statistics or a related field
● High proficiency in MS Excel and SQL
● Knowledge of one or more programming languages like Python/ R. Adept at queries, report writing and presenting findings
● Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy - working knowledge of statistics and statistical methods
● Ability to work in a highly dynamic environment across cross-functional teams; good at coordinating with different departments and managing timelines
● Exceptional English written/verbal communication
● A penchant for understanding consumer traits and behavior and a keen eye to detail
Good to have:
● Hands-on experience with one or more web analytics tools like Google Analytics, Mixpanel, Kissmetrics, Heap, Adobe Analytics, etc.
● Experience in using business intelligence tools like Metabase, Tableau, Power BI is a plus
● Experience in developing predictive models and machine learning algorithms
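The "test various hypotheses" part of the role might look like the sketch below: a two-proportion z-test comparing checkout conversion between two website variants. All numbers are invented for illustration.

```python
# Hypothesis-led testing of a funnel metric: did variant A convert
# better than variant B? H0: the two conversion rates are equal.
import math

def two_prop_ztest(conv_a, n_a, conv_b, n_b):
    """Return the z statistic for the difference of two proportions."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented experiment data: 2000 sessions per variant.
z = two_prop_ztest(conv_a=130, n_a=2000, conv_b=95, n_b=2000)
print(f"z = {z:.2f}")  # |z| > 1.96 would reject H0 at the 5% level
```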
Data Engineer
Mandatory Requirements
- Experience in AWS Glue
- Experience in Apache Parquet
- Proficient in AWS S3 and data lake
- Knowledge of Snowflake
- Understanding of file-based ingestion best practices.
- Scripting languages - Python & PySpark
CORE RESPONSIBILITIES
- Create and manage cloud resources in AWS
- Data ingestion from different data sources which expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data based on various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies
- Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of business logic and implement it using the language supported by the base data platform
- Develop automated data quality checks to make sure the right data enters the platform and to verify the results of the calculations
- Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible
- Identify and interpret trends and patterns from complex data sets
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.
- Key participant in regular Scrum ceremonies with the agile teams
- Proficient at developing queries, writing reports and presenting findings
- Mentor junior members and bring best industry practices
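One plausible reading of the "automated data quality check" responsibility is a rule-based validator run on ingested rows before they enter the platform; the field names and rules below are illustrative assumptions, not a specification.

```python
# Validate each ingested record against simple quality rules and
# collect the violations so bad rows can be quarantined.
def validate_row(row):
    """Return a list of rule violations for one ingested record."""
    errors = []
    if row.get("customer_id") in (None, ""):
        errors.append("customer_id missing")
    if not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
        errors.append("amount must be a non-negative number")
    if row.get("currency") not in {"USD", "EUR", "INR"}:
        errors.append("unknown currency")
    return errors

batch = [
    {"customer_id": "c1", "amount": 99.5, "currency": "USD"},
    {"customer_id": "",   "amount": -5,   "currency": "XXX"},
]
bad = {i: validate_row(r) for i, r in enumerate(batch) if validate_row(r)}
print(bad)  # index -> violations for rows that failed
```

In a real pipeline the same rules would run inside the ingestion job (e.g. a Glue or Spark step) and route failures to a quarantine table.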
QUALIFICATIONS
- 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
- Strong background in math, statistics, computer science, data science or related discipline
- Advanced knowledge of at least one language: Java, Scala, Python, C#
- Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
- Proficient with:
  - Data mining/programming tools (e.g. SAS, SQL, R, Python)
  - Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
  - Data visualization (e.g. Tableau, Looker, MicroStrategy)
- Comfortable learning about and deploying new technologies and tools.
- Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines.
- Good written and oral communication skills and ability to present results to non-technical audiences
- Knowledge of business intelligence and analytical tools, technologies and techniques.
Familiarity and experience in the following is a plus:
- AWS certification
- Spark Streaming
- Kafka Streaming / Kafka Connect
- ELK Stack
- Cassandra / MongoDB
- CI/CD: Jenkins, GitLab, Jira, Confluence and other related tools
At Livello we are building machine-learning-based demand forecasting tools as well as computer-vision-based multi-camera product recognition solutions that detect people and products, tracking items inserted into or removed from shelves based on users' hand movements. We are building models to determine real-time inventory levels and user behaviour, and to predict how much of each product needs to be reordered so that the right products are delivered to the right locations at the right time to fulfil customer demand.
Responsibilities
- Lead the CV and DS Team
- Work in the area of Computer Vision and Machine Learning, with focus on product (primarily food) and people recognition (position, movement, age, gender, DSGVO compliant).
- Your work will include the formulation and development of Machine Learning models to solve the underlying problem.
- You will help build our smart supply chain system, keep up to date with the latest algorithmic improvements in forecasting and predictive areas, and challenge the status quo
- Statistical data modelling and machine learning research.
- Conceptualize, implement and evaluate algorithmic solutions for supply forecasting, inventory optimization, predicting sales, and automating business processes
- Conduct applied research to model complex dependencies, statistical inference and predictive modelling
- Technological conception, design and implementation of new features
- Quality assurance of the software through planning, creation and execution of tests
- Work with a cross-functional team to define, build, test, and deploy applications
Requirements:
- Master’s/PhD in Mathematics, Statistics, Engineering, Econometrics, Computer Science or a related field.
- 3-4 years of experience with computer vision and data science.
- Relevant Data Science experience, deep technical background in applied data science (machine learning algorithms, statistical analysis, predictive modelling, forecasting, Bayesian methods, optimization techniques).
- Experience building production-quality and well-engineered Computer Vision and Data Science products.
- Experience in image processing, algorithms and neural networks.
- Knowledge of the tools, libraries and cloud services for Data Science, ideally Google Cloud Platform
- Solid Python engineering skills and experience with Python, TensorFlow, Docker
- Cooperative and independent work, analytical mindset, and willingness to take responsibility
- Fluency in English, both written and spoken.
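As a toy sketch of the reordering problem described above (not Livello's actual model): simple exponential smoothing of weekly demand feeding an order-up-to rule. The smoothing factor and safety stock are illustrative assumptions.

```python
# Forecast next-period demand, then order enough to cover the forecast
# plus a safety buffer, net of what is already on the shelf.
def smooth_forecast(history, alpha=0.5):
    """One-step-ahead forecast via simple exponential smoothing."""
    level = history[0]
    for demand in history[1:]:
        level = alpha * demand + (1 - alpha) * level
    return level

def reorder_qty(history, on_hand, safety_stock=5):
    """Order-up-to rule: forecast + safety stock - current inventory."""
    forecast = smooth_forecast(history)
    return max(0, round(forecast) + safety_stock - on_hand)

weekly_sales = [12, 15, 11, 14, 16]   # invented demand history
print(reorder_qty(weekly_sales, on_hand=8))
```

A production system would of course use richer features (seasonality, location, promotions), but the structure (forecast, then an inventory policy on top) is the same.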
Job Responsibilities:
- Identify valuable data sources and automate collection processes
- Undertake preprocessing of structured and unstructured data.
- Analyze large amounts of information to discover trends and patterns
- Help develop reports and analyses.
- Present information using data visualization techniques.
- Assess and implement new or upgraded software and assist with strategic decisions on new systems.
- Evaluate changes and updates to source production systems.
- Develop, implement, and maintain leading-edge analytic systems, taking complicated problems and building simple frameworks
- Providing technical expertise in data storage structures, data mining, and data cleansing.
- Propose solutions and strategies to business challenges
Desired Skills and Experience:
- At least 1 year of experience in Data Analysis
- Complete understanding of Operations Research, Data Modelling, ML, and AI concepts.
- Knowledge of Python is mandatory, familiarity with MySQL, SQL, Scala, Java or C++ is an asset
- Experience using visualization tools (e.g. Jupyter Notebook) and data frameworks (e.g. Hadoop)
- Analytical mind and business acumen
- Strong math skills (e.g. statistics, algebra)
- Problem-solving aptitude
- Excellent communication and presentation skills.
- Bachelor’s / Master's Degree in Computer Science, Engineering, Data Science or other quantitative or relevant field is preferred
Roles and Responsibilities
We are seeking an AWS Cloud Engineer / Data Warehouse Developer for our Data CoE team to help us configure and develop new AWS environments for our Enterprise Data Lake and migrate on-premise traditional workloads to the cloud. Candidates must have a sound understanding of BI best practices, relational structures, dimensional data modelling, structured query language (SQL) skills, data warehouse and reporting techniques.
- Extensive experience in providing AWS Cloud solutions to various business use cases
- Creating star schema data models, performing ETLs and validating results with business representatives
- Supporting implemented BI solutions by monitoring and tuning queries and data loads, addressing user questions concerning data integrity, monitoring performance, and communicating functional and technical issues
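A hedged sketch of the star-schema responsibility above, with sqlite3 standing in for an AWS warehouse such as Redshift; the table and column names are invented.

```python
# A minimal star schema: one dimension table, one fact table, and the
# typical query shape -- aggregate the fact grain by a dimension attribute.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE fact_sales  (product_key INTEGER, qty INTEGER, revenue REAL);
    INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'), (2, 'Gadget', 'Hardware');
    INSERT INTO fact_sales  VALUES (1, 3, 30.0), (2, 1, 25.0), (1, 2, 20.0);
""")
rows = conn.execute("""
    SELECT d.category, SUM(f.qty), SUM(f.revenue)
    FROM fact_sales f JOIN dim_product d USING (product_key)
    GROUP BY d.category
""").fetchall()
print(rows)  # category-level totals rolled up from the fact table
```

Validating results with business representatives then amounts to reconciling aggregates like these against source-system totals.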
Job Description:
This position is responsible for the successful delivery of business intelligence information to the entire organization and requires experience in BI development and implementations, data architecture and data warehousing.
Requisite Qualification
Essential: AWS Certified Database Specialty or AWS Certified Data Analytics
Preferred: Any other Data Engineer certification
Requisite Experience
Essential: 4-7 yrs of experience
Preferred: 2+ yrs of experience in ETL & data pipelines
Skills Required
Special Skills Required
AWS: S3, DMS, Redshift, EC2, VPC, Lambda, Delta Lake, CloudWatch etc.
Bigdata: Databricks, Spark, Glue and Athena
Expertise in Lake Formation, Python programming, Spark, Shell scripting
Minimum Bachelor’s degree with 5+ years of experience in designing, building, and maintaining AWS data components
3+ years of experience in data component configuration, related roles and access setup
Expertise in Python programming
Knowledge in all aspects of DevOps (source control, continuous integration, deployments, etc.)
Comfortable working with DevOps: Jenkins, Bitbucket, CI/CD
Hands-on ETL development experience, preferably using SSIS
SQL Server experience required
Strong analytical skills to solve and model complex business requirements
Sound understanding of BI Best Practices/Methodologies, relational structures, dimensional data modelling, structured query language (SQL) skills, data warehouse and reporting techniques
Preferred Skills
Experience working in the SCRUM Environment.
Experience in Administration (Windows/Unix/Network/...) is a plus.
Experience in SQL Server, SSIS, SSAS, SSRS
Comfortable with creating data models and visualization using Power BI
Hands-on experience in relational and multi-dimensional data modelling, including multiple source systems from databases and flat files, and the use of standard data modelling tools
Ability to collaborate on a team with infrastructure, BI report development and business analyst resources, and clearly communicate solutions to both technical and non-technical team members
1. Working on supervised and unsupervised learning algorithms
2. Developing deep learning and machine learning algorithms
3. Working on live projects on data analytics
We are looking for an engineer with ML/DL background.
Ideal candidate should have the following skillset
1) Python
2) Tensorflow
3) Experience building and deploying systems
4) Experience with Theano/Torch/Caffe/Keras (all useful)
5) Experience with data warehousing/storage/management would be a plus
6) Experience writing production software would be a plus
7) Ideal candidate should have developed their own DL architectures apart from using open-source architectures.
8) Ideal candidate would have extensive experience with computer vision applications
Candidates would be responsible for building Deep Learning models to solve specific problems. Workflow would look as follows:
1) Define Problem Statement (input -> output)
2) Preprocess Data
3) Build DL model
4) Test on different datasets using Transfer Learning
5) Parameter Tuning
6) Deployment to production
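The six workflow steps above can be sketched end to end. A real project would use TensorFlow and transfer learning as the posting says; here a one-neuron model in plain Python stands in, so only the shape of the pipeline is being illustrated.

```python
# Steps 1-4 of the workflow on a toy two-class problem
# (step 1, the problem statement: two features in -> label out).

def preprocess(rows):
    """Step 2: scale features to [0, 1] (assumes a known max of 10)."""
    return [([x / 10.0 for x in xs], y) for xs, y in rows]

def train(data, epochs=20, lr=0.5):
    """Step 3: fit a single perceptron (weights + bias)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for xs, y in data:
            pred = 1 if w[0] * xs[0] + w[1] * xs[1] + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, xs)]
            b += lr * err
    return w, b

def evaluate(model, data):
    """Step 4: accuracy on a dataset."""
    w, b = model
    hits = sum((1 if w[0] * xs[0] + w[1] * xs[1] + b > 0 else 0) == y
               for xs, y in data)
    return hits / len(data)

raw = [([1, 1], 0), ([9, 8], 1), ([2, 1], 0), ([8, 9], 1)]  # invented data
data = preprocess(raw)        # step 2
model = train(data)           # step 3
print(evaluate(model, data))  # step 4
```

Steps 5 and 6 (parameter tuning and deployment) would wrap this loop in a hyperparameter search and package the trained model behind a serving API.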
Candidate should have experience working on Deep Learning with an engineering degree from a top tier institute (preferably IIT/BITS or equivalent)
The programmer should be proficient in Python and able to work fully independently. They should also be able to work with databases and have a strong capability to fetch data from various sources, organise it, and identify useful information through efficient code.
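One way to read "fetch data from various sources and organise the data": merging a CSV export with a JSON payload on a shared key, using only the standard library. The file contents and field names are invented for the demo.

```python
# Join two heterogeneous sources (CSV + JSON) on the 'sku' key.
import csv
import io
import json

csv_text = "sku,stock\nA1,4\nB2,0\n"                                # e.g. a warehouse export
json_text = '[{"sku": "A1", "price": 9.5}, {"sku": "B2", "price": 3.0}]'  # e.g. an API payload

# Index the CSV rows by key, then enrich each JSON record with its stock level.
stock = {row["sku"]: int(row["stock"]) for row in csv.DictReader(io.StringIO(csv_text))}
merged = [dict(item, stock=stock.get(item["sku"], 0)) for item in json.loads(json_text)]
print(merged)
```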
Familiarity with Python
Some examples of work:

