Senior Data Scientist

at A large South African technology company

Agency job
Remote only
5 - 20 yrs
₹30L - ₹30L / yr
Full time
Skills
Data Science
Statistical Analysis
Statistical Modeling
Machine Learning (ML)
Google Cloud Platform (GCP)
Data-flow analysis
Deep Learning
Demand forecasting
Data modeling
SQL server
PostgreSQL
Aqua Data Studio
Tableau
Python
TensorFlow
Keras
PyTorch
Algorithms
SQL
Data Visualization
Data Analytics
Predictive modelling
R Programming
Natural Language Processing (NLP)
Vertex

**The salary offered for this role is up to 38.5 LPA**

 

A successful South African technology group is looking to hire a full-time, remote Senior Data Scientist with experience and ability in statistical modelling (price optimisation and price elasticity).

 

The Senior Data Scientist will actively lead the application of data science methods, including machine learning, deep learning, artificial intelligence and predictive analytics, required to meet the company's business interests as well as those of its clients.

 

Requirements:

 

  • Relevant degree in Data Science, Statistics or equivalent quantitative field
  • A minimum of 5 years of experience

 

The ideal candidate would have knowledge of:

 

  • Machine learning
  • Google Cloud Platform (AI Platform, BigQuery, Dataproc, Dataflow, Kubeflow, Vertex AI)
  • Deep learning (demand forecasting, recommendation engines, image modelling, NLP, etc.)
  • Statistical modelling (price optimisation, price elasticity, etc.; a brief illustrative sketch follows this list)
  • Data modelling
  • Database management (MS SQL Server, PostgreSQL or similar)
  • Data visualisation (Data Studio, Tableau)
  • Data Science
  • Data Analysis
  • Predictive analytics
  • Python
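
For illustration only: a minimal sketch of how price elasticity might be estimated with a simple log-log regression in Python. The data, column names and the crude revenue scan are hypothetical assumptions, not the company's actual methodology; a production model would control for seasonality, promotions and cross-price effects.

```python
# Minimal illustrative sketch of price-elasticity estimation via a log-log
# regression. Data and column names are hypothetical.
import numpy as np
import pandas as pd

# Hypothetical weekly sales data for one SKU.
df = pd.DataFrame({
    "price":    [100, 105, 110, 95, 90, 115, 120, 98],
    "quantity": [520, 490, 455, 560, 600, 430, 400, 545],
})

# Fit ln(quantity) = alpha + beta * ln(price); beta is the price elasticity.
log_p = np.log(df["price"])
log_q = np.log(df["quantity"])
beta, alpha = np.polyfit(log_p, log_q, deg=1)
print(f"Estimated price elasticity: {beta:.2f}")  # expect a negative value

# A crude optimisation step: scan candidate prices and pick the revenue
# maximiser implied by the fitted constant-elasticity demand curve.
candidate_prices = np.linspace(80, 130, 51)
predicted_demand = np.exp(alpha) * candidate_prices ** beta
revenue = candidate_prices * predicted_demand
best_price = candidate_prices[np.argmax(revenue)]
print(f"Revenue-maximising price under this toy model: {best_price:.0f}")
```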

 

They would have the following skills:

 

  • Ability to program in Python and R;
  • Ability to use machine learning and deep learning tools such as TensorFlow, Keras, scikit-learn and PyTorch;
  • Ability to build predictive models and machine learning algorithms
  • Ability to manage both structured and unstructured data using SQL;
  • Ability to visualise data using various tools;
  • Ability to model data for prediction
  • Ability to work independently and propose solutions to business challenges
  • Ability to automate data pipelines and machine learning models
  • Ability to manage time and project deliverables

 

The main responsibilities of the role are:

 

  • Providing support and leadership to the other data scientists;
  • Identifying and acting on new opportunities for data-driven business in data science and analytics;
  • Loading and merging data originating from diverse sources;
  • Pre-processing and transforming data for model building and analysis;
  • Leading the development of predictive models for business solutions;
  • Performing descriptive analytics to discover trends and patterns in the data;
  • Deploying predictive and other models to production.

Similar jobs

An AI based company
Agency job
via Qrata by Prajakta Kulkarni
Gurugram, Delhi, Noida, Ghaziabad, Faridabad
5 - 10 yrs
₹25L - ₹70L / yr
Computer Vision
OpenCV
Python
TensorFlow
PyTorch
Job Title: Lead Computer Vision Engineer
Location: Gurgaon

About the company:
The company is changing the way cataloging is done across the Globe. Our vision is to empower the smallest of sellers, situated in the farthest of corners, to create superior product images and videos, without the need for any external professional help. Imagine 30M+ merchants shooting Product Images or Videos using their Smartphones, and then choosing Filters for Amazon, Asos, Airbnb, Doordash, etc. to instantly compose High-Quality "tuned-in" product visuals. The company has built the world's leading image editing AI software, to capture and process beautiful product images for online selling. We are also fortunate and proud to be backed by the biggest names in the investment community including the likes of Accel Partners, AngelList and prominent Founders and Internet company operators, who believe that there is a more intelligent and efficient way of doing Digital Production than how the world operates currently.

Job Description :
- We are looking for a seasoned Computer Vision Engineer with AI/ML/CV and Deep Learning skills to play a senior leadership role in our Product & Technology Research Team.
- You will be leading a team of CV researchers to build models that automatically transform millions of e-commerce, automobile, food and real-estate raw images into processed final images.
- You will be responsible for researching the latest state of the art in the field of computer vision, designing the solution architecture for our offerings and leading the Computer Vision teams to build the core algorithmic models and deploy them on Cloud Infrastructure.
- Working with the Data team to ensure your data pipelines are well set up and models are being constantly trained and updated.
- Working alongside the product team to ensure that AI capabilities are built as democratized tools that enable internal as well as external stakeholders to innovate on top of them and make our customers successful.
- You will work closely with the Product & Engineering teams to convert the models into beautiful products that will be used by thousands of Businesses every day to transform their images and videos.

Job Requirements:
- 3+ years of work experience in Computer Vision, with 5-10 years of work experience overall
- BS/MS/PhD degree in Computer Science, Engineering or a related subject from an Ivy League institute
- Exposure to Deep Learning techniques, TensorFlow/PyTorch
- Prior expertise in building image-processing applications using GANs, CNNs, Diffusion models
- Expertise with image-processing Python libraries like OpenCV, etc.
- Good hands-on experience with Python and the Flask or Django framework
- Authored publications at peer-reviewed AI conferences (e.g. NeurIPS, CVPR, ICML, ICLR, ICCV, ACL)
- Prior experience of managing teams and building large-scale AI / CV projects is a big plus
- Great interpersonal and communication skills
- Critical thinking and problem-solving skills
Hyderabad - Hybrid
Agency job
via Vmultiply solutions by Mounica Buddharaju
Hyderabad
5 - 8 yrs
₹10L - ₹15L / yr
airflow
Windows Azure
SQL
Airflow
Python

Airflow developer:

Experience: 5 to 10 years; relevant experience must be above 4 years.

Work location: Hyderabad (Hybrid Model)



Job description:  

·        Experience working with Airflow.

·        Experience in SQL, Python, and Object-oriented programming. 

·        Experience in the data warehouse, database concepts, and ETL tools (Informatica, DataStage, Pentaho, etc.).

·        Azure experience and exposure to Kubernetes. 

·        Experience in Azure data factory, Azure Databricks, and Snowflake. 

Required Skills: Azure Databricks/Data Factory, Kubernetes/Docker, DAG development, hands-on Python coding (a minimal DAG sketch follows below).
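
For context, a minimal Airflow DAG sketch of the kind of DAG development listed above. The task names, schedule and logic are hypothetical placeholders, not this team's actual pipeline.

```python
# Minimal Airflow 2.x DAG sketch (hypothetical tasks and schedule).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull data from a source system (e.g. an Azure storage account).
    print("extracting data")


def transform():
    # Placeholder: clean and reshape the extracted data.
    print("transforming data")


with DAG(
    dag_id="example_etl",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # run transform after extract completes
```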

Client of People First Consultants
Agency job
via People First Consultants by Aishwarya KA
Remote, Chennai
3 - 6 yrs
Best in industry
Machine Learning (ML)
Data Science
Deep Learning
Artificial Intelligence (AI)
Python

Skills: Machine Learning, Deep Learning, Artificial Intelligence, Python.

Location: Chennai


Domain knowledge:
Data cleaning, modelling, analytics, statistics, machine learning, AI

Requirements:

·         To be part of Digital Manufacturing and Industrie 4.0 projects across the Saint-Gobain group of companies

·         Design and develop AI/ML models to be deployed across SG factories

·         Knowledge of Hadoop, Apache Spark, MapReduce, Scala, Python programming, SQL and NoSQL databases is required

·         Should be strong in statistics, data analysis, data modelling, machine learning techniques and Neural Networks

·         Prior experience in developing AI and ML models is required

·         Experience with data from the Manufacturing Industry would be a plus

Roles and Responsibilities:

·         Develop AI and ML models for the Manufacturing Industry with a focus on Energy, Asset Performance Optimization and Logistics

·         Multitasking and good communication skills are necessary

·         Entrepreneurial attitude.

 
Posted by Alfiya Khan
Pune, Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹25L / yr
Big Data
Data Warehouse (DWH)
Data modeling
Apache Spark
Data integration
Company Profile
XpressBees – a logistics company started in 2015 – is among the fastest-growing companies in its sector. While we started off rather humbly in the space of ecommerce B2C logistics, the last 5 years have seen us steadily progress towards expanding our presence. Our vision to evolve into a strong full-service logistics organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross-border operations. Our strong domain expertise and constant focus on meaningful innovation have helped us rapidly evolve as the most trusted logistics partner of India. We have progressively carved our way towards best-in-class technology platforms, an extensive network reach, and a seamless last-mile management system. While on this aggressive growth path, we seek to become the one-stop shop for end-to-end logistics solutions. Our big focus areas for the very near future include strengthening our presence as the service provider of choice and leveraging the power of technology to improve efficiencies for our clients.

Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform and infrastructure to support high-quality and agile decision-making in our supply chain and logistics workflows. You will define the way we collect and operationalize data (structured / unstructured), and build production pipelines for our machine learning models and (RT, NRT, Batch) reporting and dashboarding requirements. As a Senior Data Engineer in the XB Data Platform Team, you will use your experience with modern cloud and data frameworks to build products (with storage and serving systems) that drive optimisation and resilience in the supply chain via data visibility, intelligent decision making, insights, anomaly detection and prediction.

What You Will Do
• Design and develop the data platform and data pipelines for reporting, dashboarding and machine learning models. These pipelines would productionize machine learning models and integrate with agent review tools (a minimal PySpark sketch follows this list).
• Meet data completeness, correctness and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support business needs. Come up with logical and physical database designs across platforms (MPP, MR, Hive/PIG) that are optimal physical designs for different use cases (structured/semi-structured). Envision and implement the optimal data modelling, physical design and performance optimization technique/approach required for the problem.
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines and envision and build their successors.
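
For illustration, a minimal PySpark sketch of the read-transform-write shape such a batch pipeline for reporting might take. The paths, columns and aggregation are assumptions, not XpressBees' actual code.

```python
# Minimal PySpark batch-pipeline sketch (hypothetical paths and columns).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("shipment_daily_agg").getOrCreate()

# Read raw shipment events (path and schema are assumptions).
events = spark.read.parquet("s3://example-bucket/raw/shipment_events/")

# Aggregate to a daily, per-hub view that a dashboard or model could consume.
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "hub_id")
    .agg(
        F.count("*").alias("shipments"),
        F.avg("delivery_hours").alias("avg_delivery_hours"),
    )
)

# Write partitioned output for downstream reporting.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/marts/daily_hub_metrics/"
)
```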

Qualifications & Experience relevant for the role

• A bachelor's degree in Computer Science or related field with 6 to 9 years of technology experience.
• Knowledge of Relational and NoSQL data stores, stream processing and micro-batching to make technology & design choices.
• Strong experience in System Integration, Application Development, ETL, Data-Platform projects. Talented across technologies used in the enterprise space.
• Software development experience using:
• Expertise in relational and dimensional modelling
• Exposure across all the SDLC process
• Experience in cloud architecture (AWS)
• Proven track record in keeping existing technical skills and developing new ones, so that you can make strong contributions to deep architecture discussions around systems and applications in the cloud (AWS).

• Characteristics of a forward thinker and self-starter who flourishes with new challenges and adapts quickly to learning new knowledge
• Ability to work with cross-functional teams of consulting professionals across multiple projects.
• Knack for helping an organization to understand application architectures and integration approaches, to architect advanced cloud-based solutions, and to help launch the build-out of those systems
• Passion for educating, training, designing, and building end-to-end systems.
at Marj Technologies
1 recruiter
Posted by Shyam Verma
Noida
3 - 10 yrs
₹7L - ₹10L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm

Job Description

We are looking for a highly capable machine learning engineer to optimize our deep learning systems. You will be evaluating existing deep learning (DL) processes, doing hyperparameter tuning, performing statistical analysis (logging and evaluating model performance) to resolve data set problems, and enhancing the accuracy of our AI software's predictive automation capabilities.

You will be working with technologies like AWS SageMaker, TensorFlow JS and TensorFlow/Keras/TensorBoard to create Deep Learning backends that power our application.
To ensure success as a machine learning engineer, you should demonstrate solid data science knowledge and experience in a Deep Learning role. A first-class machine learning engineer will be someone whose expertise translates into the enhanced performance of predictive automation software. To do this job successfully, you need exceptional skills in DL and programming.
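
For illustration, a small self-contained sketch of the TensorFlow/Keras plus TensorBoard workflow mentioned above. The data, model and log directory are hypothetical; this is not the product's actual backend.

```python
# Toy Keras model with TensorBoard logging for tracking model performance.
import numpy as np
import tensorflow as tf

# Hypothetical binary-classification data.
x = np.random.rand(1000, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# TensorBoard callback logs losses/metrics so runs can be compared over time.
tb = tf.keras.callbacks.TensorBoard(log_dir="logs/run1")

model.fit(x, y, validation_split=0.2, epochs=5, batch_size=32, callbacks=[tb])
model.evaluate(x, y)
```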


Responsibilities

  • Consulting with managers to determine and refine machine learning objectives.
  • Designing deep learning systems and self-running artificial intelligence (AI) software to automate predictive models.
  • Transforming data science prototypes and applying appropriate ML algorithms and tools.
  • Carrying out data engineering subtasks such as defining data requirements, collecting, labeling, inspecting, cleaning, augmenting, and moving data.
  • Carrying out modeling subtasks such as training deep learning models, defining evaluation metrics, searching hyperparameters, and reading research papers.
  • Carrying out deployment subtasks such as converting prototyped code into production code, working in-depth with AWS services to set up the cloud environment for training, improving response times and saving bandwidth.
  • Ensuring that algorithms generate robust and accurate results.
  • Running tests, performing analysis, and interpreting test results.
  • Documenting machine learning processes.
  • Keeping abreast of developments in machine learning.

Requirements

  • Proven experience as a Machine Learning Engineer or in a similar role.
  • In-depth knowledge of AWS SageMaker and related services (like S3).
  • Extensive knowledge of ML frameworks, libraries, algorithms, data structures, data modeling, software architecture, and math & statistics.
  • Ability to write robust code in Python & JavaScript (TensorFlow JS).
  • Experience with Git and GitHub.
  • Superb analytical and problem-solving abilities.
  • Excellent troubleshooting skills.
  • Good project management skills.
  • Great communication and collaboration skills.
  • Excellent time management and organizational abilities.
  • Bachelor's degree in computer science, data science, mathematics, or a related field; a Master's degree is a plus.

at Blue Sky Analytics
3 recruiters
Posted by Balahun Khonglanoh
Remote only
5 - 10 yrs
₹8L - ₹25L / yr
Python
Remote sensing
Data Science
GIS analysis

About the Company

Blue Sky Analytics is a Climate Tech startup that combines the power of AI & Satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental data-sets spanning water, land, heat, and more!


We are looking for a Data Lead - someone who works at the intersection of data science, GIS, and engineering. We want a leader who not only understands environmental data but someone who can quickly assemble large scale datasets that are crucial to the well being of our planet. Come save the planet with us!


Your Role

Manage: As a leadership position, this requires long-term strategic thinking. You will be in charge of the daily operations of the data team. This would include running team standups, planning the execution of data generation and ensuring the algorithms are put in production. You will also be the person in charge of dumbing down the data science for the rest of us who do not know what it means.

Love and Live Data: You will also be taking all the responsibility of ensuring that the data we generate is accurate, clean, and is ready to use for our clients. This would entail that you understand what the market needs, calculate feasibilities and build data pipelines. You should understand the algorithms that we use or need to use and take decisions on what would serve the needs of our clients well. We also want our Data Lead to be constantly probing for newer and optimized ways of generating datasets. It would help if they were abreast of all the latest developments in the data science and environmental worlds. The Data Lead also has to be able to work with our Platform team on integrating the data on our platform and API portal.

Collaboration: We use Clubhouse to track and manage our projects across our organization - this will require you to collaborate with the team and follow up with members on a regular basis. About 50% of the work needs to be the pulse of the platform team. You'll collaborate closely with peers from other functions—Design, Product, Marketing, Sales, and Support to name a few—on our overall product roadmap, on product launches, and on ongoing operations. You will find yourself working with the product management team to define and execute the feature roadmap. You will be expected to work closely with the CTO, reporting on daily operations and development. We don't believe in a top-down hierarchical approach and are transparent with everyone. This means honest and mutual feedback and the ability to adapt.

Teaching: Not exactly in the traditional sense. You'll recruit, coach, and develop engineers while ensuring that they are regularly receiving feedback and making rapid progress on personal and professional goals.

Humble and cool: Look we will be upfront with you about one thing - our team is fairly young and is always buzzing with work. In this fast-paced setting, we are looking for someone who can stay cool, is humble, and is willing to learn. You are adaptable, can skill up fast, and are fearless at trying new methods. After all, you're in the business of saving the planet!

Requirements

  • A minimum of 5 years of industry experience.
  • Hyper-curious!
  • Exceptional at Remote Sensing Data, GIS, Data Science.
  • Must have big data & data analytics experience
  • Very good in documentation & speccing datasets
  • Experience with AWS Cloud, Linux, Infra as Code & Docker (containers) is a must
  • Coordinate with cross-functional teams (DevOPS, QA, Design etc.) on planning and execution
  • Lead, mentor and manage deliverables of a team of talented and highly motivated team of developers
  • Must have experience in building, managing, growing & hiring data teams. Has built large-scale datasets from scratch
  • Managing work on the team's Clubhouse and following up with the team. ~50% of the work needs to be the pulse of the platform team
  • Exceptional communication skills & ability to abstract away problems & build systems. Should be able to explain to the management anything & everything
  • Quality control - you'll be responsible for maintaining a high quality bar for everything your team ships. This includes documentation and data quality
  • Experience of having led smaller teams, would be a plus.

Benefits

  • Work from anywhere: Work by the beach or from the mountains.
  • Open source at heart: We are building a community that you can use, contribute to and collaborate on.
  • Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
  • Flexible timings: Fit your work around your lifestyle.
  • Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
  • Work Machine of choice: Buy a device and own it after completing a year at BSA.
  • Quarterly Retreats: Yes there's work-but then there's all the non-work+fun aspect aka the retreat!
  • Yearly vacations: Take time off to rest and get ready for the next big assignment by availing the paid leaves.
at Mobile Programming LLC
1 video
34 recruiters
Posted by Apurva kalsotra
Mohali, Gurugram, Bengaluru (Bangalore), Chennai, Hyderabad, Pune
3 - 8 yrs
₹3L - ₹9L / yr
Data Warehouse (DWH)
Big Data
Spark
Apache Kafka
Data engineering
Day-to-day Activities
Develop complex queries, pipelines and software programs to solve analytics and data mining problems
Interact with other data scientists, product managers, and engineers to understand business problems and technical requirements and to deliver predictive and smart data solutions
Prototype new applications or data systems
Lead data investigations to troubleshoot data issues that arise along the data pipelines
Collaborate with different product owners to incorporate data science solutions
Maintain and improve data science platform
Must Have
BS/MS/PhD in Computer Science, Electrical Engineering or related disciplines
Strong fundamentals: data structures, algorithms, database
5+ years of software industry experience with 2+ years in analytics, data mining, and/or data warehousing
Fluency with Python
Experience developing web services using REST approaches.
Proficiency with SQL/Unix/Shell
Experience in DevOps (CI/CD, Docker, Kubernetes)
Self-driven, challenge-loving, detail oriented, teamwork spirit, excellent communication skills, ability to multi-task and manage expectations
Preferred
Industry experience with big data processing technologies such as Spark and Kafka
Experience with machine learning algorithms and/or R a plus 
Experience in Java/Scala a plus
Experience with any MPP analytics engines like Vertica
Experience with data integration tools like Pentaho/SAP Analytics Cloud
Agency job
via UpgradeHR by Sangita Deka
Hyderabad
6 - 10 yrs
₹10L - ₹15L / yr
Big Data
Data Science
Machine Learning (ML)
R Programming
Python
It is one of the largest communication technology companies in the world. They operate America's largest 4G LTE wireless network and the nation's premier all-fiber broadband network.
Largest Analytical firm
Agency job
via Xpheno by Ashok P
Bengaluru (Bangalore)
4 - 14 yrs
₹10L - ₹28L / yr
Hadoop
Big Data
Spark
Scala
Python

·        Advanced Spark Programming Skills

·        Advanced Python Skills

·        Data Engineering ETL and ELT Skills

·        Expertise on Streaming data

·        Experience in the Hadoop ecosystem

·        Basic understanding of Cloud Platforms

·        Technical Design Skills, Alternative approaches

·        Hands-on expertise in writing UDFs (a minimal sketch follows this list)

·        Hands-on expertise in streaming data ingestion

·        Able to independently tune Spark scripts

·        Advanced debugging skills & large-volume data handling

·        Independently break down and plan technical tasks
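
For illustration, a minimal PySpark UDF sketch of the kind of UDF writing this role asks for. The column and logic are hypothetical, and built-in functions are generally preferred over UDFs where possible.

```python
# Minimal PySpark UDF sketch (hypothetical column and logic).
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf_example").getOrCreate()

df = spark.createDataFrame([(1, "IN-560001"), (2, "IN-400001")], ["id", "pincode"])

# A UDF that strips the country prefix from a pincode string.
@udf(returnType=StringType())
def strip_prefix(code: str) -> str:
    return code.split("-", 1)[-1] if code else code

df.withColumn("pincode_clean", strip_prefix("pincode")).show()
```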

at Artivatic.ai
1 video
3 recruiters
Posted by Akanksha naini
Bengaluru (Bangalore)
2 - 5 yrs
₹10L - ₹15L / yr
Natural Language Processing (NLP)
Artificial Intelligence (AI)
Voice recognition
Machine Learning (ML)
Data Science
We at artivatic are seeking a passionate, talented and research-focused Natural Language Processing engineer with a strong machine learning and mathematics background to help build industry-leading technology. The ideal candidate will have research/implementation experience modeling and developing NLP tools and experience working with machine learning/deep learning algorithms.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Mathematics or a related field with specialization in natural language processing, Machine Learning or Deep Learning.
- Publication record in conferences/journals is a plus.
- 2+ years of working/research experience building NLP-based solutions is preferred.

Required Skills:
- Hands-on experience building NLP models using different NLP libraries and toolkits like NLTK, Stanford NLP etc.
- Good understanding of rule-based, statistical and probabilistic NLP techniques.
- Good knowledge of NLP approaches and concepts like topic modeling, text summarization, semantic modeling, Named Entity Recognition etc.
- Good understanding of Machine Learning and Deep Learning algorithms.
- Good knowledge of Data Structures and Algorithms.
- Strong programming skills in Python/Java/Scala/C/C++.
- Strong problem-solving and logical skills.
- A go-getter kind of attitude with a willingness to learn new technologies.
- Well versed with software design paradigms and good development practices.

Responsibilities:
- Developing novel algorithms and modeling techniques to advance the state of the art in Natural Language Processing.
- Developing NLP-based tools and solutions end to end.
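
For illustration, a tiny NLTK sketch covering the basics named in the skill list (tokenisation, POS tagging and named-entity chunking). The sample sentence is made up, and the NLTK data packages noted in the code must be downloaded first (package names may vary slightly by NLTK version).

```python
# Minimal NLTK example: tokenise, POS-tag and extract named entities.
import nltk

# One-time downloads (tokeniser, tagger and NE chunker models).
for pkg in ["punkt", "averaged_perceptron_tagger", "maxent_ne_chunker", "words"]:
    nltk.download(pkg, quiet=True)

text = "Artivatic is an AI company based in Bengaluru, India."

tokens = nltk.word_tokenize(text)   # tokenisation
tagged = nltk.pos_tag(tokens)       # part-of-speech tagging
tree = nltk.ne_chunk(tagged)        # named-entity chunking

# Print the recognised named entities and their labels.
for subtree in tree:
    if hasattr(subtree, "label"):
        entity = " ".join(token for token, _ in subtree.leaves())
        print(subtree.label(), "->", entity)
```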