Data Scientist

at DataToBiz

Posted by Ankush Sharma
Chandigarh
2 - 5 yrs
₹4L - ₹6L / yr
Full time
Skills
Algorithms
ETL
Python
Machine Learning (ML)
Deep Learning
Statistical Modeling
Data Structures
DevOps
Job Summary

DataToBiz is an AI and data analytics services startup. We are a team of young and dynamic professionals looking for an exceptional data scientist to join our team in Chandigarh. We are solving some very exciting business challenges by applying cutting-edge machine learning and deep learning technology. Being a consulting and services startup, we are looking for quick learners who can work in a cross-functional team of consultants, SMEs from various domains, UX architects, and application development experts to deliver compelling solutions through the application of data science and machine learning. The desired candidate will have a passion for finding patterns in large datasets, an ability to quickly understand the underlying domain, and the expertise to apply machine learning tools and techniques to create insights from the data.

Responsibilities and Duties

As a Data Scientist on our team, you will be responsible for solving complex big-data problems for various clients (on-site and off-site) using data mining, statistical analysis, machine learning, and deep learning.

  • Understand the business need and translate it into an actionable analytical plan in consultation with the team; ensure that the analytical plan aligns with the customer’s overall strategic need.
  • Understand and identify the appropriate data sources required for solving the business problem at hand.
  • Explore, diagnose, and resolve any data discrepancies, including but not limited to any ETL that may be required, and treat missing values and extreme values/outliers using appropriate methods.
  • Execute the project plan to meet requirements and timelines; identify success metrics and monitor them to ensure high-quality output for the client.
  • Deliver production-ready models that can be deployed in the production system.
  • Create relevant output documents as required: PowerPoint decks, Excel files, data frames, etc.
  • Overall project management: create a project plan and timelines, obtain sign-off, monitor project progress against the plan, and report risks, scope creep, etc. in a timely manner.
  • Identify and evangelize new and upcoming analytical trends in the market within the organization.
  • Implement these algorithms/methods/techniques in R/Python.

Required Experience, Skills and Qualifications

  • 3+ years of experience in data mining and statistical modeling for predictive and prescriptive enterprise analytics.
  • 2+ years of working with Python and machine learning, with exposure to one or more ML/DL frameworks like TensorFlow, Caffe, Scikit-Learn, MXNet, or CNTK.
  • Exposure to ML techniques and algorithms for different data formats, including structured data, unstructured data, and natural language.
  • Experience with data retrieval and manipulation tools for various data sources, such as REST/SOAP APIs, relational (MySQL) and NoSQL databases (MongoDB), IoT data streams, cloud-based storage, and HDFS.
  • Strong foundation in algorithms and data science theory.
  • Strong verbal and written communication skills with other developers and business clients.
  • Knowledge of the telecom and/or fintech domain is a plus.

About DataToBiz

Advanced data analytics consulting company offering customer analytics, spatial & marketing analytics, supply chain analytics & computer vision solutions
Founded
2018
Type
Services
Size
20-100 employees
Stage
Bootstrapped

Similar jobs

Data Engineer - Global Media Agency

at client of Merito

Agency job
via Merito
Python
SQL
Tableau
PowerBI
PHP
snowflake
Data engineering
Mumbai
3 - 8 yrs
Best in industry

Our client is the world’s largest media investment company and a part of WPP. In fact, they are responsible for one in every three ads you see globally. We are currently looking for a Senior Software Engineer to join us. In this role, you will be responsible for coding and implementing the custom marketing applications that the Tech COE builds for its customers, and for managing a small team of developers.

 

What your day job looks like:

  • Serve as a Subject Matter Expert on data usage – extraction, manipulation, and inputs for analytics
  • Develop data extraction and manipulation code based on business rules
  • Develop automated and manual test cases for the code written
  • Design and construct data store and procedures for their maintenance
  • Perform data extract, transform, and load activities from several data sources.
  • Develop and maintain strong relationships with stakeholders
  • Write high quality code as per prescribed standards.
  • Participate in internal projects as required
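The extract-transform-load duties above can be sketched with Python's standard library alone; this is an illustrative assumption of what such code might look like (the table name, field names, and cleaning rules are invented, and an in-memory SQLite database stands in for a real warehouse such as Snowflake):

```python
import sqlite3

# Extract: rows as they might arrive from a source system (illustrative data)
raw_rows = [
    {"campaign": "summer_sale", "clicks": "1200", "spend": " 350.5 "},
    {"campaign": "retargeting", "clicks": "800", "spend": "210.0"},
]

# Transform: apply business rules - cast types, strip stray whitespace
clean_rows = [
    (r["campaign"], int(r["clicks"]), float(r["spend"].strip()))
    for r in raw_rows
]

# Load: write into a data store (in-memory SQLite stands in for the warehouse)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (campaign TEXT, clicks INTEGER, spend REAL)")
conn.executemany("INSERT INTO metrics VALUES (?, ?, ?)", clean_rows)

total_spend = conn.execute("SELECT SUM(spend) FROM metrics").fetchone()[0]
print(total_spend)  # 560.5
```

In practice the same extract/transform/load separation carries over to Snowflake or any other store; only the connection and SQL dialect change.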

 
Minimum qualifications:

  • B. Tech./MCA or equivalent preferred
  • 3 years of excellent hands-on experience in Big Data, ETL development, and data processing.


    What you’ll bring:

  • Strong experience in working with Snowflake, SQL, PHP/Python.
  • Strong Experience in writing complex SQLs
  • Good Communication skills
  • Good experience of working with any BI tool like Tableau, Power BI.
  • Sqoop, Spark, EMR, Hadoop/Hive are good to have.

 

 

Job posted by
Merito Talent

Senior Software Engineer - Data

at 6sense

Founded 2013  •  Product  •  1000-5000 employees  •  Raised funding
PySpark
Data engineering
Big Data
Hadoop
Spark
Apache Spark
Python
ETL
Amazon Web Services (AWS)
Remote only
5 - 8 yrs
₹30L - ₹45L / yr

About Slintel (a 6sense company) :

Slintel, a 6sense company and the leader in capturing technographics-powered buying intent, helps companies uncover the 3% of active buyers in their target market. Slintel evaluates over 100 billion data points and analyzes factors such as buyer journeys, technology adoption patterns, and other digital footprints to deliver market and sales intelligence.

Slintel's customers have access to the buying patterns and contact information of more than 17 million companies and 250 million decision makers across the world.

Slintel is a fast growing B2B SaaS company in the sales and marketing tech space. We are funded by top tier VCs, and going after a billion dollar opportunity. At Slintel, we are building a sales development automation platform that can significantly improve outcomes for sales teams, while reducing the number of hours spent on research and outreach.

We are a big data company and perform deep analysis of technology buying patterns and buyer pain points to understand where buyers are in their journey. Over 100 billion data points are analyzed every week to derive recommendations on where companies should focus their marketing and sales efforts. Third-party intent signals are then combined with first-party data from CRMs to derive meaningful recommendations on whom to target on any given day.

6sense is headquartered in San Francisco, CA and has 8 office locations across 4 countries.

6sense, an account engagement platform, secured $200 million in a Series E funding round, bringing its total valuation to $5.2 billion 10 months after its $125 million Series D round. The investment was co-led by Blue Owl and MSD Partners, among other new and existing investors.

Linkedin (Slintel) : https://www.linkedin.com/company/slintel/

Industry : Software Development

Company size : 51-200 employees (189 on LinkedIn)

Headquarters : Mountain View, California

Founded : 2016

Specialties : Technographics, lead intelligence, Sales Intelligence, Company Data, and Lead Data.

Website (Slintel) : https://www.slintel.com/slintel

Linkedin (6sense) : https://www.linkedin.com/company/6sense/

Industry : Software Development

Company size : 501-1,000 employees (937 on LinkedIn)

Headquarters : San Francisco, California

Founded : 2013

Specialties : Predictive intelligence, Predictive marketing, B2B marketing, and Predictive sales

Website (6sense) : https://6sense.com/

Acquisition News : 

https://inc42.com/buzz/us-based-based-6sense-acquires-b2b-buyer-intelligence-startup-slintel/ 

Funding Details & News :

Slintel funding : https://www.crunchbase.com/organization/slintel

6sense funding : https://www.crunchbase.com/organization/6sense

https://www.nasdaq.com/articles/ai-software-firm-6sense-valued-at-%245.2-bln-after-softbank-joins-funding-round

https://www.bloomberg.com/news/articles/2022-01-20/6sense-reaches-5-2-billion-value-with-softbank-joining-round

https://xipometer.com/en/company/6sense

Slintel & 6sense Customers :

https://www.featuredcustomers.com/vendor/slintel/customers

https://www.featuredcustomers.com/vendor/6sense/customers

About the job

Responsibilities

  • Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse
  • Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs
  • Assemble large, complex data sets from third-party vendors to meet business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elastic search, MongoDB, and AWS technology
  • Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems

Requirements

  • 3+ years of experience in a Data Engineer role
  • Proficiency in Linux
  • Must have strong SQL knowledge and experience with relational databases and query authoring, as well as familiarity with MySQL, MongoDB, Cassandra, and Athena
  • Must have experience with Python/ Scala
  • Must have experience with Big Data technologies like Apache Spark
  • Must have experience with Apache Airflow
  • Experience with data pipeline and ETL tools like AWS Glue
  • Experience working with AWS cloud services (EC2, S3, RDS, Redshift) and other data solutions, e.g. Databricks, Snowflake

 

Desired Skills and Experience

Python, SQL, Scala, Spark, ETL

 

Job posted by
Romesh Rawat

Python Developer

at SynRadar

Founded 2017  •  Products & Services  •  0-20 employees  •  Bootstrapped
Amazon Web Services (AWS)
Docker
Python
MongoDB
Web API
Data Analytics
Machine Learning (ML)
Mumbai
0 - 3 yrs
₹5L - ₹10L / yr

This profile will include the following responsibilities:

 

- Develop Parsers for XML and JSON Data sources/feeds

- Write Automation Scripts for product development

- Build API Integrations for 3rd Party product integration

- Perform Data Analysis

- Research on Machine learning algorithms

- Understand AWS cloud architecture and work with 3rd-party vendors for deployments

- Resolve issues in AWS environment
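As a minimal sketch of the parser-development duty above, the standard library can normalise JSON and XML feeds into one record shape. The payloads and field names below are invented for illustration, not taken from any actual SynRadar feed:

```python
import json
import xml.etree.ElementTree as ET

# Two hypothetical vulnerability feeds with the same information
# expressed in different formats
json_feed = '{"vulns": [{"id": "CVE-2021-0001", "severity": "high"}]}'
xml_feed = '<vulns><vuln id="CVE-2021-0002" severity="low"/></vulns>'

def parse_json_feed(payload: str) -> list:
    """Flatten a JSON feed into a list of {id, severity} records."""
    return [
        {"id": v["id"], "severity": v["severity"]}
        for v in json.loads(payload)["vulns"]
    ]

def parse_xml_feed(payload: str) -> list:
    """Flatten an XML feed into the same record shape as the JSON parser."""
    root = ET.fromstring(payload)
    return [
        {"id": v.get("id"), "severity": v.get("severity")}
        for v in root.iter("vuln")
    ]

records = parse_json_feed(json_feed) + parse_xml_feed(xml_feed)
print(records)
```

Keeping every parser's output in a common shape is what makes downstream analysis and 3rd-party API integration uniform.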

We are looking for candidates with:
Qualification: BE/BTech/Bsc-IT/MCA
Programming Language: Python
Web Development: Basic understanding of Web Development. Working knowledge of Python Flask is desirable
Database & Platform: AWS/Docker/MySQL/MongoDB
Basic Understanding of Machine Learning Models & AWS Fundamentals is recommended.
Job posted by
Ashish Rao

Data Engineer

at Nexsys

Founded 2010  •  Services  •  100-1000 employees  •  Bootstrapped
NumPy
pandas
MongoDB
SQL
NOSQL Databases
Data Structures
Algorithms
Bengaluru (Bangalore)
2 - 5 yrs
₹10L - ₹15L / yr

What we look for: 

We are looking for an associate who will crunch data from various sources and surface the key insights from it. This associate will also help us improve and build new pipelines as requested, visualize the data where required, and find flaws in our existing algorithms.

Responsibilities: 

  • Work with multiple stakeholders to gather requirements for data or analysis and act on them.
  • Write new data pipelines and maintain the existing pipelines.
  • Gather data from various databases and derive the required metrics from them.

Required Skills: 

  • Experience with Python and libraries like pandas and NumPy.
  • Experience in SQL and an understanding of NoSQL databases.
  • Hands-on experience in Data engineering. 
  • Must have good analytical skills and knowledge of statistics. 
  • Understanding of Data Science concepts. 
  • Bachelor degree in Computer Science or related field. 
  • Problem-solving skills and ability to work under pressure. 

Nice to have: 

  • Experience in MongoDB or any other NoSQL database.
  • Experience in ElasticSearch. 
  • Knowledge of Tableau, Power BI or any other visualization tool.
Job posted by
Kiran Basavaraj Nirakari

GIS Analyst/GIS Data Analyst

at Energy Exemplar

Founded 1999  •  Product  •  100-500 employees  •  Profitable
GIS analysis
Geographic information system
GIS Data Management
GIS
Geospatial analysis
Spatial analysis
QGIS
ArcGIS
Machine vision
Python
Scalable Vector Graphics (SVG)
Pune
2 - 7 yrs
Best in industry

Energy Exemplar is the market leader in energy modelling and simulation products. Energy Exemplar customers across the globe are modelling their energy markets with confidence to make billion-dollar decisions.

 

Job Overview for GIS Data Analyst 

 

You should be a self-starter with high motivation and will be a core member of the Energy Data team. You will be responsible for developing the company’s competence in geographic energy data collection, maintenance, and delivery to customers through our extensive suite of SaaS products. Our products support companies engaged in power and natural gas markets on six continents. Our customers are industry leaders in supply, transportation, delivery, operations, and trading, and our products allow them to deliver their optimal solution. Our data are sourced from hundreds of websites in many languages, and your job is to ensure that we have the geographic reference to make sense of the data. You will use your creativity, pattern recognition skills, satellite and aerial imagery, and other public sources to provide geographic reference data across the energy industry. You will work with the Energy Data Analysts to teach them basic GIS research capabilities. Learnings from this role will build the foundation of skills necessary to advance to roles such as Energy Market Analyst or Data Scientist.

 

Education & Experience 

 

  • Master’s/Post Graduate Diploma degree in Geoinformatics/GIS, Geoinformatics & Natural Resources Engineering, Remote Sensing & GIS, or GIS/Spatial Data Science, with a Bachelor’s/Master’s in Energy/Energy Management/Electrical Engineering/Power Systems/Oil and Gas Management/Petroleum/Geophysics or another energy-industry-related degree preferred; OR a Master’s/Post Graduate Diploma/Bachelor’s degree in Geoinformatics/GIS, Geoinformatics & Natural Resources Engineering, Remote Sensing & GIS, GIS/Spatial Data Science, or any engineering stream
  • 2+ years of experience working with GIS systems/databases, GIS data management, GIS data handling and processing, or GIS data engineering

 

Essential/Necessary Skills 

 

  • Proficiency in handling GIS data with SQL, Python/R 
  • Exposure to mapping tools QGIS/ARCGIS/Visio/FME 
  • Expertise in spatial modelling and geospatial analytics (Raster/Vector tools) 
  • Proven experience working with geospatial data integration and testing  
  • Outstanding communication and collaboration skills. 
  • Strong drive for results. You have a proven record of shepherding experiments to create successful shipping products/services. 

 

About Energy Exemplar

Energy Exemplar is the global market leader in the technology of optimization-based energy market simulation. Our software suite, headlined by PLEXOS and Aurora, is used across every region of the world for a wide range of applications, from short-term analysis to long-term planning studies. Driven by the frenetic pace of advancements in computing technology and mathematical algorithms, our people continually think of novel approaches and more realistic simulations that enhance decision making, create market opportunities that benefit us all and enable utilities and regulatory authorities to become smarter, more energy efficient and profitable. Energy Exemplar continues to ‘push the envelope’, being first-to-market with the latest advances in mathematical programming and energy market simulation theory, as it strives to offer the most comprehensive simulation software to its customer base. Development continues to be headquartered in Adelaide, South Australia, led by Glenn Drayton and backed by a team with expertise in software development, operations research, economics, mathematics, statistics, and electrical engineering.

For more details, please visit http://www.energyexemplar.com

 

 

Job posted by
Payal Joshi

Data Analyst- Biglittle.ai

at Codejudge

Founded 2019  •  Product  •  20-100 employees  •  Raised funding
SQL
Python
Data architecture
Data mining
Data Analytics
Bengaluru (Bangalore)
3 - 7 yrs
₹20L - ₹25L / yr
Job description
  • The ideal candidate is adept at using large data sets to find opportunities for product and process optimization and using models to test the effectiveness of different courses of action.
  • Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.
  • Assess the effectiveness and accuracy of new data sources and data gathering techniques.
  • Develop custom data models and algorithms to apply to data sets.
  • Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting and other business outcomes.
  • Develop company A/B testing framework and test model quality.
  • Develop processes and tools to monitor and analyze model performance and data accuracy.

Roles & Responsibilities

  • Experience using statistical languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
  • Experience working with and creating data architectures.
  • Looking for someone with 3-7 years of experience manipulating data sets and building statistical models
  • Has a Bachelor's, Master's in Computer Science or another quantitative field
  • Knowledge and experience in statistical and data mining techniques: GLM/regression, random forest, boosting, trees, text mining, social network analysis, etc.
  • Experience querying databases and using statistical computer languages: R, Python, SQL, etc.
  • Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
  • Experience with distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, Gurobi, MySQL, etc.
  • Experience visualizing/presenting data for stakeholders using Periscope, Business Objects, D3, ggplot, etc.
Job posted by
Vaishnavi M

Data Scientist

at leading pharmacy provider

Agency job
via Econolytics
Data Science
R Programming
Python
Algorithms
Predictive modelling
Noida, NCR (Delhi | Gurgaon | Noida)
4 - 10 yrs
₹18L - ₹24L / yr
Job Description:

• Help build a Data Science team engaged in researching, designing, implementing, and deploying full-stack scalable data analytics and machine learning solutions to challenge various business issues.
• Model complex algorithms, discover insights, and identify business opportunities through the use of algorithmic, statistical, visualization, and mining techniques.
• Translate business requirements into quick prototypes and enable the development of big data capabilities driving business outcomes.
• Responsible for data governance and defining data collection and collation guidelines.
• Must be able to advise, guide, and train junior data engineers in their job.

Must Have:

• 4+ years of experience in a leadership role as a Data Scientist
• Preferably from the retail, manufacturing, or healthcare industry (not mandatory)
• Willing to start from scratch and build up a team of Data Scientists
• Open to taking up challenges with end-to-end ownership
• Confident, with excellent communication skills and good decision-making
Job posted by
Jyotsna Econolytics

Sr. Data Engineer ( a Fintech product company )

at Velocity.in

Founded 2019  •  Product  •  20-100 employees  •  Raised funding
Data engineering
Data Engineer
Big Data
Big Data Engineer
Python
Data Visualization
Data Warehouse (DWH)
Google Cloud Platform (GCP)
Data-flow analysis
Amazon Web Services (AWS)
PL/SQL
NOSQL Databases
PostgreSQL
ETL
data pipelining
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹35L / yr

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of records for the data warehouse.

  • Implement Data warehouse entities with common re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)

 

What To Bring

  • 3+ years of software development experience, a startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 2+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced with the formulation of ideas; building proof-of-concept (POC) and converting them to production-ready projects

  • Experience building and deploying applications on on-premise and cloud-based (AWS or Google Cloud) infrastructure

  • Basic understanding of Kubernetes & docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

 

 

Job posted by
chinnapareddy S

Senior Data Engineer

at SpringML

Founded 2015  •  Services  •  100-1000 employees  •  Profitable
Big Data
Data engineering
TensorFlow
Apache Spark
Java
Python
Google Cloud Platform (GCP)
Remote, Hyderabad
4 - 9 yrs
₹12L - ₹20L / yr
REQUIRED SKILLS:

• Total of 4+ years of experience in development, architecting/designing, and implementing software solutions for enterprises.

• Must have strong programming experience in either Python or Java/J2EE.

• Minimum of 4+ years’ experience working with various cloud platforms, preferably Google Cloud Platform.

• Experience in Architecting and Designing solutions leveraging Google Cloud products such as Cloud BigQuery, Cloud DataFlow, Cloud Pub/Sub, Cloud BigTable and Tensorflow will be highly preferred.

• Presentation skills with a high degree of comfort speaking with management and developers

• The ability to work in a fast-paced, work environment

• Excellent communication, listening, and influencing skills

RESPONSIBILITIES:

• Lead teams to implement and deliver software solutions for Enterprises by understanding their requirements.

• Communicate efficiently and document the Architectural/Design decisions to customer stakeholders/subject matter experts.

• Learn new products quickly, rapidly comprehend new technical and functional areas, and apply detailed and critical thinking to customer solutions.

• Implementing and optimizing cloud solutions for customers.

• Migration of Workloads from on-prem/other public clouds to Google Cloud Platform.

• Provide solutions to team members for complex scenarios.

• Promote good design and programming practices with various teams and subject matter experts.

• Ability to work on any product on the Google cloud platform.

• Must be hands-on and be able to write code as required.

• Ability to lead junior engineers and conduct code reviews



QUALIFICATION:

• Minimum B.Tech/B.E Engineering graduate
Job posted by
Sai Raj Sampath

Data Analyst

at Wheelseye Technology India Pvt Ltd.

Founded 2017  •  Product  •  100-500 employees  •  Raised funding
Python
SQL
Microsoft Excel
MS-Excel
NCR (Delhi | Gurgaon | Noida)
1 - 5 yrs
₹8L - ₹16L / yr

About WheelsEye :
Logistics in India is a complex business - layered with multiple stakeholders, unorganized, primarily offline, and with many trivial yet deep-rooted problems. Though this industry contributes 14% to the GDP, its problems have gone unattended and ignored, until now.

WheelsEye is a logistics company, building a digital infrastructure around fleet owners. Currently, we offer solutions to empower truck fleet owners. Our proprietary software & hardware solutions help automate operations, secure fleet, save costs, improve on-time performance, and streamline their business.

 

Why WheelsEye?

  • Work on a real Indian problem of scale and impact the lives of 5.5 crore fleet owners, drivers, and their families in a meaningful way
  • Different from current market players: heavily focused on and built around truck owners
  • Problem-solving and learning-oriented organization
  • Audacious goals, high speed, and action orientation
  • Opportunity to scale the organization across the country
  • Opportunity to build and execute the culture
  • Contribute to and become a part of the action plan for building the tech, finance, and service infrastructure for the logistics industry. It's tough!

Requirements:

  • Bachelor’s degree with an additional 2-5 years of experience in the analytics domain
  • Experience in articulating and translating business questions and using statistical techniques to arrive at an answer using available data
  • Proficiency with a scripting and/or programming language, e.g. Python, R (optional), advanced SQL; advanced knowledge of data processing, database programming, and data analytics tools and techniques
  • Extensive background in data mining, modelling, and statistical analysis; able to understand various data structures and common methods in data transformation, e.g. linear and logistic regression, clustering, decision trees, etc.
  • Working knowledge of tools like Mixpanel, Metabase, Google Sheets, Google BigQuery, and Data Studio is preferred
  • Ability to self-start and do self-directed work in a fast-paced environment

If you are willing to work on solving real world problems for truck owners, Join us!
Job posted by
Rupali Goel