Senior Data Scientist
Posted by Piyush Palkar
1 - 6 yrs
₹10L - ₹30L / yr
Remote, Kuala Lumpur
Skills
Data Science
Machine Learning (ML)
Python
SQL
Problem solving
Analytical Skills
Tableau
Algorithms
Amazon Web Services (AWS)

Carsome’s Data Department is on the lookout for a Data Scientist/Senior Data Scientist who has a strong passion for building data-powered products.

The Data Science function under the Data Department is responsible for standardising methods (including code libraries and documentation), mentoring data science team members and interns, quality assurance of outputs, and modeling techniques and statistics, leveraging a variety of technologies, open-source languages, and cloud computing platforms.

You will get to lead and implement projects such as price optimization/prediction, enabling iconic personalization experiences for our customers, and inventory optimization.

 

Job Description

  • Identify and integrate datasets that can be leveraged through our product, and work closely with the data engineering team to develop data products.
  • Execute analytical experiments methodically to help solve problems and make a true impact across functions such as operations, finance, logistics, and marketing.
  • Identify, prioritize, and design testing opportunities that will inform algorithm enhancements.
  • Devise and utilize algorithms and models to mine big data stores; perform data and error analysis to improve models; clean and validate data for uniformity and accuracy.
  • Unlock insights by analyzing large amounts of complex website traffic and transactional data.
  • Implement analytical models in production by collaborating with data analytics engineers.

 

Technical Requirements

  • Expertise in model design, training, evaluation, and implementation.
  • ML algorithm expertise: k-nearest neighbors, random forests, naive Bayes, regression models, gradient boosting, and t-SNE.
  • Deep learning expertise with PyTorch, TensorFlow, and Keras.
  • Proficiency in Python, PySpark, SQL, and R.
  • Experience with AWS SageMaker / Amazon Personalize or similar services.
  • Machine Learning / Data Science certification.
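The classical algorithms named above can be illustrated with a minimal from-scratch sketch. A real project would typically use scikit-learn; the toy points and labels below are invented purely for illustration:

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Euclidean distance from the query to every labelled training point.
    dists = sorted(
        (math.dist(x, query), label) for x, label in zip(train_X, train_y)
    )
    # Majority vote over the k closest labels.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D data: two well-separated clusters.
X = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
y = ["low", "low", "low", "high", "high", "high"]
print(knn_predict(X, y, (0.15, 0.1)))  # prints "low"
```

The same fit/predict/evaluate shape carries over to the library implementations (e.g., `KNeighborsClassifier` in scikit-learn).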

 

Experience & Education

  • Bachelor’s in Engineering, Master’s in Data Science, or a Postgraduate Certificate in Data Science.
Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Subodh Popalwar

Software Engineer, Memorres
For 2 years, I had trouble finding a company with good work culture and a role that will help me grow in my career. Soon after I started using Cutshort, I had access to information about the work culture, compensation and what each company was clearly offering.

About Carsome

Founded: 2015
Stage: Raised funding
About
Carsome is Southeast Asia’s largest integrated car e-commerce platform. With a presence across Malaysia, Indonesia, Thailand, and Singapore, it aims to digitalize the region’s used-car industry by reshaping and elevating the car buying and selling experience with complete peace of mind.
Connect with the team
Amit Warrier
Piyush Palkar
Emre Senyuva
Company social profiles
Instagram · LinkedIn · Facebook

Similar jobs

DeepIntent
Posted by Indrajeet Deshmukh
Pune
4 - 8 yrs
Best in industry
Data Warehouse (DWH)
Informatica
ETL
SQL
Google Cloud Platform (GCP)
+3 more

Who We Are:

DeepIntent is leading the healthcare advertising industry with data-driven solutions built for the future. From day one, our mission has been to improve patient outcomes through the artful use of advertising, data science, and real-world clinical data.

What You’ll Do:

We are looking for a Senior Software Engineer based in Pune, India who can master both DeepIntent’s data architectures and pharma research and analytics methodologies to make significant contributions to how health media is analyzed by our clients. This role requires an Engineer who not only understands DBA functions but also how they impact research objectives and can work with researchers and data scientists to achieve impactful results.  

This role will be in the Analytics Organization and will require integration and partnership with the Engineering Organization. The ideal candidate is an inquisitive self-starter who is not afraid to take on and learn from challenges and will constantly seek to improve the facets of the business they manage. The ideal candidate will also need to demonstrate the ability to collaborate and partner with others.

  • Serve as the Engineering interface between Analytics and Engineering teams
  • Develop and standardize all interface points for analysts to retrieve and analyze data, with a focus on research methodologies and data-based decisioning
  • Optimize queries and data access efficiencies, serve as expert in how to most efficiently attain desired data points
  • Build “mastered” versions of the data for Analytics specific querying use cases
  • Help with data ETL, table performance optimization
  • Establish formal data practice for the Analytics practice in conjunction with rest of DeepIntent
  • Build & operate scalable and robust data architectures
  • Interpret analytics methodology requirements and apply to data architecture to create standardized queries and operations for use by analytics teams
  • Implement DataOps practices
  • Master existing and new Data Pipelines and develop appropriate queries to meet analytics specific objectives
  • Collaborate with various business stakeholders, software engineers, machine learning engineers, analysts
  • Operate between Engineers and Analysts to unify both practices for analytics insight creation
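The query-optimization responsibility above can be sketched with SQLite from the Python standard library; the `events` table, its columns, and the index name are hypothetical stand-ins for a real warehouse table:

```python
import sqlite3

# Build a small table, then show how adding an index changes the query plan
# from a full scan to an index search.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, ts TEXT, action TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i % 100, f"2024-01-{i % 28 + 1:02d}", "click") for i in range(1000)],
)

# Without an index, a lookup by user_id scans the whole table.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM events WHERE user_id = 7"
).fetchone()[-1]

conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM events WHERE user_id = 7"
).fetchone()[-1]

print(plan_before)  # e.g. a SCAN over events
print(plan_after)   # e.g. a SEARCH using the new index
```

The same scan-versus-index-seek reasoning applies to BigQuery partitioning/clustering and to conventional RDBMS indexes.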

Who You Are:

  • Adept in market research methodologies and using data to deliver representative insights
  • Inquisitive and curious; understands how to query complicated data sets and move and combine data between databases
  • Deep SQL experience is a must
  • Exceptional communication skills, with the ability to collaborate and translate between technical and non-technical stakeholders
  • English Language Fluency and proven success working with teams in the U.S.
  • Experience in designing, developing, and operating configurable data pipelines serving high-volume, high-velocity data
  • Experience working with public clouds like GCP/AWS
  • Good understanding of software engineering, DataOps, and data architecture, Agile and DevOps methodologies
  • Experience building Data architectures that optimize performance and cost, whether the components are prepackaged or homegrown
  • Proficient with SQL, Python or a JVM-based language, and Bash
  • Experience with Apache open-source projects such as Spark, Druid, Beam, Airflow, etc., and big data databases like BigQuery, ClickHouse, etc.
  • Ability to think big, take bets and innovate, dive deep, hire and develop the best talent, learn and be curious
  • Comfortable working in the EST time zone


Remote only
3 - 7 yrs
₹15L - ₹24L / yr
Data Science
Machine Learning (ML)
Artificial Intelligence (AI)
Natural Language Processing (NLP)
R Programming
+4 more

  Senior Data Scientist

  • 6+ years of experience building data pipelines and deployment pipelines for machine learning models
  • 4+ years’ experience with ML/AI toolkits such as TensorFlow, Keras, AWS SageMaker, MXNet, H2O, etc.
  • 4+ years’ experience developing ML/AI models in Python/R
  • Leadership skills to lead and deliver projects: be proactive, take ownership, interface with the business, represent the team, and spread knowledge
  • Strong knowledge of statistical data analysis and machine learning techniques (e.g., Bayesian, regression, classification, clustering, time series, deep learning)
  • Able to help deploy various models and tune them for better performance
  • Working knowledge of operationalizing models in production using model repositories, APIs, and data pipelines
  • Experience with machine learning and computational statistics packages
  • Experience with Databricks and Data Lake
  • Experience with Dremio, Tableau, Power BI
  • Experience working with Spark ML and Spark DL with PySpark is a big plus
  • Working knowledge of relational database systems like SQL Server and Oracle
  • Knowledge of deploying models on platforms like PCF, AWS, and Kubernetes
  • Good knowledge of continuous integration suites like Jenkins
  • Good knowledge of web servers (Apache, NGINX)
  • Good knowledge of Git, GitHub, Bitbucket
  • Java, R, and Python programming experience
  • Very familiar with MS SQL, Teradata, Oracle, DB2
  • Big Data: Hadoop
  • Expert knowledge of BI tools, e.g., Tableau
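The "operationalizing models using model repositories" item can be sketched as a toy versioned artifact store. The `ModelRepository` class and the churn-model parameters are invented for illustration; real teams would typically use MLflow's Model Registry or SageMaker Model Registry:

```python
import json
import tempfile
from pathlib import Path

class ModelRepository:
    """Toy model repository: stores versioned model artifacts as JSON files."""

    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def publish(self, name, params):
        # Next version number = count of existing versions + 1.
        version = len(list(self.root.glob(f"{name}-v*.json"))) + 1
        path = self.root / f"{name}-v{version}.json"
        path.write_text(json.dumps(params))
        return version

    def load(self, name, version):
        return json.loads((self.root / f"{name}-v{version}.json").read_text())

repo = ModelRepository(tempfile.mkdtemp())
v = repo.publish("churn-model", {"coef": [0.4, -1.2], "intercept": 0.1})
print(v, repo.load("churn-model", v))
```

A serving API or batch pipeline would then load a pinned version by name rather than pickling models ad hoc.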

 

Gurugram, Bengaluru (Bangalore), Chennai
2 - 9 yrs
₹9L - ₹27L / yr
DevOps
Microsoft Windows Azure
GitLab
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+15 more
Greetings!!

We are looking for a technically driven "MLOps Engineer" for one of our premium clients.

COMPANY DESCRIPTION:
This Company is a global management consulting firm. We are the trusted advisor to the world's leading businesses, governments, and institutions. We work with leading organizations across the private, public and social sectors. Our scale, scope, and knowledge allow us to address


Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), and orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., for experiment tracking, model governance, packaging, deployment, or a feature store)
• Practical knowledge of delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or better preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
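The automated-testing skill above (pytest-style testing) can be sketched as follows; `normalize_tag` is a hypothetical helper invented purely to have something to test:

```python
# A function under test and pytest-style tests for it. Run with
# `pytest thisfile.py`; the assertions also run fine as plain Python.

def normalize_tag(tag: str) -> str:
    """Lowercase a Docker-style image tag and strip surrounding whitespace."""
    return tag.strip().lower()

def test_normalize_strips_whitespace():
    assert normalize_tag("  myapp:Latest ") == "myapp:latest"

def test_normalize_is_idempotent():
    assert normalize_tag(normalize_tag("App:V2")) == "app:v2"

# Plain execution for environments without pytest installed.
test_normalize_strips_whitespace()
test_normalize_is_idempotent()
print("ok")
```

In a CI/CD pipeline, a stage would simply invoke `pytest` and fail the build on any assertion error.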
Top IT MNC
Chennai, Bengaluru (Bangalore), Kochi (Cochin), Coimbatore, Hyderabad, Pune, Kolkata, Noida, Gurugram, Mumbai
5 - 13 yrs
₹8L - ₹20L / yr
Snowflake schema
Python
Snowflake
Greetings,

We are looking for a Snowflake developer for one of our premium clients for their pan-India locations.
RandomTrees
Posted by Amareswarreddt yaddula
Hyderabad
5 - 16 yrs
₹1L - ₹30L / yr
ETL
Informatica
Data Warehouse (DWH)
Amazon Web Services (AWS)
SQL
+3 more

We are #hiring an AWS Data Engineer expert to join our team.


Job Title: AWS Data Engineer

Experience: 5 to 10 yrs

Location: Remote

Notice: Immediate or Max 20 Days

Role: Permanent Role


Skillset: AWS, ETL, SQL, Python, Pyspark, Postgres DB, Dremio.


Job Description:

Able to develop ETL jobs.

Able to help with data curation/cleanup, data transformation, and building ETL pipelines.

Strong Postgres DB experience; knowledge of Dremio as a data visualization/semantic layer between the DB and the application is a plus.

SQL, Python, and PySpark are a must.

Communication should be good.





Tredence
Sharon Joseph
Posted by Sharon Joseph
Bengaluru (Bangalore), Gurugram, Chennai, Pune
7 - 10 yrs
Best in industry
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Python
+1 more

Job Summary

As a Data Science Lead, you will manage multiple consulting projects of varying complexity and ensure on-time and on-budget delivery for clients. You will lead a team of data scientists and collaborate across cross-functional groups, while contributing to new business development, supporting strategic business decisions, and maintaining and strengthening the client base.

  1. Work with the team to define business requirements, come up with an analytical solution, and deliver it with a specific focus on the big picture to drive robustness of the solution.
  2. Work with teams of smart collaborators, and be responsible for their appraisals and career development.
  3. Participate in and lead executive presentations with client leadership stakeholders.
  4. Be part of an inclusive and open environment, a culture where making mistakes and learning from them is part of life.
  5. See how your work contributes to building an organization, and drive org-level initiatives that will challenge and grow your capabilities.

Role & Responsibilities

  1. Serve as an expert in Data Science; build a framework to develop production-level DS/AI models.
  2. Apply AI research and ML models to accelerate business innovation and solve impactful business problems for our clients.
  3. Lead multiple teams across clients, ensuring quality and timely outcomes on all projects.
  4. Lead and manage the onsite-offshore relationship while adding value to the client.
  5. Partner with business and technical stakeholders to translate challenging business problems into state-of-the-art data science solutions.
  6. Build a winning team focused on client success. Help team members build lasting careers in data science and create a constant learning and development environment.
  7. Present results, insights, and recommendations to senior management with an emphasis on business impact.
  8. Build engaging rapport with client leadership through relevant conversations and genuine business recommendations that impact the growth and profitability of the organization.
  9. Lead or contribute to org-level initiatives to build the Tredence of tomorrow.

 

Qualification & Experience

  1. Bachelor's/Master's/PhD degree in a quantitative field (CS, Machine Learning, Mathematics, Statistics, Data Science) or equivalent experience.
  2. 6-10+ years of experience in data science, building hands-on ML models.
  3. Expertise in ML: regression, classification, clustering, time series modeling, graph networks, recommender systems, Bayesian modeling, deep learning, computer vision, NLP/NLU, reinforcement learning, federated learning, meta learning.
  4. Proficient in some or all of the following techniques: linear & logistic regression, decision trees, random forests, k-nearest neighbors, support vector machines, ANOVA, principal component analysis, gradient boosted trees, ANN, CNN, RNN, Transformers.
  5. Knowledge of the programming languages SQL, Python/R, Spark.
  6. Expertise in ML frameworks and libraries (TensorFlow, Keras, PyTorch).
  7. Experience with cloud computing services (AWS, GCP, or Azure).
  8. Expert in statistical modelling & algorithms, e.g., hypothesis testing, sample size estimation, A/B testing.
  9. Knowledge of mathematical programming (linear programming, mixed integer programming, etc.) and stochastic modelling (Markov chains, Monte Carlo, stochastic simulation, queuing models).
  10. Experience with optimization solvers (Gurobi, CPLEX) and algebraic modeling languages (PuLP).
  11. Knowledge of GPU code optimization and Spark MLlib optimization.
  12. Familiarity with deploying and monitoring ML models in production and delivering data products to end users.
  13. Experience with ML CI/CD pipelines.
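The A/B-testing item above can be sketched as a standard two-proportion z-test using only the standard library; the conversion counts below are invented for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment; returns (z, two-sided p)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Variant B converts 2.6% vs 2.0% for A, 10k users each (made-up numbers).
z, p = two_proportion_z(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(round(z, 2), round(p, 4))
```

Sample-size estimation (item 8) inverts the same formula: fix the minimum detectable effect and solve for `n` at the desired power.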
Hyderabad
4 - 8 yrs
₹6L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more
  1. Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight
  2. Experience in developing Lambda functions with AWS Lambda
  3. Expertise with Spark/PySpark: candidates should be hands-on with PySpark code and able to do transformations with Spark
  4. Should be able to code in Python and Scala
  5. Snowflake experience will be a plus
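Point 2 (Lambda functions) can be sketched as a plain Python handler invoked locally with a fake event. The bucket name and payload below are invented; the nested `Records`/`s3` structure follows the shape of standard S3 notification events:

```python
# A minimal AWS Lambda handler sketch for an S3-triggered pipeline step.

def handler(event, context):
    """Count incoming S3 object records per bucket (illustrative only)."""
    counts = {}
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        counts[bucket] = counts.get(bucket, 0) + 1
    return {"statusCode": 200, "body": counts}

# Local invocation with a fake S3 event payload (no AWS account needed).
fake_event = {"Records": [{"s3": {"bucket": {"name": "raw-zone"}}},
                          {"s3": {"bucket": {"name": "raw-zone"}}}]}
print(handler(fake_event, None))
```

Because a Lambda handler is just a function of `(event, context)`, this kind of local unit test runs without any AWS infrastructure.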

 

Remote, Bengaluru (Bangalore), Hyderabad
0 - 1 yrs
₹2.5L - ₹4L / yr
SQL
Data engineering
Big Data
Python
● Hands-on work experience as a Python Developer
● Hands-on work experience in SQL/PLSQL
● Expertise in at least one popular Python framework (like Django, Flask, or Pyramid)
● Knowledge of object-relational mapping (ORM)
● Familiarity with front-end technologies (like JavaScript and HTML5)
● Willingness to learn and upgrade to big data and cloud technologies like PySpark, Azure, etc.
● Team spirit
● Good problem-solving skills
● Ability to write effective, scalable code
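The ORM item above can be sketched with a tiny hand-rolled mapping over the stdlib SQLite driver. Real projects would use Django's ORM or SQLAlchemy; the `User` table here is invented for illustration:

```python
import sqlite3
from dataclasses import dataclass, astuple

# A tiny hand-rolled object-relational mapping sketch: Python objects in,
# table rows out, and back again.

@dataclass
class User:
    id: int
    name: str

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

def save(user: User) -> None:
    # Map the object's fields onto the table's columns.
    conn.execute("INSERT INTO users VALUES (?, ?)", astuple(user))

def get(user_id: int) -> User:
    # Map a row back into an object.
    row = conn.execute(
        "SELECT id, name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return User(*row)

save(User(1, "Asha"))
print(get(1))
```

A full ORM adds the same mapping generically (schema generation, relations, query building), but the object-row correspondence is the core idea.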
Chennai, Bengaluru (Bangalore)
3 - 8 yrs
₹5L - ₹12L / yr
Tableau
SQL
PL/SQL
Responsibilities


In this role, candidates will be responsible for developing Tableau reports. They should be able to write effective and scalable code and improve the functionality of existing reports/systems.

·       Design stable, scalable code.

·       Identify potential improvements to the current design/processes.

·       Participate in multiple project discussions as a senior member of the team.

·       Serve as a coach/mentor for junior developers.


Minimum Qualifications

·       3 - 8 Years of experience

·       Excellent written and verbal communication skills

 

Must have skills

·       Meaningful work experience

·       Extensive experience with the BI reporting tool Tableau, developing reports to fulfill end-user requirements.

·       Experienced in interacting with business users to analyze business processes and requirements and translate requirements into visualizations and reports.

·       Knowledge of selecting appropriate data visualization strategies (e.g., chart types) for specific use cases. Ability to showcase complete dashboard implementations that demonstrate visual best practices (e.g., color themes, visualization layout, interactivity, drill-down capabilities, filtering).

·       Should be an independent player with experience working with senior leaders.

·       Able to explore options and suggest new solutions and visualization techniques to the customer.

·       Experience crafting joins and blending data from different data sources with custom SQL using Tableau Desktop.

·       Experience using sophisticated calculations in Tableau Desktop (Aggregate, Date, Logical, String, Table, LOD expressions).

·       Working with relational data sources (like Oracle / SQL Server / DB2) and flat files.

·       Optimizing user queries and dashboard performance.

·       Knowledge in SQL, PL/SQL.

·       Knowledge of crafting DB views and materialized views.

·       Excellent verbal and written communication skills and interpersonal skills are required.

·       Excellent documentation and presentation skills; should be able to build business process mapping documents and functional solution documents, and own the acceptance/signoff process end to end.

·       Ability to make the right graph choices, use the data blending feature, and connect to several DB technologies.

·       Must stay up to date on new and upcoming visualization technologies.

 

Preferred location: Chennai (priority) / Bengaluru

Monexo Fintech
Posted by Mukesh Bubna
Mumbai, Chennai
1 - 3 yrs
₹3L - ₹5L / yr
Data Science
Python
R Programming
The candidate should have:
  • A good understanding of statistical concepts
  • Worked on data analysis and model building for 1 year
  • The ability to implement data warehouse and visualisation tools (IBM, Amazon, or Tableau)
  • Experience using ETL tools
  • An understanding of scoring models

The candidate will be required to:
  • Build models for the approval or rejection of loans
  • Build various reports (standardised for monthly reporting) to optimise the business
  • Implement a data warehouse

The candidate should be a self-starter and able to work without supervision. You will be the first and only employee in this role for the next 6 months.
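The scoring-model requirement can be sketched as a toy rule-based score. The weights, inputs, and threshold below are hypothetical illustrations, not an actual credit model:

```python
# Toy loan scoring sketch: combine an affordability signal and a credit
# history signal into a 0-100 score, then apply an approval threshold.

def loan_score(monthly_income, existing_emi, credit_history_years):
    """Return a 0-100 score; higher means safer to approve."""
    # Share of income left after existing EMIs (clipped to [0, 1]).
    affordability = max(0.0, 1.0 - existing_emi / max(monthly_income, 1))
    # Credit history saturates at 10 years.
    history = min(credit_history_years / 10.0, 1.0)
    return round(100 * (0.7 * affordability + 0.3 * history), 1)

def decide(score, threshold=60.0):
    return "approve" if score >= threshold else "reject"

s = loan_score(monthly_income=80_000, existing_emi=10_000, credit_history_years=5)
print(s, decide(s))
```

A statistical version would fit these weights from historical repayment data (e.g., logistic regression) instead of hand-picking them.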