Data Scientist
Posted by Ankit Jain
1 - 4 yrs
₹9L - ₹15L / yr
Bengaluru (Bangalore)
Skills
Data Science
Python
R Programming
Supply Chain Management (SCM)
WyngCommerce is building state-of-the-art AI software for global consumer brands & retailers to enable best-in-class customer experiences. Our vision is to democratize machine learning algorithms for our customers and help them realize dramatic improvements in speed, cost, and flexibility. Backed by a clutch of prominent angel investors and with some of the category leaders in the retail industry as clients, we are looking to hire for our data science team. The data science team at WyngCommerce is on a mission to challenge the norms and re-imagine how retail businesses should be run across the world. As a Junior Data Scientist in the team, you will be driving and owning the thought leadership and impact on one of our core data science problems. You will work collaboratively with the founders, clients, and engineering team to formulate complex problems, run exploratory data analysis, test hypotheses, implement ML-based solutions, and fine-tune them with more data. This is a high-impact role with goals that directly impact our business.
Your Role & Responsibilities:
- Implement data-driven solutions based on advanced ML and optimization algorithms to address business problems
- Research, experiment, and innovate ML/statistical approaches in various application areas of interest and contribute to IP
- Partner with engineering teams to build scalable, efficient, automated ML-based pipelines (training/evaluation/monitoring)
- Deploy, maintain, and debug ML/decision models in production environments
- Analyze and assess data to ensure high data quality and correctness of downstream processes
- Communicate results to stakeholders and present data/insights to participate in and drive decision making

Desired Skills & Experience:
- Bachelor's or Master's in a quantitative field from a top-tier college
- 1-2 years of experience in a data science / analytics role in a technology / analytics company
- Solid mathematical background (especially in linear algebra & probability theory)
- Familiarity with theoretical aspects of common ML techniques (generalized linear models, ensembles, SVMs, clustering algorithms, graphical models, etc.), statistical tests/metrics, experiment design, and evaluation methodologies
- Demonstrable track record of dealing with ambiguity, prioritizing needs, a bias for iterative learning, and delivering results in a dynamic environment with minimal guidance
- Hands-on experience in at least one of the following: (a) Anomaly Detection, (b) Time Series Analysis, (c) Product Clustering, (d) Demand Forecasting, (e) Intertemporal Optimization
- Good programming skills (fluent in Java/Python/SQL) with experience using common ML toolkits (e.g., scikit-learn, TensorFlow, Keras, NLTK) to build models for real-world problems
- Computational thinking and familiarity with practical application requirements (e.g., latency, memory, processing time)
- Excellent written and verbal communication skills for both technical and non-technical audiences
- (Plus point) Experience applying ML and other techniques in the supply chain domain - particularly in retail - for inventory optimization, demand forecasting, assortment planning, and similar problems
- (Nice to have) Research experience and publications in top ML/data science conferences
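As a flavour of the demand-forecasting work named above, here is a minimal single-exponential-smoothing sketch in plain Python. It is purely illustrative: the smoothing factor and the idea of forecasting from a weekly units series are assumptions for the example, not WyngCommerce's actual models.

```python
# Minimal single-exponential-smoothing demand forecast (illustrative only;
# a production system would use richer models with seasonality and covariates).

def exponential_smoothing(demand, alpha=0.5):
    """Return the smoothed level after each observation.

    Each value serves as the forecast for the next period.
    alpha is the smoothing factor in (0, 1]; higher weights recent demand more.
    """
    if not demand:
        return []
    level = demand[0]  # initialize the level with the first observation
    forecasts = [level]
    for observed in demand[1:]:
        level = alpha * observed + (1 - alpha) * level
        forecasts.append(level)
    return forecasts
```

For example, `exponential_smoothing([100, 200], alpha=0.5)` yields `[100, 150.0]`: the level moves halfway toward the latest observation.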
Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Subodh Popalwar
Software Engineer, Memorres
For 2 years, I had trouble finding a company with good work culture and a role that will help me grow in my career. Soon after I started using Cutshort, I had access to information about the work culture, compensation and what each company was clearly offering.

About WyngCommerce

Founded: 2017
Stage: Raised funding
About
WyngCommerce builds AI for increasing full-price sell-through and profits, optimising assortment and inventory placement via first allocation, dynamic replenishments, inter-store transfers, and OTB recommendations.
Connect with the team
Sharad Lahoti
Ankit Jain
Anurag Bhatt

Similar jobs

Startup Focused on simplifying Buying Intent
Bengaluru (Bangalore)
4 - 9 yrs
₹28L - ₹56L / yr
Big Data
Apache Spark
Spark
Hadoop
ETL
- 5+ years of experience in a Data Engineer role.
- Proficiency in Linux.
- Must have SQL knowledge and experience working with relational databases and query authoring (SQL), as well as familiarity with databases including MySQL, Mongo, Cassandra, and Athena.
- Must have experience with Python/Scala.
- Must have experience with Big Data technologies like Apache Spark.
- Must have experience with Apache Airflow.
- Experience with data pipeline and ETL tools like AWS Glue.
- Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
one-to-one, one-to-many, and many-to-many
Chennai
5 - 10 yrs
₹1L - ₹15L / yr
AWS CloudFormation
Python
PySpark
AWS Lambda

5-7 years of experience in Data Engineering with solid experience in design, development and implementation of end-to-end data ingestion and data processing system in AWS platform.

2-3 years of experience in AWS Glue, Lambda, AppFlow, EventBridge, Python, PySpark, Lake House, S3, Redshift, Postgres, API Gateway, CloudFormation, Kinesis, Athena, KMS, and IAM.

Experience in modern data architecture, Lake House, Enterprise Data Lake, Data Warehouse, API interfaces, solution patterns, standards and optimizing data ingestion.

Experience building data pipelines from source systems like SAP Concur, Veeva Vault, Azure Cost, various social media platforms, or similar source systems.

Expertise in analyzing source data and designing a robust and scalable data ingestion framework and pipelines adhering to client Enterprise Data Architecture guidelines.

Proficient in design and development of solutions for real-time (or near real time) stream data processing as well as batch processing on the AWS platform.

Work closely with business analysts, data architects, data engineers, and data analysts to ensure that the data ingestion solutions meet the needs of the business.

Troubleshoot and provide support for issues related to data quality and data ingestion solutions. This may involve debugging data pipeline processes, optimizing queries, or troubleshooting application performance issues.

Experience working with Agile/Scrum methodologies, CI/CD tools and practices, coding standards, code reviews, source management (GitHub), JIRA, JIRA Xray, and Confluence.

Experience or exposure to design and development using Full Stack tools.

Strong analytical and problem-solving skills, excellent communication (written and oral), and interpersonal skills.

Bachelor's or Master's degree in computer science or a related field.
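As an illustration of the data-quality and troubleshooting responsibilities above, here is a small row-validation sketch in plain Python. The required fields and their types are hypothetical stand-ins, not from any client specification; a real pipeline would draw its schema from the Enterprise Data Architecture guidelines mentioned earlier.

```python
# Illustrative row-level data-quality checks for an ingestion pipeline.
# The schema below (field names and types) is hypothetical.

REQUIRED_FIELDS = {"id": int, "amount": float, "currency": str}

def validate_row(row):
    """Return a list of quality issues found in one ingested record."""
    issues = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in row or row[field] is None:
            issues.append(f"missing field: {field}")
        elif not isinstance(row[field], expected_type):
            issues.append(f"bad type for {field}: {type(row[field]).__name__}")
    return issues

def partition_rows(rows):
    """Split rows into (clean, rejected) so bad records can be quarantined.

    Each entry is a (row, issues) pair; clean rows carry an empty issue list.
    """
    clean, rejected = [], []
    for row in rows:
        problems = validate_row(row)
        (rejected if problems else clean).append((row, problems))
    return clean, rejected
```

Quarantining rejected rows with their issue list, rather than dropping them silently, is what makes downstream debugging of data-quality incidents tractable.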

Bengaluru (Bangalore)
1 - 8 yrs
₹8L - ₹14L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
In this role, you will be part of a growing, global team of data engineers who collaborate in DevOps mode to enable Merck's business with state-of-the-art technology to leverage data as an asset and make better-informed decisions.

The Merck Data Engineering Team is responsible for designing, developing, testing, and supporting automated end-to-end data pipelines and applications on Merck’s data management and global analytics platform (Palantir Foundry, Hadoop, AWS and other components).

The Foundry platform comprises multiple different technology stacks, which are hosted on Amazon Web Services (AWS) infrastructure or on-premises in Merck's own data centers. Developing pipelines and applications on Foundry requires:

• Proficiency in SQL / Java / Python (Python required; all 3 not necessary)
• Proficiency in PySpark for distributed computation
• Familiarity with Postgres and ElasticSearch
• Familiarity with HTML, CSS, and JavaScript and basic design/visual competency
• Familiarity with common databases and access technologies (e.g. JDBC, MySQL, Microsoft SQL Server); not all types required

This position will be project based and may work across multiple smaller projects or a single large project utilizing an agile project methodology.

Roles & Responsibilities:
• Develop data pipelines by ingesting various data sources – structured and unstructured – into Palantir Foundry
• Participate in the end-to-end project lifecycle, from requirements analysis to go-live and operations of an application
• Act as a business analyst, developing requirements for Foundry pipelines
• Review code developed by other data engineers and check against platform-specific standards, cross-cutting concerns, coding and configuration standards and functional specification of the pipeline
• Document technical work in a professional and transparent way. Create high quality technical documentation
• Work out the best possible balance between technical feasibility and business requirements (the latter can be quite strict)
• Deploy applications on Foundry platform infrastructure with clearly defined checks
• Implement changes and bug fixes via Merck's change management framework and according to system engineering practices (additional training will be provided)
• Set up DevOps projects following Agile principles (e.g. Scrum)
• Besides working on projects, act as third-level support for critical applications; analyze and resolve complex incidents/problems. Debug problems across the full Foundry stack and code based on Python, PySpark, and Java
• Work closely with business users, data scientists/analysts to design physical data models
Deep-Rooted.co (formerly Clover)
Likhithaa D
Posted by Likhithaa D
Bengaluru (Bangalore)
3 - 6 yrs
₹12L - ₹15L / yr
Java
Python
SQL
AWS Lambda
HTTP

Deep-Rooted.Co is on a mission to get fresh, clean, community (local farmer) produce from harvest to your home, with a promise of quality first! Our values are rooted in trust, convenience, and dependability, with a bunch of learning & fun thrown in.


Founded out of Bangalore by Arvind, Avinash, Guru, and Santosh, with the support of our investors Accel, Omnivore, and Mayfield, we have raised $7.5 million to date across Seed, Series A, and debt funding. Our brand, Deep-Rooted.Co, launched in August 2020, was the first of its kind in India's fruits & vegetables (F&V) space. It is present in Bangalore and Hyderabad and on a journey of expansion to newer cities, which will be managed seamlessly through a tech platform designed and built to transform the agri-tech sector.


Deep-Rooted.Co is committed to building a diverse and inclusive workplace and is an equal-opportunity employer.  

How is this possible? It’s because we work with smart people. We are looking for engineers in Bangalore to work with the Product Leader (Founder) (https://www.linkedin.com/in/gururajsrao/) and the CTO (https://www.linkedin.com/in/sriki77/). This is a meaningful project for us, and we are sure you will love it, as it touches everyday life and is fun. This will be a virtual consultation.


We want to start the conversation about the project we have for you, but before that, we want to connect with you to know what’s on your mind. Do drop a note sharing your mobile number and letting us know when we can catch up.

Purpose of the role:

* As a startup, we have data distributed across various sources like Excel, Google Sheets, and databases. As we grow, we need swift decision-making based on all the data that exists. You will help us bring this data together and put it into a data model that can be used in business decision-making.
* Handle the nuances of the Excel and Google Sheets APIs.
* Pull data in and manage its growth, freshness, and correctness.
* Transform data into a format that aids easy decision-making for Product, Marketing, and Business Heads.
* Understand the business problem, solve it using the technology, and take it to production - no hand-offs - the full path to production is yours.
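The Sheets-to-model work described above can be sketched roughly as follows. The column names (`city`, `units`) are hypothetical stand-ins for whatever the spreadsheets actually contain; the point is the normalization step, since spreadsheet exports typically deliver inconsistent casing and numbers as strings.

```python
# Sketch: normalize rows pulled from a spreadsheet export into a keyed model
# for downstream dashboards. Column names here are hypothetical.

from collections import defaultdict

def to_sales_model(rows):
    """Aggregate raw (city, units) rows into unit totals per city."""
    totals = defaultdict(int)
    for row in rows:
        city = row["city"].strip().title()  # normalize inconsistent casing
        totals[city] += int(row["units"])   # sheets often deliver strings
    return dict(totals)
```

Given rows like `{"city": "bangalore", "units": "12"}` and `{"city": "Bangalore ", "units": "8"}`, both fold into a single `"Bangalore"` key with a total of 20.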

Technical expertise:
* Good knowledge of and experience with programming languages - Java, SQL, Python.
* Good knowledge of data warehousing and data architecture.
* Experience with data transformations and ETL.
* Experience with API tools and more closed systems like Excel, Google Sheets, etc.
* Experience with the AWS cloud platform and Lambda.
* Experience with distributed data processing tools.
* Experience with container-based deployments on the cloud.

Skills:
Java, SQL, Python, Data Build Tool, Lambda, HTTP, Rest API, Extract Transform Load.
Bengaluru (Bangalore)
1 - 4 yrs
₹10L - ₹15L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization

Work closely with Product Managers to drive product improvements through data driven decisions.

Conduct analysis to determine new project pilot settings, new features, user behaviour, and in-app behaviour.

Present insights and recommendations to leadership using high quality visualizations and concise messaging.

Own the implementation of data collection and tracking, and coordinate with the engineering and product teams.

Create and maintain dashboards for product and business teams.

Requirements

1+ years’ experience in analytics. Experience as a product analyst will be an added advantage.

Technical skills: SQL, Advanced Excel

Good to have: R/Python, Dashboarding experience

Ability to translate structured and unstructured problems into an analytical framework

Excellent analytical skills

Good communication & interpersonal skills

Ability to work in a fast-paced start-up environment, learn on the job and get things done.

Marktine
Vishal Sharma
Posted by Vishal Sharma
Remote, Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹20L / yr
Data Science
R Programming
Python
SQL
Natural Language Processing (NLP)

- Modeling complex problems, discovering insights, and identifying opportunities through the use of statistical, algorithmic, mining, and visualization techniques

- Experience working with the business to understand requirements, create problem statements, and build scalable and dependable analytical solutions

- Must have hands-on and strong experience in Python

- Broad knowledge of fundamentals and state-of-the-art in NLP and machine learning

- Strong analytical & algorithm development skills

- Deep knowledge of techniques such as Linear Regression, gradient descent, Logistic Regression, Forecasting, Cluster analysis, Decision trees, Linear Optimization, Text Mining, etc

- Ability to collaborate across teams and strong interpersonal skills
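As a toy illustration of the gradient-descent and linear-regression techniques listed above (not Marktine's code), here is a minimal fit of y = w*x + b in plain Python; real work would use vectorized libraries, and the learning rate and epoch count are arbitrary choices for this tiny example.

```python
# Minimal gradient descent on mean squared error for a 1-D linear model.

def fit_linear(xs, ys, lr=0.01, epochs=2000):
    """Fit y = w*x + b by full-batch gradient descent; returns (w, b)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # gradients of mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

On data generated from the line y = 2x + 1, the fit converges close to w = 2 and b = 1.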

 

Skills

- Sound theoretical knowledge of ML algorithms and their applications

- Hands-on experience in statistical modeling tools such as R, Python, and SQL

- Hands-on experience in Machine learning/data science

- Strong knowledge of statistics

- Experience in advanced analytics / Statistical techniques – Regression, Decision trees, Ensemble machine learning algorithms, etc

- Experience in Natural Language Processing & Deep Learning techniques 

- Pandas, NLTK, scikit-learn, spaCy, TensorFlow
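For a flavour of what those NLP toolkits compute, here is a textbook TF-IDF sketch in plain Python. Note this uses the unsmoothed textbook IDF; library implementations such as scikit-learn's apply smoothed variants, so the numbers differ.

```python
# Tiny term-frequency / inverse-document-frequency sketch (textbook variant).

import math

def tf_idf(docs):
    """Return per-document {term: tf-idf weight} dicts for token lists."""
    n_docs = len(docs)
    doc_freq = {}  # how many documents each term appears in
    for doc in docs:
        for term in set(doc):
            doc_freq[term] = doc_freq.get(term, 0) + 1
    scores = []
    for doc in docs:
        weights = {}
        for term in doc:
            tf = doc.count(term) / len(doc)
            idf = math.log(n_docs / doc_freq[term])  # rarer term -> higher idf
            weights[term] = tf * idf
        scores.append(weights)
    return scores
```

A term that appears in every document gets idf = log(1) = 0, which is exactly why TF-IDF down-weights uninformative common words.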

leading pharmacy provider
Agency job
via Econolytics by Jyotsna Econolytics
Noida, NCR (Delhi | Gurgaon | Noida)
4 - 10 yrs
₹18L - ₹24L / yr
Data Science
Python
R Programming
Algorithms
Predictive modelling
Job Description:

• Help build a Data Science team which will be engaged in researching, designing, implementing, and deploying full-stack, scalable data analytics and machine learning solutions to address various business issues.
• Model complex algorithms, discover insights, and identify business opportunities through the use of algorithmic, statistical, visualization, and mining techniques.
• Translate business requirements into quick prototypes and enable the development of big data capabilities driving business outcomes.
• Be responsible for data governance and defining data collection and collation guidelines.
• Must be able to advise, guide, and train junior data engineers in their job.

Must Have:

• 4+ years of experience, including a leadership role as a Data Scientist
• Preferably from the retail, manufacturing, or healthcare industry (not mandatory)
• Willing to start from scratch and build up a team of Data Scientists
• Open to taking up challenges with end-to-end ownership
• Confident, with excellent communication skills, and a good decision-maker
Greenway Health
Agency job
via Vipsa Talent Solutions by Prashma S R
Bengaluru (Bangalore)
6 - 8 yrs
₹8L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
6-8 years of experience as a data engineer.
Skills: Spark, Hadoop, Big Data, data engineering, PySpark, Python, AWS Lambda, SQL, Kafka.
Aikon Labs Private Limited
Shankar K
Posted by Shankar K
Pune
0 - 5 yrs
₹1L - ₹8L / yr
Natural Language Processing (NLP)
Machine Learning (ML)
Data Structures
Algorithms
Deep Learning
About us
Aikon Labs Pvt Ltd is a start-up focused on Realizing Ideas. One such idea is iEngage.io, our Intelligent Engagement Platform. We leverage Augmented Intelligence, a combination of machine-driven insights & human understanding, to serve a timely response to every interaction from the people you care about.
Get in touch if you are interested.

Do you have a passion to be a part of an innovative startup? Here’s an opportunity for you - become an active member of our core platform development team.

Main Duties
● Quickly research the latest innovations in Machine Learning, especially with respect to Natural Language Understanding, and implement them if useful
● Train models to provide different insights, mainly from text but also other media such as Audio and Video
● Validate the models trained. Fine-tune & optimise as necessary
● Deploy validated models, wrapped in a Flask server as a REST API, or containerized in Docker containers
● Build preprocessing pipelines for the models that are being served as a REST API
● Periodically, test & validate models in use. Update where necessary
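The preprocessing-pipeline duty above might look roughly like this in plain Python. The steps and the stopword list are illustrative assumptions; in the real setup, such a pipeline would sit inside the Flask service, in front of the served model.

```python
# Sketch of a text-preprocessing pipeline for a served NLP model.
# Steps and stopword list are illustrative, not a production configuration.

import re

STOPWORDS = {"a", "an", "the", "is", "to"}

def preprocess(text):
    """Lowercase, strip punctuation, tokenize, and drop stopwords."""
    text = text.lower()
    tokens = re.findall(r"[a-z0-9']+", text)
    return [t for t in tokens if t not in STOPWORDS]

def pipeline(texts, steps=(preprocess,)):
    """Apply each step to every incoming request payload in order."""
    for step in steps:
        texts = [step(t) for t in texts]
    return texts
```

Keeping each step a plain function makes the pipeline easy to unit-test and to re-run when models are periodically re-validated, as the duties above require.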

Role & Relationships
We consider ourselves a team & you will be a valuable part of it. You could be reporting to a Senior member or directly to our Founder, CEO

Educational Qualifications
We don’t discriminate, as long as you have the required skill set & the right attitude.

Experience
Up to two years of experience, preferably working on ML. Freshers are welcome too!

Skills
Good
● Strong understanding of Java / Python
● Clarity on concepts of Data Science
● A strong grounding in core Machine Learning
● Ability to wrangle & manipulate data into a processable form
● Knowledge of web technologies like web servers (Flask, Django, etc.) and REST APIs
Even better
● Experience with deep learning
● Experience with frameworks like Scikit-Learn, Tensorflow, Pytorch, Keras
Competencies
● Knowledge of NLP libraries such as NLTK, spaCy, and Gensim
● Knowledge of NLP models such as Word2vec, GloVe, ELMo, and fastText
● An aptitude to solve problems & learn something new
● Highly self-motivated
● Analytical frame of mind
● Ability to work in a fast-paced, dynamic environment

Location
Pune

Remuneration
Once we meet, we shall make an offer depending on how good a fit you are & the experience you already have
Saama Technologies
Sandeep Chaudhary
Posted by Sandeep Chaudhary
Pune
2 - 5 yrs
₹1L - ₹18L / yr
Hadoop
Spark
Apache Hive
Apache Flume
Java
Description:
- Deep experience and understanding of Apache Hadoop and surrounding technologies required; experience with Spark, Impala, Hive, Flume, Parquet, and MapReduce.
- Strong understanding of development languages including Java, Python, Scala, and shell scripting.
- Expertise in Apache Spark 2.x framework principles and usage.
- Proficient in developing Spark batch and streaming jobs in Python, Scala, or Java.
- Proven experience in performance tuning of Spark applications, from both the application-code and configuration perspectives.
- Proficient in Kafka and its integration with Spark.
- Proficient in Spark SQL and data warehousing techniques using Hive.
- Very proficient in Unix shell scripting and operating on Linux.
- Knowledge of cloud-based infrastructure.
- Strong understanding of data profiling concepts and the ability to operationalize analyses into design and development activities.
- Experience with software development best practices: version control systems, automated builds, etc.
- Experienced in, and able to lead, all phases of the Software Development Life Cycle on any project (feasibility planning, analysis, development, integration, test, and implementation).
- Capable of working within a team or individually.
- Experience creating technical documentation.
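The Spark batch and MapReduce concepts named in this description can be sketched, dataflow only, in plain Python. This is not Spark's API, just the map/shuffle/reduce pattern behind a classic word-count job, with no cluster involved.

```python
# Plain-Python sketch of the map/shuffle/reduce pattern behind Spark and
# MapReduce jobs (single process; purely to show the data flow).

from collections import defaultdict

def map_phase(lines):
    """Emit (key, 1) pairs for every word, like a mapper would."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Group values by key, like the framework's shuffle stage."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum each key's values, like a reducer would."""
    return {key: sum(values) for key, values in groups.items()}
```

For the input `["spark spark hive", "Hive kafka"]`, the three stages together produce `{"spark": 2, "hive": 2, "kafka": 1}`; in a real cluster, each stage runs partitioned across machines.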
Why apply to jobs via Cutshort
Personalized job matches
Stop wasting time. Get matched with jobs that meet your skills, aspirations and preferences.
Verified hiring teams
See actual hiring teams, find common social connections or connect with them directly. No 3rd party agencies here.
Move faster with AI
We use AI to get you faster responses, recommendations and unmatched user experience.
21,01,133 matches delivered
37,12,187 network size
15,000 companies hiring
Did not find a job you were looking for?
Search for relevant jobs from 10000+ companies such as Google, Amazon & Uber actively hiring on Cutshort.