Data Scientist at Vedantu
Posted by Gaurav Singhal
1 - 4 yrs
₹8L - ₹16L / yr
Full time
Bengaluru (Bangalore)
Skills
Data Science
Machine Learning (ML)
R Programming
Python
Decision Science
Natural Language Processing (NLP)
About Vedantu
If you have ever dreamed about being in the driver's seat of a revolution, THIS is the place for you. Vedantu is an ed-tech startup focused on live online tutoring, and it recently raised $11M in Series B funding.

Job Description
We are looking for a Data Scientist who will support our product, sales, leadership and marketing teams with insights gained from analyzing company data. The ideal candidate is adept at using large data sets to find opportunities for product, sales and process optimization, and at using models to test the effectiveness of different courses of action. They must have strong experience using a variety of data analysis methods, building and implementing models, and using or creating appropriate algorithms.

Desired Skills
1. Experience using statistical programming languages (R, Python, etc.) to manipulate data and draw insights from large data sets (a brief illustrative sketch follows this list).
2. Ability to process, cleanse, and verify the integrity of data used for analysis.
3. Comfort manipulating and analyzing complex, high-volume, high-dimensionality data from varied, heterogeneous sources.
4. Experience with messy real-world data: handling missing, incomplete, and inaccurate records.
5. Understanding of a broad set of algorithms and applied mathematics.
6. Strong problem-solving skills, grounding in probability and statistics, knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and their proper usage), and experience applying them.
7. Knowledge of data scraping is preferable.
8. Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks) and their real-world advantages and drawbacks.
9. Experience with big data tools (Hadoop, Hive, MapReduce) is a plus.
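To make skills 1, 4 and 6 more concrete, here is a minimal, hypothetical Python sketch of this kind of work: cleaning a messy data set and fitting a simple regression. The file name and column names are invented for illustration only and are not part of the role description.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical file and column names, purely for illustration.
df = pd.read_csv("student_sessions.csv")

# Handle messy real-world data: drop duplicates, impute missing values.
df = df.drop_duplicates()
df["sessions_attended"] = df["sessions_attended"].fillna(0)
df["test_score"] = df["test_score"].fillna(df["test_score"].median())

# A simple regression: do attended sessions relate to test scores?
X = df[["sessions_attended"]]
y = df["test_score"]
model = LinearRegression().fit(X, y)
print("slope:", model.coef_[0], "R^2:", model.score(X, y))
```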

About Vedantu

Founded: 2011
Stage: Raised funding
Vedantu is India's leading online tutoring company that offers personalized and interactive learning to students from grades 6 to 12, preparing them for school boards, competitive exams, and co-curricular courses. It has a pool of 500+ teachers who have taught over 1 million hours to 40,000+ students across 1000+ cities from 30+ countries.

Similar jobs

Goldstone Technologies Ltd
Posted by Deepika Agarwal
Remote only
5 - 8 yrs
₹5L - ₹15L / yr
Python
PySpark
Apache Airflow
Spark
Hadoop
+4 more

Requirements:

● Understanding our data sets and how to bring them together.
● Working with our engineering team to support custom solutions for product development.
● Bridging the gap between development, engineering and data ops.
● Creating, maintaining and documenting scripts to support ongoing custom solutions.
● Excellent organizational skills, including attention to precise details.
● Strong multitasking skills and the ability to work in a fast-paced environment.
● 5+ years of experience developing scripts with Python.
● Know your way around RESTful APIs (able to integrate with them; publishing APIs is not necessary).
● Familiarity with pulling and pushing files over SFTP and AWS S3.
● Experience with any cloud solution, including GCP, AWS, OCI or Azure.
● Familiarity with SQL to query and transform data in relational databases.
● Familiarity with Linux (and a Linux work environment).
● Excellent written and verbal communication skills.
● Extracting, transforming, and loading data into internal databases and Hadoop.
● Optimizing our new and existing data pipelines for speed and reliability.
● Deploying product builds and product improvements.
● Documenting and managing multiple repositories of code.
● Experience with SQL and NoSQL databases (Cassandra, MySQL).
● Hands-on experience in data pipelining and ETL (any of these frameworks/tools: Hadoop, BigQuery, Redshift, Athena).
● Hands-on experience with Airflow.
● Understanding of best practices and common coding patterns around storing, partitioning, warehousing and indexing of data.
● Experience reading data from Kafka topics (both live streams and offline).
● Experience with PySpark and DataFrames (a brief sketch follows this list).
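As a rough illustration of the PySpark and DataFrame expectations above, here is a minimal batch-transformation sketch; the bucket paths, column names and filter logic are invented assumptions, not part of the role.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

# Hypothetical input location and schema, purely for illustration.
events = spark.read.parquet("s3a://example-bucket/events/")

# Typical DataFrame work: filter, derive a column, aggregate, write out partitioned.
daily_purchases = (
    events
    .filter(F.col("event_type") == "purchase")
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date")
    .agg(F.count("*").alias("purchases"))
)
daily_purchases.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/curated/daily_purchases/"
)
```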

Responsibilities:

You will:
● Collaborate across an agile team to continuously design, iterate, and develop big data systems.
● Extract, transform, and load data into internal databases.
● Optimize new and existing data pipelines for speed and reliability.
● Deploy new products and product improvements.
● Document and manage multiple repositories of code.

Amagi Media Labs
Posted by Rajesh C
Bengaluru (Bangalore), Chennai
12 - 15 yrs
₹50L - ₹60L / yr
Data Science
Machine Learning (ML)
ETL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+5 more
Job Title: Data Architect
Job Location: Chennai

Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive the data architecture strategy across various data lake platforms. You will help develop reference architectures and roadmaps for building highly available, scalable and distributed data platforms that use cloud-based solutions to process high-volume, high-velocity and widely varied structured and unstructured data. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast's industry-leading journalism.
Primary Responsibilities
The Data Architect is responsible for:
• Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
• Enterprise-scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration frameworks and highly scalable distributed systems using open-source and emerging data architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies is desirable.
• This role requires 15+ years of data solution architecture, design and development delivery experience.
• Solid experience in Agile methodologies (Kanban and Scrum).
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases on the cloud.
• Proven leadership skills and a demonstrated ability to mentor, influence and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL/ELT (such as Informatica, DataStage, etc.) and data integration technologies.
• Experience in at least one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and data lakes is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS, etc.
• Understanding of Hadoop architecture and Hive SQL.
• Knowledge of at least one workflow orchestration tool.
• Understanding of Agile frameworks and delivery.

Preferred Skills:
● Experience in AWS and EMR would be a plus.
● Exposure to workflow orchestration tools like Airflow is a plus (a brief sketch follows this list).
● Exposure to at least one NoSQL database would be a plus.
● Experience in Databricks along with PySpark/Spark SQL would be a plus.
● Experience with the Digital Media and Publishing domain would be a plus.
● Understanding of digital web events, ad streams and context models.
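For context on the Airflow exposure mentioned above, a minimal DAG might look like the following sketch (assuming Airflow 2.x; the DAG id, schedule and task logic are hypothetical).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling data from a source system...")  # placeholder task logic

def load():
    print("writing data to the warehouse...")  # placeholder task logic

# Hypothetical DAG id and schedule, purely for illustration.
with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```

In practice the task callables would hold the real extract/load logic, and the file would live in the Airflow DAGs folder so the scheduler picks it up.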

About Condé Nast

CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms - in other words, a staggering amount of user data. Condé Nast made the right move to invest heavily in understanding this data and formed a whole new Data team entirely dedicated to data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel process innovation, further content enrichment, and increase market revenue. The Data team aims to create a company culture where data is the common language and to facilitate an environment where insights shared in real time can improve performance.

The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost online revenue. We are broadly divided into four groups: Data Intelligence, Data Engineering, Data Science, and Operations (including Product and Marketing Ops and Client Services), along with Data Strategy and Monetization. The teams build capabilities and products to create data-driven solutions for better audience engagement.

What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
Cloudbloom Systems LLP
Posted by Sahil Rana
Bengaluru (Bangalore)
6 - 10 yrs
₹13L - ₹25L / yr
Machine Learning (ML)
Python
Data Science
Natural Language Processing (NLP)
Computer Vision
+3 more
Description
Duties and Responsibilities:
• Research and develop innovative use cases, solutions and quantitative models in video and image recognition and signal processing for Cloudbloom's cross-industry business (e.g., Retail, Energy, Industry, Mobility, Smart Life and Entertainment).
• Design, implement and demonstrate proofs of concept and working prototypes.
• Provide R&D support to productize research prototypes.
• Explore emerging tools, techniques, and technologies, and work with academia on cutting-edge solutions.
• Collaborate with cross-functional teams and ecosystem partners for mutual business benefit.
• Team management skills.
Academic Qualification
• 7+ years of professional hands-on work experience in data science, statistical modelling, data engineering, and predictive analytics assignments.
• Mandatory requirement: Bachelor's degree with a STEM background (Science, Technology, Engineering and Mathematics) and a strong quantitative flavour.
• Innovative and creative in data analysis, problem solving and presentation of solutions.
• Ability to establish effective cross-functional partnerships and relationships at all levels in a highly collaborative environment.
• Strong experience in handling multi-national client engagements.
• Good verbal, written and presentation skills.
Core Expertise
• Excellent understanding of the basics of mathematics and statistics (such as differential equations, linear algebra, matrices, combinatorics, probability, Bayesian statistics, eigenvectors, Markov models, Fourier analysis).
• Building data analytics models using Python, ML libraries, Jupyter/Anaconda, and knowledge of database query languages like SQL.
• Good knowledge of machine learning methods like k-Nearest Neighbors, Naive Bayes, SVM and Decision Forests.
• Strong math skills (multivariable calculus and linear algebra): understanding the fundamentals of multivariable calculus and linear algebra is important, as they form the basis of many predictive-performance and algorithm-optimization techniques.
• Deep learning: CNNs, neural networks, RNNs, TensorFlow, PyTorch, computer vision (a brief sketch follows this list).
• Large-scale data extraction/mining, data cleansing, diagnostics, and preparation for modeling.
• Good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators, multivariate techniques and predictive modeling (cluster analysis, discriminant analysis, CHAID, logistic and multiple regression analysis).
• Experience with data visualization tools like Tableau, Power BI and Qlik Sense that help to visually encode data.
• Excellent communication skills: it is incredibly important to describe findings to technical and non-technical audiences.
• Capability for continuous learning and knowledge acquisition.
• Mentor colleagues for growth and success.
• Strong software engineering background.
• Hands-on experience with data science tools.
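As a rough illustration of the deep learning and image-recognition items under Core Expertise, here is a minimal PyTorch sketch of a small CNN classifier; the input size, channel counts and number of classes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# Minimal CNN for 3-channel 32x32 images with 10 classes (arbitrary choices).
class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
dummy = torch.randn(4, 3, 32, 32)   # a fake batch of 4 images
print(model(dummy).shape)           # torch.Size([4, 10])
```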
Noida
2 - 5 yrs
₹30L - ₹40L / yr
Data Science
Deep Learning
R Programming
Python

Data Scientist

Requirements

● B.Tech/Master's in Mathematics, Statistics, Computer Science or another quantitative field.
● 2-3+ years of work experience in the ML domain (2-5 years of overall experience).
● Hands-on coding experience in Python.
● Experience in machine learning techniques such as regression, classification, predictive modeling, clustering, the deep learning stack and NLP.
● Working knowledge of TensorFlow/PyTorch.

Optional Add-ons:

● Experience with distributed computing frameworks: MapReduce, Hadoop, Spark, etc.
● Experience with databases: MongoDB.

Chennai, Bengaluru (Bangalore), Hyderabad
2 - 5 yrs
₹4L - ₹13L / yr
Machine Learning (ML)
Data Science
Python
NumPy
pandas
+3 more

 

Job Title: Analyst / Sr. Analyst - Data Science Developer (Python)

Experience: 2 to 5 yrs

Location: Bangalore / Hyderabad / Chennai

Notice Period: Candidates should be able to join within 2 months (max); immediate joiners preferred.

About the role:

We are looking for an Analyst / Senior Analyst with a strong Python background to work in the analytics domain.

 

Desired Skills, Competencies & Experience:

• 2-4 years of experience working in the analytics domain with a strong Python background.
• Visualization skills in Python with Plotly, Matplotlib, Seaborn, etc., and the ability to create customized plots using such tools (a brief sketch follows this section).
• Ability to write effective, scalable and modular code. Should be able to understand, test and debug existing Python project modules quickly and contribute to them.
• Should be familiar with Git workflows.

Good to Have:

• Familiarity with cloud platforms like AWS, Azure ML, Databricks, GCP, etc.
• Understanding of shell scripting and Python package development.
• Experience with Python data science packages like pandas, NumPy, scikit-learn, etc.
• ML model building and evaluation experience using scikit-learn.
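To make the visualization expectation concrete, here is a minimal sketch of a customized plot built with pandas, Matplotlib and Seaborn; the data and column names are synthetic and invented for the example.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Invented example data: daily signups for two channels.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "day": np.tile(np.arange(30), 2),
    "channel": np.repeat(["organic", "paid"], 30),
    "signups": rng.poisson(lam=20, size=60),
})

# A customized plot: seaborn lineplot with matplotlib-level tweaks.
fig, ax = plt.subplots(figsize=(8, 4))
sns.lineplot(data=df, x="day", y="signups", hue="channel", marker="o", ax=ax)
ax.set_title("Daily signups by channel (synthetic data)")
ax.set_xlabel("Day")
ax.set_ylabel("Signups")
fig.tight_layout()
fig.savefig("signups_by_channel.png", dpi=150)
```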

 

netmedscom
Posted by Vijay Hemnath
Chennai
2 - 5 yrs
₹6L - ₹25L / yr
Big Data
Hadoop
Apache Hive
Scala
Spark
+12 more

We are looking for an outstanding Big Data Engineer with experience setting up and maintaining data warehouses and data lakes for an organization. This role will closely collaborate with the Data Science team and assist the team in building and deploying machine learning and deep learning models on big data analytics platforms.

Roles and Responsibilities:

  • Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
  • Develop programs in Scala and Python as part of data cleaning and processing.
  • Assemble large, complex data sets that meet functional / non-functional business requirements, and foster data-driven decision making across the organization.
  • Design and develop distributed, high-volume, high-velocity, multi-threaded event processing systems.
  • Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.
  • Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Provide high operational excellence, guaranteeing high availability and platform stability.
  • Closely collaborate with the Data Science team and assist the team in building and deploying machine learning and deep learning models on big data analytics platforms.

Skills:

  • Experience with Big Data pipelines, Big Data analytics, and data warehousing.
  • Experience with SQL/NoSQL, schema design and dimensional data modeling.
  • Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with a Big Data technology stack such as HBase, Hadoop, Hive and MapReduce.
  • Experience in designing systems that process structured as well as unstructured data at large scale.
  • Experience in AWS/Spark/Java/Scala/Python development.
  • Strong skills in PySpark (Python & Spark): the ability to create, manage and manipulate Spark DataFrames, and expertise in Spark query tuning and performance optimization.
  • Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
  • Prior exposure to streaming data sources such as Kafka (a brief streaming sketch follows this list).
  • Knowledge of shell scripting and Python scripting.
  • High proficiency in database skills (e.g., complex SQL) for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
  • Experience with NoSQL databases such as Cassandra / MongoDB.
  • Solid experience in all phases of the Software Development Lifecycle: plan, design, develop, test, release, maintain and support, decommission.
  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
  • Experience building and deploying applications on on-premise and cloud-based infrastructure.
  • A good understanding of the machine learning landscape and concepts.
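As a rough sketch of the Kafka streaming exposure listed above, the following PySpark Structured Streaming snippet reads a topic and lands it as Parquet. It assumes the spark-sql-kafka connector package is available on the cluster; the broker address, topic name and output paths are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Hypothetical broker and topic, purely for illustration.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "orders")
    .load()
)

# Kafka values arrive as bytes; cast to string before parsing downstream.
orders = raw.select(F.col("value").cast("string").alias("payload"))

query = (
    orders.writeStream
    .format("parquet")
    .option("path", "/tmp/orders_raw/")
    .option("checkpointLocation", "/tmp/orders_checkpoint/")
    .start()
)
query.awaitTermination()
```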

 

Qualifications and Experience:

Engineering graduates and postgraduates, preferably in Computer Science from premier institutions, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.

Certifications:

Good to have at least one of the Certifications listed here:

    AZ-900 - Azure Fundamentals

    DP-200, DP-201, DP-203, AZ-204 - Data Engineering

    AZ-400 - DevOps Certification

KAUSHIK BAKERY
Posted by Neha Sharma
NCR (Delhi | Gurgaon | Noida)
0 - 7 yrs
₹2L - ₹5L / yr
Data Science
Business Analysis
Business Intelligence (BI)
Data analytics, data science, business analytics, business intelligence
Agency job
via UpgradeHR by Sangita Deka
Hyderabad
6 - 10 yrs
₹10L - ₹15L / yr
Big Data
Data Science
Machine Learning (ML)
R Programming
Python
+2 more
It is one of the largest communication technology companies in the world. They operate America's largest 4G LTE wireless network and the nation's premier all-fiber broadband network.
Societe Generale Global Solution Centre
Posted by Bushra Syeda
Bengaluru (Bangalore)
3 - 7 yrs
₹12L - ₹18L / yr
Data Science
Python
R Programming
Machine Learning (ML)
• Model design, feature planning, system infrastructure, production setup and monitoring, and release management.
• Excellent understanding of machine learning techniques and algorithms, such as SVM, Decision Forests, k-NN, Naive Bayes, etc.
• Experience in selecting features and building and optimizing classifiers using machine learning techniques (a brief sketch follows this list).
• Prior experience with data visualization tools, such as D3.js, ggplot, etc.
• Good knowledge of statistics, such as distributions, statistical testing, regression, etc.
• Adequate presentation and communication skills to explain results and methodologies to non-technical stakeholders.
• Basic understanding of the banking industry is a value-add.

• Develop, process, cleanse and enhance data collection procedures from multiple data sources.
• Conduct and deliver experiments and proofs of concept to validate business ideas and potential value.
• Test, troubleshoot and enhance the developed models in distributed environments to improve their accuracy.
• Work closely with product teams to implement algorithms with Python and/or R.
• Design and implement scalable predictive models and classifiers leveraging machine learning and data regression.
• Facilitate integration with enterprise applications using APIs to enrich implementations.
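A minimal scikit-learn sketch of the classifier-building skills described above, comparing the algorithms named in the listing on a toy dataset bundled with scikit-learn; the model choices and parameters are illustrative only.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Toy dataset bundled with scikit-learn, used purely for illustration.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

models = {
    "SVM": SVC(),
    "Decision Forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")
```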
 
Noida, NCR (Delhi | Gurgaon | Noida)
1 - 4 yrs
₹8L - ₹18L / yr
Analytics
Predictive analytics
Linear regression
Logistic regression
Python
+1 more
Job Description
Role: Analytics Scientist - Risk Analytics
Experience Range: 1 to 4 Years
Job Location: Noida

Key responsibilities include:
• Building models to predict risk and other key metrics
• Coming up with data-driven solutions to control risk
• Finding opportunities to acquire more customers by modifying/optimizing existing rules
• Doing periodic upgrades of the underwriting strategy based on business requirements
• Evaluating 3rd-party solutions for predicting/controlling the risk of the portfolio
• Running periodic controlled tests to optimize underwriting
• Monitoring key portfolio metrics and taking data-driven actions based on performance

Business Knowledge: Develop an understanding of the domain/function. Manage business processes in the work area. The individual is expected to develop domain expertise in his/her work area.

Teamwork: Develop cross-site relationships to enhance leverage of ideas. Set and manage partner expectations. Drive implementation of projects with the Engineering team while partnering seamlessly with cross-site team members.

Communication: Responsibly perform end-to-end project communication across the various levels in the organization.

Candidate Specification - Skills:
• Knowledge of an analytical tool: R or Python
• Established competency in predictive analytics (logistic & regression); a brief sketch follows below
• Experience in handling complex data sources
• Dexterity with MySQL and MS Excel is good to have
• Strong analytical aptitude and logical reasoning ability
• Strong presentation and communication skills

Preferred:
• 1-3 years of experience in the Financial Services/Analytics industry
• Understanding of the financial services business
• Experience working on advanced machine learning techniques

If interested, please send your updated profile in Word format with the details below for further discussion at the earliest:
1. Current Company
2. Current Designation
3. Total Experience
4. Current CTC (Fixed & Variable)
5. Expected CTC
6. Notice Period
7. Current Location
8. Reason for Change
9. Availability for a face-to-face interview on weekdays
10. Education Degree

Thanks & Regards,
Hema, Talent Socio
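For context on the predictive analytics skills above, here is a minimal, hypothetical logistic regression sketch for a risk-style problem; the features, coefficients and data are entirely synthetic and invented for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Invented example data: simple borrower features and a default flag.
rng = np.random.default_rng(7)
n = 5000
df = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, n),
    "utilization": rng.uniform(0, 1, n),
    "delinquencies": rng.poisson(0.3, n),
})
# Synthetic default probability rises with utilization and delinquencies.
logit = -2.0 + 3.0 * df["utilization"] + 0.8 * df["delinquencies"] - 0.00001 * df["income"]
df["default"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(
    df[["income", "utilization", "delinquencies"]], df["default"], random_state=7
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```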