Data Scientist

at Railofy

Posted by Manan Jain
Mumbai
2 - 5 yrs
₹5L - ₹12L / yr (ESOP available)
Full time
Skills
Data Science
Python
R Programming

About Us:

We are a VC-funded startup solving one of the biggest transportation problems India faces. Most passengers in India travel long distances by IRCTC trains. At the time of booking, approximately 1 out of every 2 passengers ends up with a Waitlisted or RAC ticket. This creates a lot of anxiety for passengers, as the Railways announce only 4 hours before departure whether they have a confirmed seat. We solve this problem through our Waitlist & RAC Protection. Protection can be bought against each IRCTC ticket at the time of booking; if the train ticket is not confirmed, we fly the passenger to the destination. Our team consists of 3 Founders from IIT, IIM and ISB.

Functional Experience:

  • Computer Science or IT Engineering background with a solid understanding of the basics of Data Structures and Algorithms
  • 2+ years of data science experience working with large datasets
  • Expertise in Python packages like pandas, NumPy, scikit-learn, Matplotlib, seaborn, Keras and TensorFlow (a brief illustrative sketch follows this list)
  • Expertise in Big Data technologies like Hadoop, Cassandra and PostgreSQL
  • Expertise in Cloud computing on AWS with EC2, AutoML, Lambda and RDS
  • Good knowledge of Machine Learning and Statistical time series analysis (optional)
  • Unparalleled logical ability, making you the go-to person for all things data
  • You love coding as a hobby and are up for a challenge!
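
For illustration only, a minimal sketch of the kind of pandas/scikit-learn workflow this stack implies; the synthetic data, column names, and model choice below are assumptions for demonstration, not Railofy's actual system:

# Illustrative sketch only: a toy waitlist-confirmation classifier on synthetic
# data. Column names and model choice are placeholders, not Railofy's models.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "days_to_departure": rng.integers(1, 120, 5000),
    "waitlist_position": rng.integers(1, 300, 5000),
    "is_peak_season": rng.integers(0, 2, 5000),
})
# Toy label: earlier bookings with lower waitlist positions confirm more often.
df["confirmed"] = ((df.days_to_departure > 30) & (df.waitlist_position < 80)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="confirmed"), df["confirmed"], test_size=0.2, random_state=0
)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))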

 

Cultural:

  • Assume a strong sense of ownership of analytics: design, develop & deploy
  • Collaborate with senior management, operations & business team
  • Ensure Quality & sustainability of the architecture
  • Motivation to join an early stage startup should go beyond compensation

About Railofy

Railofy is a travel tech start-up and an IRCTC Authorized Premium Partner with the mission to transform the train travel experience for Bharat! To make train journeys a more pleasurable and comfortable experience, Railofy offers a variety of services -

1. Railofy Hotplate - Order food from the top restaurants in town and get it delivered to your train seat

2. Train ticket booking - with an AI-powered recommendation engine as well as a free cancellation feature.

3. Travel Guarantee - protection against Waitlist and RAC tickets. Get flight, bus, tatkal tickets and more at train ticket prices, if your ticket doesn’t get confirmed. 

4. WhatsApp Bot - Check your PNR status directly on WhatsApp, hassle-free and without any downloads. Get automatic updates on WhatsApp. Just WhatsApp your PNR number to 9881193322.


We are a VC-funded startup solving one of the biggest transportation problems India faces. Our team consists of 3 Founders from IIT, IIM and ISB.

Founded: 2018
Type: Product
Size: 20-100 employees
Stage: Raised funding

Similar jobs

Lead Computer Vision Engineer

at An AI based company

Agency job
via Qrata
Computer Vision
OpenCV
Python
TensorFlow
PyTorch
Gurugram, Delhi, Noida, Ghaziabad, Faridabad
5 - 10 yrs
₹25L - ₹70L / yr
Job Title: Lead Computer Vision Engineer
Location: Gurgaon

About the company:
The company is changing the way cataloging is done across the globe. Our vision is to empower the smallest of sellers, situated in the farthest of corners, to create superior product images and videos, without the need for any external professional help. Imagine 30M+ merchants shooting product images or videos using their smartphones, and then choosing filters for Amazon, Asos, Airbnb, Doordash, etc. to instantly compose high-quality "tuned-in" product visuals. The company has built the world’s leading image editing AI software, to capture and process beautiful product images for online selling. We are also fortunate and proud to be backed by the biggest names in the investment community, including the likes of Accel Partners, AngelList and prominent founders and internet company operators, who believe that there is a more intelligent and efficient way of doing Digital Production than how the world operates currently.

Job Description :
- We are looking for a seasoned Computer Vision Engineer with AI/ML/CV and Deep Learning skills to play a senior leadership role in our Product & Technology Research Team.
- You will be leading a team of CV researchers to build models that automatically transform millions of raw e-commerce, automobile, food and real-estate images into processed final images.
- You will be responsible for researching the latest state of the art in the field of computer vision, designing the solution architecture for our offerings and leading the Computer Vision teams to build the core algorithmic models & deploy them on Cloud Infrastructure.
- Working with the Data team to ensure your data pipelines are well set up and models are being constantly trained and updated.
- Working alongside the Product team to ensure that AI capabilities are built as democratized tools that allow internal as well as external stakeholders to innovate on top of them and make our customers successful.
- You will work closely with the Product & Engineering teams to convert the models into beautiful products that will be used by thousands of businesses every day to transform their images and videos.

Job Requirements:
- Min 3+ years of work experience in Computer Vision with 5-10 years work experience overall
- BS/MS/PhD degree in Computer Science, Engineering or a related subject from an Ivy League institute
- Exposure to Deep Learning techniques, TensorFlow/PyTorch
- Prior expertise in building image processing applications using GANs, CNNs, Diffusion models
- Expertise with image processing Python libraries like OpenCV, etc.
- Good hands-on experience with Python and the Flask or Django framework
- Authored publications at peer-reviewed AI conferences (e.g. NeurIPS, CVPR, ICML, ICLR, ICCV, ACL)
- Prior experience of managing teams and building large scale AI / CV projects is a big plus
- Great interpersonal and communication skills
- Critical thinking and problem-solving skills
Job posted by
Prajakta Kulkarni

Data Scientist

at Strategic Toolkit for Capital Productivity

Agency job
via Qrata
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Python
TensorFlow
Scikit-Learn
Remote only
5 - 10 yrs
₹12L - ₹45L / yr
What would make you a good fit?

  • You’re both relentless and kind, and don’t see these as being mutually exclusive
  • You have a self-directed learning style, an insatiable curiosity, and a hands-on execution mindset
  • You have deep experience working with product and engineering teams to launch machine learning products that users love in new or rapidly evolving markets
  • You flourish in uncertain environments and can turn incomplete, conflicting, or ambiguous inputs into solid data-science action plans
  • You bring best practices to feature engineering, model development, and ML operations
  • Your experience in deploying and monitoring the performance of models in production enables us to implement a best-in-class solution
  • You have exceptional writing and speaking skills with a talent for articulating how data science can be applied to solve customer problems

Must-Have Qualifications

  • Graduate degree in engineering, data science, mathematics, physics, or another quantitative field
  • 5+ years of hands-on experience in building and deploying production-grade ML models with ML frameworks (TensorFlow, Keras, PyTorch) and libraries like scikit-learn
  • Track record in building ML pipelines for time series, classification, and predictive applications
  • Expert-level skills in Python for data analysis and visualization, hypothesis testing, and model building
  • Deep experience with ensemble ML approaches including random forests and XGBoost, and experience with databases and querying models for structured and unstructured data
  • A knack for using data visualization and analysis tools to tell a story
  • You naturally think quantitatively about problems and work backward from a customer outcome

What’ll make you stand out (but not required)

  • You have a keen awareness of or interest in network analysis/graph analysis or NLP
  • You have experience in distributed systems and graph databases
  • You have a strong connection to finance teams or closely related domains, the challenges they face, and a deep appreciation for their aspirations
Job posted by
Rayal Rajan

Sr Data Scientist [India]

at Egnyte

Founded 2008  •  Product  •  500-1000 employees  •  Profitable
Data Science
data scientist
Machine Learning (ML)
Time series
QoS
Clustering
Forecasting
Statistical Modeling
Data mining
Decision trees
TensorFlow
PyTorch
Remote, Mumbai
4 - 10 yrs
Best in industry

Job Description

We are looking for an experienced engineer to join our data science team, who will help us design, develop, and deploy machine learning models in production. You will develop robust models and prepare their deployment into production in a controlled manner, while providing appropriate means to monitor their performance and stability after deployment.

 

What You’ll Do will include (but is not limited to):

  • Preparing datasets needed to train and validate our machine learning models
  • Anticipating and building solutions for problems that interrupt availability, performance, and stability in our systems, services, and products at scale
  • Defining and implementing metrics to evaluate the performance of the models, both for computing performance (such as CPU & memory usage) and for ML performance (such as precision, recall, and F1); a brief illustrative sketch follows this list
  • Supporting the deployment of machine learning models on our infrastructure, including containerization, instrumentation, and versioning
  • Supporting the whole lifecycle of our machine learning models, including gathering data for retraining, A/B testing, and redeployments
  • Developing, testing, and evaluating tools for machine learning model deployment, monitoring, and retraining
  • Working closely within a distributed team to analyze and apply innovative solutions over billions of documents
  • Supporting solutions ranging from rule-based and classical ML techniques to the latest deep learning systems
  • Partnering with cross-functional team members to bring large scale data engineering solutions to production
  • Communicating your approach and results to a wider audience through presentations
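
By way of illustration only (toy labels, scikit-learn assumed; this is not Egnyte's tooling), the ML-performance side of that metrics bullet might look like:

# Illustrative sketch: computing precision, recall, and F1 for a toy set of
# predictions with scikit-learn. The labels below are placeholders.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))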

Your Qualifications:

  • Demonstrated success with machine learning in a SaaS or Cloud environment, with hands-on knowledge of model creation and deployments in production at scale
  • Good knowledge of traditional machine learning methods and neural networks
  • Experience with practical machine learning modeling, especially on time-series forecasting, analysis, and causal inference.
  • Experience with data mining algorithms and statistical modeling techniques for anomaly detection in time series such as clustering, classification, ARIMA, and decision trees is preferred.
  • Ability to implement data import, cleansing and transformation functions at scale
  • Fluency in Docker, Kubernetes
  • Working knowledge of relational and dimensional data models with appropriate visualization techniques such as PCA.
  • Solid English skills to effectively communicate with other team members

 

Due to the nature of the role, it would be nice if you also have:

  • Experience with large datasets and distributed computing, especially with the Google Cloud Platform
  • Fluency in at least one deep learning framework: PyTorch, TensorFlow / Keras
  • Experience with NoSQL and Graph databases
  • Experience working in a Colab, Jupyter, or Python notebook environment
  • Some experience with monitoring, analysis, and alerting tools like New Relic, Prometheus, and the ELK stack
  • Knowledge of Java, Scala or Go programming languages
  • Familiarity with KubeFlow
  • Experience with transformers, for example the Hugging Face libraries
  • Experience with OpenCV

 

About Egnyte

In a content-critical age, Egnyte fuels business growth by enabling content-rich business processes, while also providing organizations with visibility and control over their content assets. Egnyte’s cloud-native content services platform leverages the industry’s leading content intelligence engine to deliver a simple, secure, and vendor-neutral foundation for managing enterprise content across business applications and storage repositories. More than 16,000 customers trust Egnyte to enhance employee productivity, automate data management, and reduce file-sharing cost and complexity. Investors include Google Ventures, Kleiner Perkins Caufield & Byers, and Goldman Sachs. For more information, visit www.egnyte.com.

 

#LI-Remote

Job posted by
Prasanth Mulleti

Data Engineer

at PayU

Founded 2002  •  Product  •  500-1000 employees  •  Profitable
Python
ETL
Data engineering
Informatica
SQL
Spark
Snowflake schema
Remote, Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹20L / yr

Role: Data Engineer  
Company: PayU

Location: Bangalore/ Mumbai

Experience: 2-5 yrs


About Company:

PayU is the payments and fintech business of Prosus, a global consumer internet group and one of the largest technology investors in the world. Operating and investing globally in markets with long-term growth potential, Prosus builds leading consumer internet companies that empower people and enrich communities.

The leading online payment service provider in 36 countries, PayU is dedicated to creating a fast, simple and efficient payment process for merchants and buyers. Focused on empowering people through financial services and creating a world without financial borders where everyone can prosper, PayU is one of the biggest investors in the fintech space globally, with investments totalling $700 million to date. PayU also specializes in credit products and services for emerging markets across the globe. We are dedicated to removing risks to merchants, allowing consumers to use credit in ways that suit them and enabling a greater number of global citizens to access credit services.

Our local operations in Asia, Central and Eastern Europe, Latin America, the Middle East, Africa and South East Asia enable us to combine the expertise of high growth companies with our own unique local knowledge and technology to ensure that our customers have access to the best financial services.

India is the biggest market for PayU globally and the company has already invested $400 million in this region in the last 4 years. PayU, in its next phase of growth, is developing a full regional fintech ecosystem providing multiple digital financial services in one integrated experience. We are going to do this through 3 mechanisms: build; co-build/partner; and select strategic investments.

PayU supports over 350,000+ merchants and millions of consumers making payments online with over 250 payment methods and 1,800+ payment specialists. The markets in which PayU operates represent a potential consumer base of nearly 2.3 billion people and a huge growth potential for merchants. 

Job responsibilities:

  • Design infrastructure for data, especially for but not limited to consumption in machine learning applications 
  • Define database architecture needed to combine and link data, and ensure integrity across different sources 
  • Ensure performance of data systems for machine learning, from customer-facing web and mobile applications using cutting-edge open-source frameworks, to highly available RESTful services, to back-end Java-based systems
  • Work with large, fast, complex data sets to solve difficult, non-routine analysis problems, applying advanced data handling techniques if needed 
  • Build data pipelines, including implementing, testing, and maintaining infrastructural components related to the data engineering stack.
  • Work closely with Data Engineers, ML Engineers and SREs to gather data engineering requirements to prototype, develop, validate and deploy data science and machine learning solutions

Requirements to be successful in this role: 

  • Strong knowledge and experience in Python, Pandas, Data wrangling, ETL processes, statistics, data visualisation, Data Modelling and Informatica.
  • Strong experience with scalable compute solutions such as Kafka and Snowflake
  • Strong experience with workflow management libraries and tools such as Airflow, AWS Step Functions etc. (a brief illustrative sketch of such a workflow follows this list)
  • Strong experience with data engineering practices (i.e. data ingestion pipelines and ETL) 
  • A good understanding of machine learning methods, algorithms, pipelines, testing practices and frameworks 
  • (Preferred) MEng/MSc/PhD degree in computer science, engineering, mathematics, physics, or equivalent (preference: DS/AI)
  • Experience with designing and implementing tools that support sharing of data, code, practices across organizations at scale 
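
For illustration only, a minimal Apache Airflow DAG sketch of the kind of workflow orchestration this list refers to; the DAG id, task names, and callables below are hypothetical placeholders, not PayU's pipelines:

# Illustrative sketch: a three-step ETL DAG in Apache Airflow 2.x.
# All names (dag_id, tasks, callables) are placeholders, not PayU's pipelines.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Pull raw records from a source system (placeholder).
    pass

def transform():
    # Clean and reshape the extracted records (placeholder).
    pass

def load():
    # Write the transformed records to the warehouse (placeholder).
    pass

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # extract runs first, then transform, then load.
    t_extract >> t_transform >> t_load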
Job posted by
Vishakha Sonde

Data Scientist (Forecasting)

at Anicaa Data

Agency job
via wrackle
TensorFlow
PyTorch
Machine Learning (ML)
Data Science
data scientist
Forecasting
C++
Python
Artificial Neural Network (ANN)
moving average
ARIMA
Big Data
Data Analytics
Amazon Web Services (AWS)
Azure
Google Cloud Platform (GCP)
Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹25L / yr

Job Title – Data Scientist (Forecasting)

Anicca Data is seeking a Data Scientist (Forecasting) who is motivated to apply his/her/their skill set to solve complex and challenging problems. The focus of the role will center around applying deep learning models to real-world applications. The candidate should have experience in training and testing deep learning architectures. This candidate is expected to work on existing codebases or write an optimized codebase at Anicca Data. The ideal addition to our team is self-motivated, highly organized, and a team player who thrives in a fast-paced environment with the ability to learn quickly and work independently.

 

Job Location: Remote (for the time being) and Bangalore, India (post-COVID crisis)

 

Required Skills:

  • 3+ years of experience in a Data Scientist role
  • Bachelor's/Master’s degree in Computer Science, Engineering, Statistics, Mathematics, or a similar quantitative discipline. A Ph.D. will add merit to the application process
  • Experience with large data sets, big data, and analytics
  • Exposure to statistical modeling, forecasting, and machine learning. Deep theoretical and practical knowledge of deep learning, machine learning, statistics, probability, time series forecasting
  • Training Machine Learning (ML) algorithms in areas of forecasting and prediction
  • Experience in developing and deploying machine learning solutions in a cloud environment (AWS, Azure, Google Cloud) for production systems
  • Research and enhance existing in-house, open-source models, integrate innovative techniques, or create new algorithms to solve complex business problems
  • Experience in translating business needs into problem statements, prototypes, and minimum viable products
  • Experience managing complex projects including scoping, requirements gathering, resource estimations, sprint planning, and management of internal and external communication and resources
  • Write C++ and Python code along with TensorFlow, PyTorch to build and enhance the platform that is used for training ML models

Preferred Experience

  • Worked on forecasting projects – both classical and ML models
  • Experience with training time series forecasting methods like Moving Average (MA) and Autoregressive Integrated Moving Average (ARIMA), along with Neural Network (NN) models such as feed-forward NNs and nonlinear autoregressive networks (a brief illustrative sketch follows this list)
  • Strong background in forecasting accuracy drivers
  • Experience in Advanced Analytics techniques such as regression, classification, and clustering
  • Ability to explain complex topics in simple terms, ability to explain use cases and tell stories
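
For illustration only, a minimal classical-forecasting sketch using statsmodels; the synthetic series and the ARIMA order below are assumptions for demonstration, not Anicca Data's models:

# Illustrative sketch: fitting an ARIMA baseline on a synthetic series with
# statsmodels and producing a short forecast. Data and order are placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly series with a trend plus noise (placeholder data).
index = pd.date_range("2020-01-01", periods=48, freq="MS")
y = pd.Series(100 + 2 * np.arange(48) + np.random.default_rng(0).normal(0, 5, 48), index=index)

# Fit ARIMA(1, 1, 1); in practice the (p, d, q) order would be chosen via
# ACF/PACF inspection or information criteria, not hard-coded.
fitted = ARIMA(y, order=(1, 1, 1)).fit()

# Forecast the next 6 periods.
print(fitted.forecast(steps=6))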
Job posted by
Naveen Taalanki

Senior Data Engineer

at CoStrategix Technologies

Founded 2006  •  Services  •  100-1000 employees  •  Profitable
Data engineering
Data Structures
Programming
Python
C#
Azure Data Factory
.NET
Data Warehouse (DWH)
Remote, Bengaluru (Bangalore)
4 - 8 yrs
₹10L - ₹28L / yr

 

Job Description - Sr Azure Data Engineer

 

 

Roles & Responsibilities:

  1. Hands-on programming in C# / .NET.
  2. Develop serverless applications using Azure Function Apps.
  3. Writing complex SQL Queries, Stored procedures, and Views. 
  4. Creating Data processing pipeline(s).
  5. Develop / Manage large-scale Data Warehousing and Data processing solutions.
  6. Provide clean, usable data and recommend data efficiency, quality, and data integrity.

 

Skills

  1. Should have working experience on C# /.Net.
  2. Proficient with writing SQL queries, Stored Procedures, and Views
  3. Should have worked on Azure Cloud Stack.
  4. Should have working experience in developing serverless code.
  5. Must have MANDATORILY worked on Azure Data Factory.

 

Experience 

  1. 4+ years of relevant experience

 

Job posted by
Jayasimha Kulkarni

Data Engineer

at Hammoq

Founded 2020  •  Products & Services  •  20-100 employees  •  Raised funding
pandas
NumPy
Data engineering
Data Engineer
Apache Spark
PySpark
Image Processing
Scikit-Learn
Machine Learning (ML)
Python
Web Scraping
Remote, Indore, Ujjain, Hyderabad, Bengaluru (Bangalore)
5 - 8 yrs
₹5L - ₹15L / yr
  • Does analytics to extract insights from raw historical data of the organization. 
  • Generates usable training dataset for any/all MV projects with the help of Annotators, if needed.
  • Analyses user trends, and identifies their biggest bottlenecks in Hammoq Workflow.
  • Tests the short/long term impact of productized MV models on those trends.
  • Skills - NumPy, Pandas, Apache Spark, PySpark, ETL (mandatory).
Job posted by
Nikitha Muthuswamy

Computer Vision Engineer

at ClearQuote

Founded 2018  •  Products & Services  •  20-100 employees  •  Profitable
Computer Vision
Deep Learning
Python
PyTorch
OpenCV
Remote only
1 - 3 yrs
₹8L - ₹12L / yr

Role:

1. Work on identifying and implementing data pre-processing pipelines for various datasets (images, videos) based on the problem statement

2. Experimenting with, identifying, and testing the right techniques / Deep Learning libraries to use to train models

3. Training models for inferencing on cloud and edge

4. Work on improving the accuracy of deployed models


Requirements :

1. Experience in state-of-the-art Deep Learning Convolutional Neural Networks 

2. Sound knowledge of Object detection, Semantic segmentation, Instance segmentation (Faster-RCNN, Single Shot Detector (SSD), Mask RCNN, MobileNet)

3. Hands-on experience in classic Image Processing techniques (feature engineering) using OpenCV and Pillow (a brief illustrative sketch follows these requirements)

4. Proficiency in Python along with OOPs concepts

5. Should be comfortable building ML models on various deep learning and machine learning libraries using PyTorch

6. Work experience in a startup environment is essential
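
For illustration only, a minimal OpenCV pre-processing sketch of the kind of classic image-processing step the requirements mention; the file name and parameter values are assumptions, not ClearQuote's pipeline:

# Illustrative sketch: classic OpenCV pre-processing steps a dataset pipeline
# might apply before model training. File name and parameters are placeholders.
import cv2

image = cv2.imread("car.jpg")                      # load a BGR image from disk
if image is None:
    raise FileNotFoundError("car.jpg not found")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)     # convert to grayscale
resized = cv2.resize(gray, (512, 512))             # normalize spatial size
blurred = cv2.GaussianBlur(resized, (5, 5), 0)     # denoise before edge detection
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)  # simple edge/feature map

cv2.imwrite("car_edges.png", edges)                # persist the processed result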

 

Job posted by
Venkat Sreeram

Data Engineer

at PAGO Analytics India Pvt Ltd

Founded 2019  •  Services  •  20-100 employees  •  Profitable
Python
PySpark
Microsoft Windows Azure
SQL Azure
Data Analytics
Java
J2EE
Data storage
MLS
Spark
Data lake
Remote, Bengaluru (Bangalore), Mumbai, NCR (Delhi | Gurgaon | Noida)
2 - 8 yrs
₹8L - ₹15L / yr
  • Be an integral part of large scale client business development and delivery engagements
  • Develop the software and systems needed for end-to-end execution on large projects
  • Work across all phases of SDLC, and use Software Engineering principles to build scaled solutions
  • Build the knowledge base required to deliver increasingly complex technology projects

  • Object-oriented languages (e.g. Python, PySpark, Java, C#, C++) and frameworks (e.g. J2EE or .NET)
  • Database programming using any flavour of SQL
  • Expertise in relational and dimensional modelling, including big data technologies
  • Exposure across all the SDLC process, including testing and deployment
  • Expertise in Microsoft Azure is mandatory, including components like Azure Data Factory, Azure Data Lake Storage, Azure SQL, Azure Databricks, HDInsight, ML Service etc.
  • Good knowledge of Python and Spark is required
  • Good understanding of how to enable analytics using cloud technology and MLOps
  • Experience in Azure Infrastructure and Azure DevOps will be a strong plus
Job posted by
Vijay Cheripally

Data Visualization

at TechUnity Software Systems India Pvt Ltd

Founded 1996  •  Services  •  20-100 employees  •  Profitable
Data Visualization
SQL
Stackless Python
R Programming
matplotlib
ggplot2
seaborn
Shiny
Dash
Coimbatore
2 - 5 yrs
₹3L - ₹4L / yr

We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights. In this role, you should be highly analytical with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research. Your goal will be to help our company analyze trends to make better decisions.
Responsibilities:

  • Identify valuable data sources and automate collection processes
  • Undertake preprocessing of structured and unstructured data
  • Analyze large amounts of information to discover trends and patterns
  • Build predictive models and machine-learning algorithms
  • Combine models through ensemble modeling (a brief illustrative sketch follows this list)
  • Present information using data visualization techniques
  • Propose solutions and strategies to business challenges
  • Collaborate with engineering and product development teams
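
For illustration only, a minimal sketch of combining models through ensemble modeling with scikit-learn; the synthetic data and choice of base estimators are assumptions for demonstration:

# Illustrative sketch: soft-voting ensemble of two base models on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Soft voting averages the predicted probabilities of the base models.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="soft",
)
print("mean CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())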

Requirements:

  • Proven experience as a Data Scientist or Data Analyst
  • Experience in data mining
  • Understanding of machine-learning and operations research
  • Knowledge of SQL, Python, R, ggplot2, matplotlib, seaborn, Shiny, Dash; familiarity with Scala, Java or C++ is an asset
  • Experience using business intelligence tools (e.g. Tableau) and data frameworks
  • Analytical mind and business acumen
  • Strong math skills in statistics, algebra
  • Problem-solving aptitude
  • Excellent communication and presentation skills
  • BSc/BE in Computer Science, Engineering or a relevant field; a graduate degree in Data Science or another quantitative field is preferred
Job posted by
Prithivi s