Data Scientist
Ather Energy
Posted by Shabin Belliappa
2 - 5 yrs
₹10L - ₹25L / yr
Bengaluru (Bangalore)
Skills
Machine Learning (ML)
Data Science
Python

YOU'LL BE OUR: Data Scientist
YOU'LL BE BASED AT: IBC Knowledge Park, Bangalore

YOU'LL BE ALIGNED WITH: Engineering Manager

YOU'LL BE A MEMBER OF: Data Intelligence

 

WHAT YOU'LL DO AT ATHER:

  • Work with the vehicle intelligence platform to evolve its algorithms and the platform itself, enhancing the ride experience.

  • Provide data-driven solutions, from simple to fairly complex insights, on the data collected from the vehicle.

  • Identify measures and metrics that can be used insightfully to make decisions across firmware components, and productionize them.

  • Support and partner with the data science lead and manager on fairly intensive projects around diagnostics, predictive modeling, BI and engineering data science.

  • Build and automate scripts that can be reused efficiently.

  • Build interactive reports/dashboards that can be reused across engineering teams for their iterative discussions and explorations.

  • Support monitoring and measuring the success of algorithms and features built, and lead innovation through objective reasoning and thinking.

  • Engage with the data science lead and the engineering team stakeholders on the solution approach and draft a plan of action.

  • Contribute to the product/team roadmap by generating and implementing innovative data- and analysis-based ideas as product features.

  • Guide the team through successful conceptualization and implementation of key product differentiators, supported by effective benchmarking.

 

HERE'S WHAT WE ARE LOOKING FOR:

• Good understanding of C++ and Golang programming, and of system architecture

• Experience with IoT and telemetry is a plus

• Proficient in R Markdown / Python / Grafana

• Proficient in SQL and NoSQL

• Proficient in R / Python programming

• Good understanding of ML techniques / Spark ML (a brief illustrative sketch follows below)
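For illustration only, a minimal sketch of the kind of Spark ML work the last point refers to: fitting a simple classifier on hypothetical ride-telemetry summaries. The input path, column names and the 0/1 label are assumptions, not Ather's actual data or pipeline.

```python
# A minimal, illustrative Spark ML sketch on hypothetical ride-telemetry data.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("telemetry-ml-sketch").getOrCreate()

# Hypothetical table: one row per ride summary.
rides = spark.read.parquet("/data/ride_summaries")  # assumed path

assembler = VectorAssembler(
    inputCols=["avg_speed_kmph", "distance_km", "ambient_temp_c"],  # assumed columns
    outputCol="features",
)
clf = LogisticRegression(featuresCol="features", labelCol="abnormal_drain")  # assumed 0/1 label

model = Pipeline(stages=[assembler, clf]).fit(rides)
model.transform(rides).select("ride_id", "prediction").show(5)
```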

 

YOU BRING TO ATHER:

• B.E/B.Tech, preferably in Computer Science

• 3 to 5 years of work experience as a Data Scientist

 


About Ather Energy

Founded: 2013
Stage: Raised funding
About
The automobile industry is in the midst of a huge technological disruption. Today, electric is the preferred choice because of its inherent efficiency that will shape urban commute and the cities of tomorrow. In parallel, the world around us is getting connected, enabling integration of devices and making our life experiences seamless. Intelligent vehicles will revolutionize our commute experience in the future and the Ather 450 stands at the cusp of this exciting reality.
Connect with the team
Shabin Belliappa
Arpit Agrawal
Akshita Jain
Swapnil Jain
Company social profiles
AngelList, blog, LinkedIn, Twitter, Facebook

Similar jobs

Remote only
3 - 7 yrs
₹15L - ₹24L / yr
Data Science
Machine Learning (ML)
Artificial Intelligence (AI)
Natural Language Processing (NLP)
R Programming
+4 more

  Senior Data Scientist

  • 6+ years of experience building data pipelines and deployment pipelines for machine learning models.
  • 4+ years' experience with ML/AI toolkits such as TensorFlow, Keras, AWS SageMaker, MXNet, H2O, etc.
  • 4+ years' experience developing ML/AI models in Python/R.
  • Must have the leadership skills to lead and deliver projects: be proactive, take ownership, interface with the business, represent the team and spread knowledge.
  • Strong knowledge of statistical data analysis and machine learning techniques (e.g., Bayesian, regression, classification, clustering, time series, deep learning).
  • Should be able to help deploy various models and tune them for better performance.
  • Working knowledge of operationalizing models in production using model repositories, APIs and data pipelines (see the sketch after this list).
  • Experience with machine learning and computational statistics packages.
  • Experience with Databricks and data lakes.
  • Experience with Dremio, Tableau, Power BI.
  • Experience working with Spark ML and Spark DL with PySpark is a big plus.
  • Working knowledge of relational database systems like SQL Server and Oracle.
  • Knowledge of deploying models on platforms like PCF, AWS, Kubernetes.
  • Good knowledge of continuous integration suites like Jenkins.
  • Good knowledge of web servers (Apache, NGINX).
  • Good knowledge of Git, GitHub, Bitbucket.
  • Java, R, and Python programming experience.
  • Should be very familiar with MS SQL, Teradata, Oracle, DB2.
  • Big Data: Hadoop.
  • Expert knowledge of BI tools, e.g. Tableau.
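For illustration only, a minimal sketch of operationalizing a model behind an API using a model repository, as mentioned above. It assumes a scikit-learn model already logged to an MLflow registry as "churn_model" (version 1) and MLFLOW_TRACKING_URI pointing at that registry; the model name and endpoint are hypothetical, not this team's actual stack.

```python
# A minimal sketch of serving a registered model over HTTP.
import mlflow.pyfunc
import pandas as pd
from fastapi import FastAPI

app = FastAPI()

# Pull a specific version from the model repository at startup
# (assumes MLFLOW_TRACKING_URI is configured for the registry).
model = mlflow.pyfunc.load_model("models:/churn_model/1")

@app.post("/predict")
def predict(records: list[dict]):
    # Score incoming JSON records; sklearn-flavored models return a NumPy array.
    frame = pd.DataFrame(records)
    return {"predictions": model.predict(frame).tolist()}

# Run locally with, e.g.: uvicorn serve_model:app  (module name is hypothetical)
```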

 

Technogen India PvtLtd
Posted by Mounika G
Hyderabad
11 - 16 yrs
₹24L - ₹27L / yr
Data Warehouse (DWH)
Informatica
ETL
Amazon Web Services (AWS)
SQL
+1 more

Daily and monthly responsibilities

  • Review and coordinate with business application teams on data delivery requirements.
  • Develop estimates and proposed delivery schedules in coordination with the development team.
  • Develop sourcing and data delivery designs.
  • Review the data model, metadata and delivery criteria for the solution.
  • Review and coordinate with the team on test criteria and the performance of testing.
  • Contribute to the design, development and completion of project deliverables.
  • Complete in-depth data analysis and contribute to strategic efforts.
  • Develop a complete understanding of how we manage data, with a focus on improving how data is sourced and managed across multiple business areas.

 

Basic Qualifications

  • Bachelor's degree.
  • 5+ years of data analysis experience working on business data initiatives.
  • Knowledge of Structured Query Language (SQL) and its use in data access and analysis (see the sketch after this list).
  • Proficient in data management, including data analysis capabilities.
  • Excellent verbal and written communication and high attention to detail.
  • Experience with Python.
  • Presentation skills for demonstrating system design and data analysis solutions.
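For illustration only, a minimal sketch of SQL-based data access and analysis from Python, as called out above. The database file, table and columns are hypothetical.

```python
# A minimal, illustrative SQL-from-Python analysis step.
import sqlite3
import pandas as pd

conn = sqlite3.connect("business_data.db")  # assumed local database

orders = pd.read_sql_query(
    """
    SELECT region,
           COUNT(*)         AS order_count,
           SUM(order_value) AS total_value
    FROM orders
    WHERE order_date >= '2023-01-01'
    GROUP BY region
    """,
    conn,
)
print(orders.sort_values("total_value", ascending=False))
```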


Molecular Connections
Posted by Molecular Connections
Bengaluru (Bangalore)
8 - 10 yrs
₹15L - ₹20L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more
  1. Big data developer with 8+ years of professional IT experience and expertise in Hadoop ecosystem components for ingestion, data modeling, querying, processing, storage, analysis and data integration, and in implementing enterprise-level big data systems.
  2. A skilled developer with strong problem-solving, debugging and analytical capabilities, who actively engages in understanding customer requirements.
  3. Expertise in Apache Hadoop ecosystem components like Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, Zookeeper, YARN, Flume, Pig, Nifi, Scala and Oozie.
  4. Hands-on experience in creating real-time data streaming solutions using Apache Spark core, Spark SQL & DataFrames, Kafka, Spark Streaming and Apache Storm (see the sketch after this list).
  5. Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, which include Name Node, Data Node, Resource Manager, Node Manager and Job History Server.
  6. Worked on both Cloudera and Hortonworks Hadoop distributions. Experience in managing Hadoop clusters using the Cloudera Manager tool.
  7. Well versed in installation, Configuration, Managing of Big Data and underlying infrastructure of Hadoop Cluster.
  8. Hands on experience in coding MapReduce/Yarn Programs using Java, Scala and Python for analyzing Big Data.
  9. Exposure to Cloudera development environment and management using Cloudera Manager.
  10. Extensively worked on Spark using Scala on a cluster for computation (analytics); installed it on top of Hadoop and performed advanced analytics by making use of Spark with Hive and SQL/Oracle.
  11. Implemented Spark using Python, utilizing DataFrames and the Spark SQL API for faster processing of data; handled importing data from different data sources into HDFS using Sqoop, performing transformations using Hive and MapReduce, and then loading the data into HDFS.
  12. Used Spark Data Frames API over Cloudera platform to perform analytics on Hive data.
  13. Hands on experience in MLlib from Spark which are used for predictive intelligence, customer segmentation and for smooth maintenance in Spark streaming.
  14. Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
  15. Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
  16. Working on creating data pipeline for different events of ingestion, aggregation, and load consumer response data into Hive external tables in HDFS location to serve as feed for tableau dashboards.
  17. Hands on experience in using Sqoop to import data into HDFS from RDBMS and vice-versa.
  18. In-depth Understanding of Oozie to schedule all Hive/Sqoop/HBase jobs.
  19. Hands on expertise in real time analytics with Apache Spark.
  20. Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
  21. Extensive experience in working with different ETL tool environments like SSIS, Informatica and reporting tool environments like SQL Server Reporting Services (SSRS).
  22. Experience in Microsoft cloud and setting cluster in Amazon EC2 & S3 including the automation of setting & extending the clusters in AWS Amazon cloud.
  23. Extensively worked on Spark using Python on cluster for computational (analytics), installed it on top of Hadoop performed advanced analytical application by making use of Spark with Hive and SQL.
  24. Strong experience and knowledge of real time data analytics using Spark Streaming, Kafka and Flume.
  25. Knowledge in installation, configuration, supporting and managing Hadoop Clusters using Apache, Cloudera (CDH3, CDH4) distributions and on Amazon web services (AWS).
  26. Experienced in writing ad hoc queries using Cloudera Impala, including Impala analytical functions.
  27. Experience in creating DataFrames using PySpark and performing operations on the DataFrames using Python.
  28. In depth understanding/knowledge of Hadoop Architecture and various components such as HDFS and MapReduce Programming Paradigm, High Availability and YARN architecture.
  29. Established multiple connections to different Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and provided access for pulling the information needed for analysis.
  30. Generated various kinds of knowledge reports using Power BI based on Business specification. 
  31. Developed interactive Tableau dashboards to provide a clear understanding of industry specific KPIs using quick filters and parameters to handle them more efficiently.
  32. Well experienced in projects using JIRA, testing, Maven and Jenkins build tools.
  33. Experienced in designing, building, deploying and utilizing almost all of the AWS stack (including EC2 and S3), focusing on high availability, fault tolerance and auto-scaling.
  34. Good experience with use-case development, with Software methodologies like Agile and Waterfall.
  35. Working knowledge of Amazon's Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
  36. Good working experience in importing data using Sqoop and SFTP from various sources like RDBMS, Teradata, Mainframes, Oracle and Netezza to HDFS, and performing transformations on it using Hive, Pig and Spark.
  37. Extensive experience in Text Analytics, developing different Statistical Machine Learning solutions to various business problems and generating data visualizations using Python and R.
  38. Proficient in NoSQL databases including HBase, Cassandra, MongoDB and its integration with Hadoop cluster.
  39. Hands on experience in Hadoop Big data technology working on MapReduce, Pig, Hive as Analysis tool, Sqoop and Flume data import/export tools.
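For illustration only, a minimal sketch of the streaming stack named in point 4: a Spark Structured Streaming job that reads from Kafka and maintains running counts. The broker address and topic are assumptions, and the job also needs the spark-sql-kafka connector package (e.g. via spark-submit --packages).

```python
# A minimal, illustrative Spark Structured Streaming job reading from Kafka.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
    .option("subscribe", "consumer_events")               # assumed topic
    .load()
)

# Kafka keys/values arrive as bytes; cast the key and count events per key.
counts = (
    events.select(F.col("key").cast("string").alias("event_key"))
    .groupBy("event_key")
    .count()
)

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```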
Hy-Vee
Bengaluru (Bangalore)
5 - 10 yrs
₹15L - ₹33L / yr
ETL
Informatica
Data Warehouse (DWH)
Python
Git
+4 more

Technical & Business Expertise:

- Hands-on integration experience with SSIS/MuleSoft
- Hands-on experience with Azure Synapse
- Proven advanced-level database development experience in SQL Server
- Proven advanced-level understanding of data lakes
- Proven intermediate-level experience writing Python or a similar programming language
- Intermediate understanding of cloud platforms (GCP)
- Intermediate understanding of data warehousing
- Advanced understanding of source control (GitHub)

Angel One
Posted by Vineeta Singh
Remote, Mumbai
3 - 7 yrs
₹5L - ₹15L / yr
Data Science
Data Scientist
Python
SQL
R Language
+1 more

Role:

  • Understand and translate statistics and analytics to address business problems
  • Responsible for helping with data preparation and data pulls, the first step in machine learning
  • Should be able to cut and slice data to extract interesting insights (see the sketch after this list)
  • Model development for better customer engagement and retention
  • Hands-on experience with relevant tools like SQL (expert), Excel, R/Python
  • Working on strategy development to increase business revenue
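For illustration only, a minimal sketch of the cut-and-slice analysis mentioned above: segmenting customers and comparing retention with pandas. The file and column names are hypothetical.

```python
# A minimal, illustrative cut-and-slice analysis with pandas.
import pandas as pd

customers = pd.read_csv("customers.csv")  # assumed columns used below

summary = (
    customers
    .assign(tenure_band=pd.cut(customers["tenure_months"], bins=[0, 6, 12, 24, 60]))
    .groupby(["segment", "tenure_band"], observed=True)["retained"]
    .mean()
    .rename("retention_rate")
    .reset_index()
)
print(summary.sort_values("retention_rate", ascending=False).head(10))
```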

 


Requirements:

  • Hands-on experience with relevant tools like SQL (expert), Excel, R/Python
  • Strong knowledge of statistics
  • Should be able to do data scraping and data mining
  • Be self-driven, and show the ability to deliver on ambiguous projects
  • An ability and interest in working in a fast-paced, ambiguous and rapidly-changing environment
  • Should have worked on business projects for an organization, e.g., customer acquisition, customer retention.
Pune
2 - 6 yrs
₹12L - ₹16L / yr
SQL
ETL
Data engineering
Big Data
Java
+2 more
  • Design, create, test, and maintain data pipeline architecture in collaboration with the Data Architect.
  • Build the infrastructure required for extraction, transformation, and loading of data from a wide variety of data sources using Java, SQL, and Big Data technologies.
  • Support the translation of data needs into technical system requirements. Support in building complex queries required by the product teams.
  • Build data pipelines that clean, transform, and aggregate data from disparate sources (see the sketch after this list).
  • Develop, maintain and optimize ETLs to increase data accuracy, data stability, data availability, and pipeline performance.
  • Engage with Product Management and Business to deploy and monitor products/services on cloud platforms.
  • Stay up to date with advances in data persistence and big data technologies, and run pilots to design data architecture that scales with the growing consumer-experience data sets.
  • Handle data integration, consolidation, and reconciliation activities for digital consumer / medical products.
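For illustration only, a minimal sketch of a small ETL step of the kind described in the first bullet: extract from two disparate (hypothetical) sources, clean and join them, then aggregate and load a reporting table. Source names and columns are assumptions.

```python
# A minimal, illustrative extract-transform-load step with pandas and SQLite.
import sqlite3
import pandas as pd

# Extract: a CSV export and a relational table (both assumed to exist).
events = pd.read_csv("web_events.csv", parse_dates=["event_time"])
conn = sqlite3.connect("warehouse.db")
users = pd.read_sql_query("SELECT user_id, country FROM users", conn)

# Transform: drop malformed rows and join the sources.
events = events.dropna(subset=["user_id"]).astype({"user_id": "int64"})
joined = events.merge(users, on="user_id", how="left")

# Aggregate and load into a reporting table.
daily = (
    joined.groupby([joined["event_time"].dt.date, "country"])
    .size()
    .rename("event_count")
    .reset_index()
    .rename(columns={"event_time": "event_date"})
)
daily.to_sql("daily_events_by_country", conn, if_exists="replace", index=False)
```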

Job Qualifications:

  • Bachelor's or Master's degree in Computer Science, Information Management, Statistics or a related field
  • 5+ years of experience in the Consumer or Healthcare industry in an analytical role, with a focus on building data pipelines, querying data, analyzing, and clearly presenting analyses to members of the data science team.
  • Technical expertise with data models and data mining.
  • Hands-on knowledge of programming languages: Java, Python, R, and Scala.
  • Strong knowledge of big data tools like Snowflake, AWS Redshift, Hadoop, MapReduce, etc.
  • Knowledge of tools like AWS Glue, S3, AWS EMR, streaming data pipelines, and Kafka/Kinesis is desirable.
  • Hands-on knowledge of SQL and NoSQL database design.
  • Knowledge of CI/CD for building and hosting solutions.
  • An AWS certification is an added advantage.
  • Strong knowledge of visualization tools like Tableau and QlikView is an added advantage.
  • A team player capable of working and integrating across cross-functional teams for implementing project requirements. Experience in technical requirements gathering and documentation.
  • Ability to work effectively and independently in a fast-paced agile environment with tight deadlines
  • A flexible, pragmatic, and collaborative team player with the innate ability to engage with data architects, analysts, and scientists
Global content marketplace
Agency job
via Qrata by Mrunal Kokate
Mumbai
4 - 8 yrs
₹20L - ₹30L / yr
Machine Learning (ML)
Natural Language Processing (NLP)
Python

We are building a global content marketplace that brings companies and content creators together to scale up content creation processes across 50+ content verticals and 150+ industries. Over the past 2.5 years, we’ve worked with companies like India Today, Amazon India, Adobe, Swiggy, Dunzo, Businessworld, Paisabazaar, IndiGo Airlines, Apollo Hospitals, Infoedge, Times Group, Digit, BookMyShow, UpGrad, Yulu, YourStory, and 350+ other brands.

Our mission is to become the world’s largest content creation and distribution platform for all kinds of content creators and brands.

 

Our Team

 

We are a 25+ member company that is scaling up rapidly in both team size and ambition.

If we were to define the kind of people and the culture we have, it would be -

a) Individuals with an Extreme Sense of Passion About Work

b) Individuals with Strong Customer and Creator Obsession

c) Individuals with Extraordinary Hustle, Perseverance & Ambition

We are on the lookout for individuals who are always open to going the extra mile and thrive in a fast-paced environment. We are strong believers in building a great, enduring company that can outlast its builders and create a massive impact on the lives of our employees, creators, and customers alike.

 

Our Investors

 

We are fortunate to be backed by some of the industry’s most prolific angel investors - Kunal Bahl and Rohit Bansal (Snapdeal founders), YourStory Media. (Shradha Sharma); Dr. Saurabh Srivastava, Co-founder of IAN and NASSCOM; Slideshare co-founder Amit Ranjan; Indifi's Co-founder and CEO Alok Mittal; Sidharth Rao, Chairman of Dentsu Webchutney; Ritesh Malik, Co-founder and CEO of Innov8; Sanjay Tripathy, former CMO, HDFC Life, and CEO of Agilio Labs; Manan Maheshwari, Co-founder of WYSH; and Hemanshu Jain, Co-founder of Diabeto.
Backed by Lightspeed Venture Partners



Job Responsibilities:
● Design, develop, test, deploy, maintain and improve ML models
● Implement novel learning algorithms and recommendation engines
● Apply Data Science concepts to solve routine problems of target users
● Translate business analysis needs into well-defined machine learning problems, and select appropriate models and algorithms
● Create the architecture for, implement, maintain and monitor various data source pipelines that can be used across different types of data sources
● Monitor performance of the architecture and conduct optimization
● Produce clean, efficient code based on specifications
● Verify and deploy programs and systems
● Troubleshoot, debug and upgrade existing applications
● Guide junior engineers for productive contribution to the development
ML and NLP Engineer

The ideal candidate must have:
● 4 or more years of experience in ML engineering
● Proven experience in NLP (see the sketch after this list)
● Familiarity with generative language models such as GPT-3
● Ability to write robust code in Python
● Familiarity with ML frameworks and libraries
● Hands-on experience with AWS services like SageMaker and Personalize
● Exposure to state-of-the-art techniques in ML and NLP
● Understanding of data structures, data modeling, and software architecture
● Outstanding analytical and problem-solving skills
● Team player, with the ability to work cooperatively with other engineers
● Ability to make quick decisions in high-pressure environments with limited information.
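For illustration only, a minimal NLP sketch related to the requirements above: a TF-IDF plus logistic regression classifier on toy data, the kind of simple baseline typically tried before reaching for large generative models. The texts and labels are made up.

```python
# A minimal, illustrative NLP baseline on toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Great article, very well researched",
    "The draft misses the brief entirely",
    "Clear structure and engaging tone",
    "Too many factual errors to publish",
]
labels = [1, 0, 1, 0]  # 1 = approve, 0 = needs rework (toy labels)

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["Well structured but needs fact checking"]))
```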
Remote, Dubai
7 - 12 yrs
₹25L - ₹25L / yr
Data Science
Machine Learning (ML)
Python
Oracle
R Programming

High-Level Scope of Work:

 

  • Work with the AI/Analytics team to prioritize identified machine learning use cases for development and rollout.
  • Meet and understand current retail/marketing requirements and how the AI/ML solution will address and automate the decision process.
  • Develop AI/ML programs using the Dataiku solution, Python or open-source tech, with a focus on delivering high-quality, accurate ML prediction models.
  • Gather additional and external data sources to support the AI/ML model as desired.
  • Support the ML model and fine-tune it to ensure consistently high accuracy.
  • Example use cases: customer segmentation, product recommendation, price optimization, personalized retail offers, next best location for business establishment, CCTV computer vision, NLP and voice recognition solutions (see the segmentation sketch after this list).
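For illustration only, a minimal sketch of the first use case listed above: customer segmentation with k-means on synthetic data. The feature names and values are assumptions, not the client's data.

```python
# A minimal, illustrative customer-segmentation sketch with k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=0)
# Hypothetical features: annual spend, visit frequency, average basket size.
customers = rng.normal(loc=[1200, 18, 45], scale=[400, 6, 15], size=(500, 3))

features = StandardScaler().fit_transform(customers)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)

# Cluster sizes give a quick sanity check on the segmentation.
print(np.bincount(kmeans.labels_))
```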

Required technology expertise:

  • Deep knowledge and understanding of machine learning algorithms (supervised/unsupervised learning, deep learning models).
  • Hands-on experience of at least 5 years with the Python and R statistical programming languages.
  • Strong database development knowledge using SQL and PL/SQL.
  • Must have experience using commercial data science solutions, particularly Dataiku (Alteryx, SAS, Azure ML, Google ML, Oracle ML is a plus).
  • Strong hands-on experience with big data solution architecture and optimization for AI/ML workloads.
  • Hands-on experience with data analytics and BI tools, particularly Oracle OBIEE and Power BI.
  • Have implemented and developed at least 3 successful AI/ML projects with tangible business outcomes in a retail-focused industry.
  • Have at least 5 years of experience in the retail industry and customer-focused business.
  • Ability to communicate with business owners and stakeholders to understand their current issues and provide machine learning solutions accordingly.

Qualifications

  • Bachelor's or Master's degree in Data Science, Artificial Intelligence, or Computer Science
  • Certified as a Data Scientist or Machine Learning Expert.
SigTuple
Posted by Sneha Chakravorty
Bengaluru (Bangalore)
2 - 6 yrs
₹4L - ₹20L / yr
Data Science
R Programming
Python
Machine Learning (ML)
We are looking for highly passionate and enthusiastic players for solving problems in medical data analysis using a combination of image processing, machine learning and deep learning. As a Senior Computer Scientist at SigTuple, you will have the onus of creating and leveraging state-of-the-art algorithms in machine learning, image processing and AI which will impact billions of people across the world by creating healthcare solutions that are accurate and affordable. You will collaborate with our current team of super awesome geeks in cracking super complex problems in a simple way by creating experiments, algorithms and prototypes that not only yield high accuracy but are also designed and engineered to scale. We believe in innovation - needless to say, you will be part of creating intellectual property like patents and contributing to the research community by publishing papers - it is something that we value the most.

What we are looking for:

  • Hands-on experience along with a strong understanding of foundational algorithms in either machine learning, computer vision or deep learning. Prior experience of applying these techniques to images and videos is good to have (a small illustrative sketch follows below).
  • Hands-on experience in building and implementing advanced statistical analysis, machine learning and data mining algorithms.
  • Programming experience in C, C++, Python

What should you have:

  • 2 - 5 years of relevant experience in solving problems using machine learning or computer vision
  • Bachelor's degree, Master's degree or PhD in computer science or related fields.
  • Be an innovative and creative thinker, somebody who is not afraid to try something new and inspire others to do so.
  • Thrive in a fast-paced and fun environment.
  • Work with a bunch of data scientist geeks and disruptors striving for a big cause.

What SigTuple can offer:

You will be working with an incredible team of smart and supportive people, driven by a common force to change things for the better. With an opportunity to deliver high-calibre mobile and desktop solutions integrated with hardware that will transform healthcare from the ground up, there will ultimately be different challenges for you to face. Suffice to say that if you thrive in these environments, the buzz alone will keep you energized. In short, you will snag a place at the table of one of the most vibrant start-ups in the industry!
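For illustration only (not SigTuple's actual pipeline), a minimal classical computer-vision sketch in the spirit of the first requirement: threshold a grayscale smear image and count connected blobs as a crude proxy for cell candidates. The input file is hypothetical.

```python
# A minimal, illustrative classical computer-vision step with OpenCV.
import cv2
import numpy as np

image = cv2.imread("smear_sample.png", cv2.IMREAD_GRAYSCALE)
assert image is not None, "provide a sample grayscale image"

# Otsu thresholding separates foreground blobs from the background.
_, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Remove small specks before counting connected components.
kernel = np.ones((3, 3), np.uint8)
cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

num_labels, _ = cv2.connectedComponents(cleaned)
print(f"Candidate blobs detected: {num_labels - 1}")  # label 0 is the background
```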
LimeTray
Posted by Tanika Monga
NCR (Delhi | Gurgaon | Noida)
4 - 6 yrs
₹15L - ₹18L / yr
Machine Learning (ML)
Python
Cassandra
MySQL
Apache Kafka
+2 more
Requirements:

  • Minimum 4 years of work experience in building, managing and maintaining analytics applications
  • B.Tech/BE in CS/IT from Tier 1/2 institutes
  • Strong fundamentals of data structures and algorithms
  • Good analytical and problem-solving skills
  • Strong hands-on experience in Python
  • In-depth knowledge of queueing systems (Kafka/ActiveMQ/RabbitMQ); see the sketch after this list
  • Experience in building data pipelines and real-time analytics systems
  • Experience in SQL (MySQL) and NoSQL (Mongo/Cassandra) databases is a plus
  • Understanding of service-oriented architecture
  • Delivered high-quality work with significant contributions
  • Expert in Git, unit tests, technical documentation and other development best practices
  • Experience in handling small teams
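For illustration only, a minimal sketch of working with one of the queueing systems named above (Kafka) via kafka-python: producing and consuming JSON events. It assumes a broker running on localhost:9092 and a hypothetical topic name.

```python
# A minimal, illustrative kafka-python producer/consumer round trip.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("order_events", {"order_id": 123, "status": "placed"})
producer.flush()

consumer = KafkaConsumer(
    "order_events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    consumer_timeout_ms=5000,  # stop iterating if no new messages arrive
)
for message in consumer:
    print(message.value)
```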