Senior Data Scientist

at Bidgely

Agency job
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹40L / yr (ESOP available)
Full time
Skills
Data Science
Deep Learning
Machine Learning (ML)
Natural Language Processing (NLP)
Artificial Neural Network (ANN)
Matlab
Computer Vision
Python
TensorFlow
Keras
Weka
Roles and Responsibilities

● Research and develop advanced statistical and machine learning models for
analysis of large-scale, high-dimensional data.

● Dig deeper into data, understand characteristics of data, evaluate alternate
models and validate hypotheses through theoretical and empirical approaches.

● Productize proven or working models into production-quality code.

● Collaborate with product management, marketing, and engineering teams in Business Units to elicit and understand their requirements and challenges, and develop potential solutions.

● Stay current with the latest research and technology ideas; share knowledge by
clearly articulating results and ideas to key decision-makers.

● File patents for innovative solutions that add to the company's IP portfolio

Requirements

● 4 to 6 years of strong experience in data mining, machine learning and
statistical analysis.

● BS/MS/Ph.D. in Computer Science, Statistics, Applied Math, or related areas from premier institutes (only IITs / IISc / BITS / top NITs or top US universities should apply)

● Experience in productizing models to code in a fast-paced start-up
environment.

● Fluency in analytical tools such as Matlab, R, Weka etc.

● Strong intuition for data and a keen aptitude for large-scale data analysis

● Strong communication and collaboration skills.

About Bidgely

Bidgely transforms the way utilities engage their consumers. By leveraging the power of disaggregation, Bidgely provides personalized and actionable insights that help customers save energy and enable utilities to achieve customer delight and build enduring consumer relationships. The company works with utilities serving residential customers around the world.
Founded 2011  •  Product  •  100-500 employees  •  Raised funding

Similar jobs

Data scientist

at SocialPrachar.com

Founded 2014  •  Services  •  20-100 employees  •  Profitable
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
Hyderabad
0 - 1 yrs
₹1.8L - ₹3L / yr

We are looking for a Data Science / AI professional for our KPHB, Hyderabad branch. Requirements:

• Minimum 6 months to 1 year of experience in Data Science / AI

• Proven experience, with a good GitHub profile and projects

• Strong grasp of Data Science & AI concepts

• Good with Python, ML, statistics, Deep Learning, NLP, OpenCV, etc.

• Good communication and presentation skills

Job posted by
Mahesh Ch

Computer Vision

at Quidich

Founded 2014  •  Products & Services  •  20-100 employees  •  Bootstrapped
Computer Vision
TensorFlow
C++
slam
EKF
Linear algebra
3D Geometry
Probability
3D rendering
Machine Learning (ML)
Deep Learning
Mumbai
0 - 9 yrs
₹2L - ₹14L / yr

About Quidich


Quidich Innovation Labs pioneers products and customized technology solutions for the Sports Broadcast & Film industry. With a mission to bring machines and machine learning to sports, we use camera technology to develop services using remote controlled systems like drones and buggies that add value to any broadcast or production. Quidich provides services to some of the biggest sports & broadcast clients in India and across the globe. A few recent projects include Indian Premier League, ICC World Cup for Men and Women, Kaun Banega Crorepati, Bigg Boss, Gully Boy & Sanju.

What’s Unique About Quidich?

  • Your work will be consumed by millions of people within months of your joining and will impact consumption patterns of how live sport is viewed across the globe.
  • You work with passionate, talented, and diverse people who inspire and support you to achieve your goals.
  • You work in a culture of trust, care, and compassion.
  • You have the autonomy to shape your role, and drive your own learning and growth. 

Opportunity

  • You will be a part of world class sporting events
  • Your contribution to the software will help shape the final output seen on television
  • You will have an opportunity to work in live broadcast scenarios
  • You will work in a close knit team that is driven by innovation

Role

We are looking for a tech enthusiast who can work with us to help further the development of our Augmented Reality product, Spatio, to keep us ahead of the technology curve. We are one of the few companies in the world currently offering this product for live broadcast. We have a tight product roadmap that needs enthusiastic people to solve problems in the realm of software development and computer vision systems. Qualified candidates will be driven self-starters, robust thinkers, strong collaborators, and adept at operating in a highly dynamic environment. We look for candidates that are passionate about the product and embody our values.




Responsibilities

  • Working with the research team to develop, evaluate and optimize various state of the art algorithms.
  • Deploying high performance, readable, and reliable code on edge devices or any other target environments.
  • Continuously exploring new frameworks and identifying ways to incorporate those in the product.
  • Collaborating with the core team to bring ideas to life and keep pace with the latest research in Computer Vision, Deep Learning etc.

Minimum Qualifications, Skills and Competencies

  • B.E./B.Tech or Master's in Computer Science, Mathematics, or relevant experience
  • 3+ years of experience with computer vision algorithms such as SfM/SLAM, optical flow, visual-inertial odometry
  • Experience in sensor fusion (camera, IMU, lidar) and in probabilistic filters such as EKF, UKF
  • Proficiency in programming (C++) and algorithms
  • Strong mathematical understanding: linear algebra, 3D geometry, probability.

Preferred Qualifications, Skills and Competencies

  • Proven experience in optical flow, multi-camera geometry, 3D reconstruction
  • Strong background in Machine Learning and Deep Learning frameworks.

Reporting To: Product Lead 

Joining Date: Immediate (Mumbai)

Job posted by
Parag Sule

Data Scientist - Analytics

at Hypersonix Inc

Founded 2018  •  Product  •  100-500 employees  •  Profitable
Data Science
Data Scientist
R Programming
Python
Amazon Web Services (AWS)
Analytics
Machine Learning (ML)
SQL
Clustering
redshift
R
Remote, Bengaluru (Bangalore)
6 - 12 yrs
₹15L - ₹40L / yr

Hypersonix.ai - Data Scientist

As a Data Scientist with Hypersonix (https://hypersonix.ai/), you will play a key role in translating data into insights for our clients. You will design, develop, and implement processes and frameworks in our AI platform that help our clients make sense of the data they generate and consume the insights to make informed decisions.

Role – Analytics

Job Responsibilities

Solve business problems and develop business solutions: use problem-solving methodologies to propose creative solutions to business problems. Recommend, design, and develop state-of-the-art data-driven analyses using statistical and advanced analytics methodologies. Develop models and recommend insights. Form hypotheses and run experiments to gain empirical insights and validate them. Identify and eliminate possible obstacles and identify alternative creative solutions.

  • Experience in design and review of new solution concepts and leading the delivery of high-impact analytics solutions and programs for global clients
  • Identify opportunities and partner with key stakeholders to set priorities, manage expectations, facilitate change required to activate insights, and measure the impact
  • Deconstruct problems and goals to form a clear picture for hypothesis generation and use best practices around decision science approaches and technology to solve business challenges;
  • Integrate custom analytical solutions (e.g., predictive modeling, segmentation, issue tree frameworks) to support data-driven decision-making;
  • Translate and communicate results, recommendations, and opportunities to improve data solutions to internal and external leadership with easily consumable reports and presentations.
  • Should be able to apply domain knowledge to functional areas like market size estimation, business growth strategy, strategic revenue management, marketing effectiveness
  • Have business acumen to manage revenues profitably and meet financial goals consistently. Able to quantify business value for clients and create win-win commercial propositions.
  • Good thought leadership & ability to structure & solve business problems, innovating, where required
  • Must have the ability to adapt to changing business priorities in a fast-paced business environment

 

Technical Expertise

 

  • Should have the ability to handle structured /unstructured data and have prior experience in loading, validating, and cleaning various types of data
  • Should have a very good understanding of data structures and algorithms
  • Experience leading and working independently on end-to-end projects in a fast-paced environment is strongly preferred
  • Advanced knowledge of SQL/Redshift, with proficiency in Python/R
  • Sound knowledge of advanced analytics and machine learning techniques such as segmentation/clustering, recommendation engines, propensity models, and forecasting to drive growth throughout the customer lifecycle. Should be able to evaluate and bring in new advanced techniques to enhance the value-add for clients
Job posted by
Gowshini Maheswaran

Data Scientist

at Networking & Cybersecurity Solutions

Agency job
via Multi Recruit
Data Science
Data Scientist
R Programming
Python
Amazon Web Services (AWS)
Spark
Kafka
Bengaluru (Bangalore)
4 - 8 yrs
₹40L - ₹60L / yr
  • Research and develop statistical learning models for data analysis
  • Collaborate with product management and engineering departments to understand company needs and devise possible solutions
  • Keep up-to-date with latest technology trends
  • Communicate results and ideas to key decision makers
  • Implement new statistical or other mathematical methodologies as needed for specific models or analysis
  • Optimize joint development efforts through appropriate database use and project design

Qualifications/Requirements:

  • Masters or PhD in Computer Science, Electrical Engineering, Statistics, Applied Math, or equivalent fields, with a strong mathematical background
  • Excellent understanding of machine learning techniques and algorithms, including clustering, anomaly detection, optimization, neural networks, etc.
  • 3+ years of experience building data science-driven solutions, including data collection, feature selection, model training, and post-deployment validation
  • Strong hands-on coding skills (preferably in Python) processing large-scale data sets and developing machine learning models
  • Familiarity with one or more machine learning or statistical modeling tools such as NumPy, scikit-learn, MLlib, TensorFlow (a brief illustrative sketch follows this list)
  • Good team player with excellent written, verbal, and presentation skills
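A rough, illustrative sketch of the anomaly-detection work this requirement points at (not part of the posting itself); it uses scikit-learn's IsolationForest on synthetic data, and the feature sizes and contamination rate are assumptions chosen only for the example:

    # Minimal sketch: anomaly detection on synthetic feature vectors (illustrative only).
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Assumed data: mostly "normal" feature vectors plus a few injected outliers.
    normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 8))
    outliers = rng.normal(loc=8.0, scale=1.0, size=(10, 8))
    X = np.vstack([normal, outliers])

    model = IsolationForest(contamination=0.01, random_state=0).fit(X)
    flags = model.predict(X)  # -1 marks suspected anomalies
    print("flagged:", int((flags == -1).sum()), "of", len(X))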

Desired Experience:

  • Experience with AWS, S3, Flink, Spark, Kafka, Elastic Search
  • Knowledge and experience with NLP technology
  • Previous work in a start-up environment
Job posted by
Ashwini Miniyar

Big Data Engineer

at Netmeds.com

Founded 2015  •  Product  •  500-1000 employees  •  Raised funding
Big Data
Hadoop
Apache Hive
Scala
Spark
Datawarehousing
Machine Learning (ML)
Deep Learning
SQL
Data modeling
PySpark
Python
Amazon Web Services (AWS)
Java
Cassandra
DevOps
HDFS
Chennai
2 - 5 yrs
₹6L - ₹25L / yr

We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouses and Data Lakes for an organization. This role would closely collaborate with the Data Science team and assist the team in building and deploying machine learning and deep learning models on big data analytics platforms.

Roles and Responsibilities:

  • Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
  • Develop programs in Scala and Python as part of data cleaning and processing.
  • Assemble large, complex data sets that meet functional / non-functional business requirements, and foster data-driven decision making across the organization.
  • Design and develop distributed, high-volume, high-velocity, multi-threaded event processing systems.
  • Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.
  • Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Provide high operational excellence guaranteeing high availability and platform stability.
  • Closely collaborate with the Data Science team and assist the team in building and deploying machine learning and deep learning models on big data analytics platforms.

Skills:

  • Experience with Big Data pipeline, Big Data analytics, Data warehousing.
  • Experience with SQL/No-SQL, schema design and dimensional data modeling.
  • Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with the Big Data technology stack such as HBase, Hadoop, Hive, MapReduce.
  • Experience in designing systems that process structured as well as unstructured data at large scale.
  • Experience in AWS/Spark/Java/Scala/Python development.
  • Strong skills in PySpark (Python & Spark): the ability to create, manage, and manipulate Spark DataFrames, and expertise in Spark query tuning and performance optimization (see the sketch after this list).
  • Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
  • Prior exposure to streaming data sources such as Kafka.
  • Should have knowledge on Shell Scripting and Python scripting.
  • High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
  • Experience with NoSQL databases such as Cassandra / MongoDB.
  • Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
  • Experience building and deploying applications on on-premise and cloud-based infrastructure.
  • Having a good understanding of machine learning landscape and concepts. 
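A minimal PySpark sketch of the DataFrame skills listed above (illustrative only, not part of the posting); the table contents are made up, and a real pipeline would read from HDFS/S3/Hive rather than an in-memory list:

    # Minimal sketch: create, filter, and aggregate a Spark DataFrame (illustrative only).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-example").getOrCreate()

    # Assumed sample data; a real job would read from HDFS/S3/Hive instead.
    orders = spark.createDataFrame(
        [("o1", "electronics", 1200.0), ("o2", "pharmacy", 350.0), ("o3", "pharmacy", 80.0)],
        ["order_id", "category", "amount"],
    )

    revenue_by_category = (
        orders
        .filter(F.col("amount") > 100)                     # drop small orders
        .groupBy("category")
        .agg(F.sum("amount").alias("total_revenue"))
        .orderBy(F.desc("total_revenue"))
    )
    revenue_by_category.explain()  # query tuning usually starts from the physical plan
    revenue_by_category.show()

    spark.stop()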

 

Qualifications and Experience:

Engineering graduates and postgraduates, preferably in Computer Science, from premier institutions, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.

Certifications:

Good to have at least one of the Certifications listed here:

    AZ 900 - Azure Fundamentals

    DP 200, DP 201, DP 203, AZ 204 - Data Engineering

    AZ 400 - Devops Certification

Job posted by
Vijay Hemnath

Product Analyst

at upGrad

Founded 2015  •  Product  •  100-500 employees  •  Raised funding
product analyst
Data Analytics
Python
SQL
Tableau
Bengaluru (Bangalore), Mumbai
2 - 5 yrs
₹14L - ₹20L / yr
 

About Us

upGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience to deliver tangible career impact to individuals at scale. upGrad currently offers programs in Data Science, Machine Learning, Product Management, Digital Marketing, and Entrepreneurship, etc. upGrad is looking for people passionate about management and education to help design learning programs for working professionals to stay sharp and stay relevant and help build the careers of tomorrow.

  • upGrad was awarded the Best Tech for Education by IAMAI for 2018-19
  • upGrad was also ranked as one of the LinkedIn Top Startups 2018: The 25 most sought-after startups in India
  • upGrad was earlier selected as one of the top ten most innovative companies in India by FastCompany
  • We were also covered by the Financial Times along with other disruptors in Ed-Tech
  • upGrad is the official education partner for the Government of India - Startup India program
  • Our program with IIIT B has been ranked the #1 program in the country in the domain of Artificial Intelligence and Machine Learning

Role Summary

We are looking for an analytically inclined, insights-driven Data Analyst to make our organisation more data-driven. In this role, you will be responsible for creating dashboards to drive insights for product and business teams, supporting day-to-day decisions as well as long-term impact assessment and measuring the efficacy of different products and teams. The growing nature of the team will require you to be in touch with all of the teams at upGrad. Are you the "go to" person everyone looks to for data? Then this role is for you.

Roles & Responsibilities

  • Lead and own the analysis of highly complex data sources, identifying trends and patterns in data, and provide insights/recommendations based on analysis results
  • Build, maintain, own, and communicate detailed reports to assist Marketing, Growth/Learning Experience, and other Business/Executive teams
  • Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions
  • Analyze data and generate insights in the form of user analysis, user segmentation, performance reports, etc.
  • Facilitate review sessions with management, business users, and other team members
  • Design and create visualizations to present actionable insights related to data sets and business questions at hand
  • Develop intelligent models around channel performance, user profiling, and personalization

Skills Required

  • 3-5 yrs of hands-on experience with product-related analytics and reporting
  • Experience building dashboards in Tableau or other data visualization tools such as D3
  • Strong data, statistics, and analytical skills with a good grasp of SQL
  • Programming experience in Python is a must
  • Comfortable managing large data sets
  • Good Excel/data management skills

Job posted by
Priyanka Muralidharan

Data Engineer

at PAGO Analytics India Pvt Ltd

Founded 2019  •  Services  •  20-100 employees  •  Profitable
Python
PySpark
Microsoft Windows Azure
SQL Azure
Data Analytics
Java
J2EE
Data storage
MLS
Spark
Data lake
Remote, Bengaluru (Bangalore), Mumbai, NCR (Delhi | Gurgaon | Noida)
2 - 8 yrs
₹8L - ₹15L / yr
Be an integral part of large scale client business development and delivery engagements
Develop the software and systems needed for end-to-end execution on large projects
Work across all phases of SDLC, and use Software Engineering principles to build scaled solutions
Build the knowledge base required to deliver increasingly complex technology projects


Object-oriented languages (e.g. Python, PySpark, Java, C#, C++) and frameworks (e.g. J2EE or .NET)
Database programming using any flavour of SQL
Expertise in relational and dimensional modelling, including big data technologies
Exposure across the entire SDLC process, including testing and deployment
Expertise in Microsoft Azure is mandatory, including components like Azure Data Factory, Azure Data Lake Storage, Azure SQL, Azure Databricks, HDInsight, ML Service, etc.
Good knowledge of Python and Spark is required
Good understanding of how to enable analytics using cloud technology and MLOps
Experience in Azure infrastructure and Azure DevOps will be a strong plus
Job posted by
Vijay Cheripally

ETL developer

at fintech

Agency job
via Talentojcom
ETL
Druid Database
Java
Scala
SQL
Tableau
Python
Remote only
2 - 6 yrs
₹9L - ₹30L / yr
● Education in a science, technology, engineering, or mathematics discipline, preferably a bachelor's degree or equivalent experience
● Knowledge of database fundamentals and fluency in advanced SQL, including concepts such as windowing functions
● Knowledge of popular scripting languages for data processing such as Python, as well as familiarity with common frameworks such as Pandas (see the sketch after this list)
● Experience building streaming ETL pipelines with tools such as Apache Flink, Apache Beam, Google Cloud Dataflow, DBT, and equivalents
● Experience building batch ETL pipelines with tools such as Apache Airflow, Spark, DBT, or custom scripts
● Experience working with messaging systems such as Apache Kafka (and hosted equivalents such as Amazon MSK) and Apache Pulsar
● Familiarity with BI applications such as Tableau, Looker, or Superset
● Hands-on coding experience in Java or Scala
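A small, hedged illustration (not from the posting) of the windowing idea above, done in Pandas rather than SQL; the column names and values are assumptions:

    # Minimal sketch: a window-style computation (running total per user), illustrative only.
    import pandas as pd

    # Assumed sample events; a real ETL job would read these from a warehouse or a Kafka topic.
    events = pd.DataFrame({
        "user_id": ["a", "a", "b", "a", "b"],
        "ts": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-02", "2024-01-03", "2024-01-04"]),
        "amount": [10.0, 5.0, 7.0, 3.0, 2.0],
    })

    events = events.sort_values(["user_id", "ts"])
    # Equivalent in spirit to SQL: SUM(amount) OVER (PARTITION BY user_id ORDER BY ts)
    events["running_total"] = events.groupby("user_id")["amount"].cumsum()
    print(events)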
Job posted by
Raksha Pant

Deep Learning Computer Vision Data Scientist

at Number Theory

Founded 2016  •  Product  •  20-100 employees  •  Raised funding
Deep Learning
OpenCV
Data Science
Keras
TensorFlow
pytorch
NCR (Delhi | Gurgaon | Noida)
1 - 7 yrs
₹20L - ₹35L / yr
We are looking for a smart Deep Learning / Computer Vision Data Scientist. A PhD is preferred.
Job posted by
Tarun Gulyani

Senior Data Scientist

at Dataweave Pvt Ltd

Founded 2011  •  Products & Services  •  100-1000 employees  •  Raised funding
Machine Learning (ML)
Python
Data Science
Natural Language Processing (NLP)
Deep Learning
Statistical Modeling
Image processing
Bengaluru (Bangalore)
6 - 10 yrs
Best in industry
About us
DataWeave provides Retailers and Brands with “Competitive Intelligence as a Service” that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.

Data Science @ DataWeave
We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are: innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org, applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.

How we work?
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web. At serious scale!

What do we offer?
- Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
- Ability to see the impact of your work and the value you're adding to our customers almost immediately.
- Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
- A culture of openness. Fun work environment. A flat hierarchy. Organization wide visibility. Flexible working hours.
- Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
- Last but not the least, competitive salary packages and fast paced growth opportunities.

Who are we looking for?
The ideal candidate is a strong software developer or a researcher with experience building and shipping production-grade data science applications at scale. Such a candidate has a keen interest in liaising with the business and product teams to understand a business problem and translate it into a data science problem. You are also expected to develop capabilities that open up new business productization opportunities.


We are looking for someone with 6+ years of relevant experience working on problems in NLP or Computer Vision with a Master's degree (PhD preferred).


Key problem areas
- Preprocessing and feature extraction for noisy and unstructured data -- both text and images.
- Keyphrase extraction, sequence labeling, entity relationship mining from texts in different domains.
- Document clustering, attribute tagging, data normalization, classification, summarization, sentiment analysis (a small illustrative sketch follows this list).
- Image based clustering and classification, segmentation, object detection, extracting text from images, generative models, recommender systems.
- Ensemble approaches for all the above problems using multiple text and image based techniques.
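A minimal, illustrative sketch of the document-clustering problem area above (not part of the posting); the product titles are invented, and TF-IDF + KMeans is just one of many reasonable approaches:

    # Minimal sketch: clustering short product titles with TF-IDF + KMeans (illustrative only).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    # Assumed toy titles; real inputs would be noisy text aggregated from the web.
    titles = [
        "apple iphone 13 128gb blue",
        "iphone 13 (128 gb, blue) by apple",
        "samsung 55 inch 4k smart tv",
        "samsung 55in uhd 4k television",
    ]

    X = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(titles)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    for title, label in zip(titles, labels):
        print(label, title)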

Relevant set of skills
- Have a strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms and complexity.
- Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
- Excellent coding skills on multiple programming languages with experience building production grade systems. Prior experience with Python is a bonus.
- Experience building and shipping machine learning models that solve real world engineering problems. Prior experience with deep learning is a bonus.
- Experience building robust clustering and classification models on unstructured data (text, images, etc). Experience working with Retail domain data is a bonus.
- Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
- Experience working with a variety of tools and libraries for machine learning and visualization, including numpy, matplotlib, scikit-learn, Keras, PyTorch, Tensorflow.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Be a self-starter—someone who thrives in fast paced environments with minimal ‘management’.
- It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show off some of your projects you have hosted on GitHub.

Role and responsibilities
- Understand the business problems we are solving. Build data science capabilities that align with our product strategy.
- Conduct research. Do experiments. Quickly build throw away prototypes to solve problems pertaining to the Retail domain.
- Build robust clustering and classification models in an iterative manner that can be used in production.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Take end to end ownership of the projects you are working on. Work with minimal supervision.
- Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
- Take initiatives to build new capabilities. Develop business awareness. Explore productization opportunities.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
- Stay on top of latest research in deep learning, NLP, Computer Vision, and other relevant areas.
Job posted by
Sanket Patil