Data Scientist - Product Development

at RS Consultants

Posted by Rahul Inamdar
Pune
4 - 6 yrs
₹18L - ₹30L / yr
Full time
Skills
Python
Amazon Web Services (AWS)
Machine Learning (ML)
Data Science
Java
Airflow
Adobe PageMaker
Keras

Employment Type: Full Time, Permanent

Experience: 3-5 years as a full-time Data Scientist

Job Description:

We are looking for an exceptional Data Scientist who is passionate about data and motivated to build large-scale machine learning solutions that enhance our data products. This person will contribute to the analysis of data for insight discovery and to the development of machine learning pipelines that support modeling of terabytes (TB) of data daily for various use cases.

 

Location: Pune (currently remote due to the pandemic; relocation will be required later)

About the Organization: A funded product development company headquartered in Singapore, with offices in Australia, the United States, Germany, the United Kingdom, and India. You will gain work experience in a global environment.

 

Candidate Profile:

  • 3+ years of relevant work experience
  • Master's or Bachelor's degree in Computer Science or Engineering
  • Working knowledge of Python, Spark / Pyspark, SQL
  • Experience working with large-scale data
  • Experience in data manipulation, analytics, visualization, model building, model deployment
  • Proficiency with various ML algorithms for supervised and unsupervised learning
  • Experience working in Agile/Lean model
  • Exposure to building large-scale ML models using one or more modern tools and libraries such as AWS SageMaker, Spark MLlib, TensorFlow, PyTorch, Keras, or the GCP ML stack
  • Exposure to MLOps tools such as MLflow, Airflow
  • Exposure to modern Big Data tech such as Cassandra/Scylla, Snowflake, Kafka, Ceph, Hadoop
  • Exposure to IAAS platforms such as AWS, GCP, Azure
  • Experience with Java and Golang is a plus
  • Experience with BI toolkit such as Superset, Tableau, Quicksight, etc is a plus

 

Note: We are looking for someone who can join immediately or within a month, has experience with product development companies, and has dealt with streaming data. Experience working in a product development team is desirable. AWS experience is a must. Strong experience in Python and its related libraries is required.

About RS Consultants

Solutions for talent acquisition, human resources, and payroll outsourcing.

Our clients are funded software product development companies.
Founded
2010
Type
Services
Size
20-100 employees
Stage
Profitable

Similar jobs

Data Scientist

at A stealth mode realty tech start-up

Agency job
via Qrata
Data Science
Natural Language Processing (NLP)
R Programming
Python
SQL
Algorithms
API
Bengaluru (Bangalore)
1 - 5 yrs
₹10L - ₹32L / yr
1. A good understanding of the fundamentals of data science/algorithms or software engineering
2. Preferably should have done a project or internship related to the field
3. Knowledge of SQL is a plus
4. A deep desire to learn new things and be a part of a vibrant start-up
5. You will have a lot of free rein, and this comes with immense responsibility - so it is expected that you will be willing to master new things that come along!

Job Description:
1. Design and build a pipeline to train models for NLP problems like Classification and NER
2. Develop APIs that showcase our models' capabilities and enable third-party integrations
3. Work across a microservices architecture that processes thousands of documents per day.
Job posted by
Prajakta Kulkarni

Data Engineer

at Inviz Ai Solutions Private Limited

Founded 2019  •  Products & Services  •  20-100 employees  •  Profitable
Python
PySpark
Scala
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Windows Azure
Apache Hive
Hadoop
Relational Database (RDBMS)
SQL
Big Data
Apache Kafka
JSON
Data flow
Bengaluru (Bangalore)
2 - 8 yrs
₹15L - ₹40L / yr

InViz is a Bangalore-based startup helping enterprises simplify the search and discovery experiences for both their end customers and their internal users. We use state-of-the-art technologies in Computer Vision, Natural Language Processing, Text Mining, and other ML techniques to extract information and concepts from data in different formats (text, images, videos) and make them easily discoverable through simple, human-friendly touchpoints.

 

Experience: 2-8 years 

Responsibility: 

  • The person will be responsible for leading the development and implementation of advanced analytical approaches across a variety of projects, domains, and solutions. 
  • Should have a mix of analytical and technical skills: someone who can work with business requirements and develop them into a usable and scalable solution. 
  • Should fully understand the value proposition of data mining and analytical methods. 
  • Should be able to oversee the maintenance and enhancement of existing models, algorithms, and processes, as well as oversee the development and maintenance of code and process documentation.

Required Skillset: 

  • Good hands-on experience with Kafka, Hive, Airflow, shell scripting, and NoSQL databases 
  • Good exposure to RDBMS and SQL. 
  • Should have skills in data ingestion, transformation, staging, and storing of data, and analysis of data in Parquet, Avro, JSON, and other formats. 
  • Experience in building and optimizing “big data” data pipelines, architectures, and data sets. 
  • Good hands-on experience with Python/PySpark/Scala-Spark. Exposure to data science libraries is a plus. 
  • Good experience with BigQuery, Dataflow, Pub/Sub, Composer, and Cloud Functions.
  • Ability to create custom software components (e.g., specialized UDFs) and analytics applications. 
  • Hands-on experience with statistical methods such as regression, logistic regression, decision trees, random forests, and other segmentation & clustering methods. 
  • Ability to build high-performance algorithms, prototypes, predictive models, and proofs of concept.
  • Experience in the e-commerce domain is a plus.
Job posted by
Shridhar Nayak

Data Scientist

at Kwalee

Founded 2011  •  Product  •  100-500 employees  •  Profitable
Data Science
R Programming
Python
Bengaluru (Bangalore)
3 - 15 yrs
Best in industry

Kwalee is one of the world’s leading multiplatform game publishers and developers, with well over 750 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Traffic Cop 3D and Makeover Studio 3D. Alongside this, we also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope and Die by the Blade. 

With a team of talented people collaborating daily between our studios in Leamington Spa, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, the Philippines and many more places, we have a truly global team making games for a global audience. And it’s paying off: Kwalee games have been downloaded in every country on earth! If you think you’re a good fit for one of our remote vacancies, we want to hear from you wherever you are based.

Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters for many years, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts. Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle. Could your idea be the next global hit?

What’s the job?

As a Data Scientist you will help utilise masses of data generated by Kwalee players all over the world to solve complex problems using cutting-edge techniques.

What you tell your friends you do 

“My models optimise the performance of Kwalee games and advertising every day!”

What you will really be doing 

  • Building intelligent systems which generate value from the data which our players and marketing activities produce.
  • Leveraging statistical modelling and machine learning techniques to perform automated decision making on a large scale.
  • Developing complex, multi-faceted and highly valuable data products which fuel the growth of Kwalee and our games.
  • Owning and managing data science projects from concept to deployment.
  • Collaborating with key stakeholders across the company to develop new products and avenues of research.

How you will be doing this

  • You’ll be part of an agile, multidisciplinary and creative team and work closely with them to ensure the best results.
  • You'll think creatively, be motivated by challenges, and constantly strive for the best.
  • You’ll work with cutting-edge technology; if you need software or hardware to get the job done efficiently, you will get it. We even have a robot!

Team

Our talented team is our signature. We have a highly creative atmosphere with more than 200 staff where you’ll have the opportunity to contribute daily to important decisions. You’ll work within an extremely experienced, passionate and diverse team, including David Darling and the creator of the Micro Machines video games.

Skills and Requirements

  • A degree in a numerically focused discipline such as Maths, Physics, Economics, Chemistry, Engineering, or Biological Sciences
  • A record of outstanding contribution to data science projects.
  • Experience using Python for data analysis and visualisation.
  • A good understanding of a deep learning framework such as Tensorflow.
  • Experience manipulating data in SQL and/or NoSQL databases

We offer

  • We want everyone involved in our games to share our success, that’s why we have a generous team profit sharing scheme from day 1 of employment
  • In addition to a competitive salary we also offer private medical cover and life assurance
  • Creative Wednesdays! (Design and make your own games every Wednesday)
  • 20 days of paid holidays plus bank holidays 
  • Hybrid model available depending on the department and the role
  • Relocation support available 
  • Great work-life balance with flexible working hours
  • Quarterly team building days - work hard, play hard!
  • Monthly employee awards
  • Free snacks, fruit and drinks

Our philosophy

We firmly believe in creativity and innovation and that a fundamental requirement for a successful and happy company is having the right mix of individuals. With the right people in the right environment anything and everything is possible.

Kwalee makes games to bring people, their stories, and their interests together. As an employer, we’re dedicated to making sure that everyone can thrive within our team by welcoming and supporting people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances. With the inclusion of diverse voices in our teams, we bring plenty to the table that’s fresh, fun and exciting; it makes for a better environment and helps us to create better games for everyone! This is how we move forward as a company – because these voices are the difference that make all the difference.

Job posted by
Michael Hoppitt

Data Science Core Developer

at Kwalee

Founded 2011  •  Product  •  100-500 employees  •  Profitable
Data Analytics
Data Science
Python
NOSQL Databases
SQL
Bengaluru (Bangalore)
3 - 15 yrs
Best in industry

Kwalee is one of the world’s leading multiplatform game publishers and developers, with well over 750 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Traffic Cop 3D and Makeover Studio 3D. Alongside this, we also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope and Die by the Blade. 

With a team of talented people collaborating daily between our studios in Leamington Spa, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, the Philippines and many more places, we have a truly global team making games for a global audience. And it’s paying off: Kwalee games have been downloaded in every country on earth! If you think you’re a good fit for one of our remote vacancies, we want to hear from you wherever you are based.

Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters for many years, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts. Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle. Could your idea be the next global hit?

What’s the job?

As a Data Science Core Developer you will build tools and develop technology that deliver data science products to a team of strategists, marketing experts and game developers.


What you will be doing

  • Create analytical tools, from simple scripts to full stack applications.
  • Develop successful prototype tools into highly tested automated programs
  • Work with the marketing, publishing and development teams to understand the problems they are facing, how to solve them and deliver products that are understandable to non-data scientists
  • Solve challenging data management and data flow problems to fuel Kwalee’s analysis


How you will be doing this

  • You’ll be part of an agile, multidisciplinary and creative team and work closely with them to ensure the best results.
  • You'll think creatively, be motivated by challenges, and constantly strive for the best.
  • You’ll work with cutting-edge technology; if you need software or hardware to get the job done efficiently, you will get it. We even have a robot!


Team

Our talented team is our signature. We have a highly creative atmosphere with more than 200 staff where you’ll have the opportunity to contribute daily to important decisions. You’ll work within an extremely experienced, passionate and diverse team, including David Darling and the creator of the Micro Machines video games.


Skills and Requirements

  • A proven track record of writing high quality program code in Python
  • Experience with machine learning python frameworks and libraries such as Tensorflow and Scikit-Learn
  • The ability to write quick scripts to accelerate manual tasks
  • Knowledge of NoSQL and SQL databases like Couchbase, Elasticsearch, and PostgreSQL will be helpful but not necessary
  • An avid interest in the development, marketing and monetisation of mobile games


We offer

  • We want everyone involved in our games to share our success, that’s why we have a generous team profit sharing scheme from day 1 of employment
  • In addition to a competitive salary we also offer private medical cover and life assurance
  • Creative Wednesdays! (Design and make your own games every Wednesday)
  • 20 days of paid holidays plus bank holidays 
  • Hybrid model available depending on the department and the role
  • Relocation support available 
  • Great work-life balance with flexible working hours
  • Quarterly team building days - work hard, play hard!
  • Monthly employee awards
  • Free snacks, fruit and drinks


Our philosophy

We firmly believe in creativity and innovation and that a fundamental requirement for a successful and happy company is having the right mix of individuals. With the right people in the right environment anything and everything is possible.

Kwalee makes games to bring people, their stories, and their interests together. As an employer, we’re dedicated to making sure that everyone can thrive within our team by welcoming and supporting people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances. With the inclusion of diverse voices in our teams, we bring plenty to the table that’s fresh, fun and exciting; it makes for a better environment and helps us to create better games for everyone! This is how we move forward as a company – because these voices are the difference that make all the difference.

Job posted by
Michael Hoppitt

Senior Software Engineer

at GroundTruth

Founded 2009  •  Product  •  100-500 employees  •  Profitable
PySpark
Data engineering
Big Data
Hadoop
Spark
Amazon Web Services (AWS)
Python
Data Structures
Remote only
7 - 12 yrs
₹15L - ₹32L / yr

You will:

  • Create highly scalable AWS micro-services utilizing cutting edge cloud technologies.
  • Design and develop Big Data pipelines handling huge geospatial data.
  • Bring clarity to large complex technical challenges.
  • Collaborate with Engineering leadership to help drive technical strategy.
  • Scope, plan, and estimate projects.
  • Mentor and coach team members at different levels of experience.
  • Participate in peer code reviews and technical meetings.
  • Cultivate a culture of engineering excellence.
  • Seek, implement and adhere to standards, frameworks and best practices in the industry.
  • Participate in on-call rotation.

You have:

  • Bachelor’s/Master’s degree in computer science, computer engineering or relevant field.
  • 5+ years of experience in software design, architecture and development.
  • 5+ years of experience using object-oriented languages (Java, Python).
  • Strong experience with Big Data technologies like Hadoop, Spark, MapReduce, Kafka, etc.
  • Strong experience in working with different AWS technologies.
  • Excellent competencies in data structures & algorithms.

Nice to have:

  • Proven track record of delivering large scale projects, and an ability to break down large tasks into smaller deliverable chunks
  • Experience in developing high throughput low latency backend services
  • Affinity to spatial data structures and algorithms.
  • Familiarity with Postgres DB, Google Places or Mapbox APIs

What we offer

At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.

  • Unlimited Paid Time Off
  • In Office Daily Catered Lunch
  • Fully stocked snacks/beverages
  • 401(k) employer match
  • Health coverage including medical, dental, vision and option for HSA or FSA
  • Generous parental leave
  • Company-wide DEIB Committee
  • Inclusion Academy Seminars
  • Wellness/Gym Reimbursement
  • Pet Expense Reimbursement
  • Company-wide Volunteer Day
  • Education reimbursement program
  • Cell phone reimbursement
  • Equity Analysis to ensure fair pay
Job posted by
Priti Singh

Data Architect

at Hypersonix Inc

Founded 2018  •  Product  •  100-500 employees  •  Profitable
Big Data
Data Warehouse (DWH)
Apache Kafka
Spark
Hadoop
Data engineering
Artificial Intelligence (AI)
Machine Learning (ML)
Data Structures
Data modeling
Data wrangling
Data integration
Data-driven testing
Database performance tuning
Apache Storm
Python
Scala
SQL
Amazon Web Services (AWS)
SQL Azure
Kafka
Databricks
Flink
Druid
Airflow
Luigi
NiFi
Talend
Bengaluru (Bangalore)
10 - 15 yrs
₹15L - ₹20L / yr
Hypersonix.ai is disrupting the Business Intelligence and Analytics space with AI, ML, and NLP capabilities to drive specific business insights with a conversational user experience. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers in Restaurants, Hospitality, and other industry verticals.

Hypersonix.ai is seeking a Data Evangelist who can work closely with customers to understand the data sources, acquire data and drive product success by delivering insights based on customer needs.

Primary Responsibilities :

- Lead and deliver complete application lifecycle design, development, deployment, and support for actionable BI and Advanced Analytics solutions

- Design and develop data models and ETL process for structured and unstructured data that is distributed across multiple Cloud platforms

- Develop and deliver solutions with data streaming capabilities for a large volume of data

- Design, code and maintain parts of the product and drive customer adoption

- Build data acquisition strategy to onboard customer data with speed and accuracy

- Work both independently and with team members to develop, refine, implement, and scale ETL processes

- Provide ongoing support and maintenance for live clients' data and analytics needs

- Define the data automation architecture to drive self-service data load capabilities

Required Qualifications :

- Bachelors/Masters/Ph.D. in Computer Science, Information Systems, Data Science, Artificial Intelligence, Machine Learning or related disciplines

- 10+ years of experience guiding the development and implementation of Data architecture in structured, unstructured, and semi-structured data environments.

- Highly proficient in Big Data, data architecture, data modeling, data warehousing, data wrangling, data integration, data testing and application performance tuning

- Experience with data engineering tools and platforms such as Kafka, Spark, Databricks, Flink, Storm, Druid and Hadoop

- Strong hands-on programming and scripting skills for the Big Data ecosystem (Python, Scala, Spark, etc.)

- Experience building batch and streaming ETL data pipelines using workflow management tools like Airflow, Luigi, NiFi, Talend, etc.

- Familiarity with cloud-based platforms like AWS, Azure or GCP

- Experience with cloud data warehouses like Redshift and Snowflake

- Proficient in writing complex SQL queries.

- Excellent communication skills and prior experience of working closely with customers

- Data-savvy, with a love for understanding large data trends and an obsession with data analysis

- Desire to learn about, explore, and invent new tools for solving real-world problems using data

Desired Qualifications :

- Cloud computing experience, Amazon Web Services (AWS)

- Prior experience in Data Warehousing concepts, multi-dimensional data models

- Full command of Analytics concepts including Dimension, KPI, Reports & Dashboards

- Prior experience in managing client implementation of Analytics projects

- Knowledge and prior experience of using machine learning tools
Job posted by
Gowshini Maheswaran
ETL
Data Warehouse (DWH)
ETL Developer
Relational Database (RDBMS)
Spark
Hadoop
SQL server
SSIS
ADF
Python
Java
talend
Azure Data Factory
Bengaluru (Bangalore)
5 - 8 yrs
₹8L - ₹13L / yr

Minimum of 4 years' experience working on DW/ETL projects and expert hands-on working knowledge of ETL tools.

Experience with Data Management & data warehouse development

Star schemas, Data Vaults, RDBMS, and ODS

Change Data capture

Slowly changing dimensions

Data governance

Data quality

Partitioning and tuning

Data Stewardship

Survivorship

Fuzzy Matching

Concurrency

Vertical and horizontal scaling

ELT, ETL

Spark, Hadoop, MPP, RDBMS

Experience with Dev/OPS architecture, implementation and operation

Hands-on working knowledge of Unix/Linux

Building complex SQL queries. Expert SQL and data analysis skills, with the ability to debug and fix data issues.

Complex ETL program design coding

Experience in Shell Scripting, Batch Scripting.

Good communication (oral & written) and inter-personal skills

Work closely with business teams to understand their business needs and participate in requirements gathering, creating artifacts and seeking business approval.

Help the business define new requirements. Participate in end-user meetings to derive and define business requirements, propose cost-effective solutions for data analytics, and familiarize the team with customer needs, specifications, design targets, and techniques to support task performance and delivery.

Propose sound designs and solutions, adhering to design best practices and standards.

Review and propose industry-best tools and technologies for ever-changing business rules and data sets. Conduct proofs of concept (POCs) with new tools and technologies to derive convincing benchmarks.

Prepare the plan, design and document the architecture, High-Level Topology Design, Functional Design, and review the same with customer IT managers and provide detailed knowledge to the development team to familiarize them with customer requirements, specifications, design standards and techniques.

Review code developed by other programmers, mentor, guide and monitor their work ensuring adherence to programming and documentation policies.

Work with functional business analysts to ensure that application programs are functioning as defined. 

Capture user-feedback/comments on the delivered systems and document it for the client and project manager’s review. Review all deliverables before final delivery to client for quality adherence.

Technologies (Select based on requirement)

Databases - Oracle, Teradata, Postgres, SQL Server, Big Data, Snowflake, or Redshift

Tools – Talend, Informatica, SSIS, Matillion, Glue, or Azure Data Factory

Utilities for bulk loading and extracting

Languages – SQL, PL-SQL, T-SQL, Python, Java, or Scala

J/ODBC, JSON

Data Virtualization Data services development

Service Delivery - REST, Web Services

Data Virtualization Delivery – Denodo

 

ELT, ETL

Cloud certification Azure

Complex SQL Queries

 

Data Ingestion, Data Modeling (Domain), Consumption (RDBMS)
Job posted by
Jerrin Thomas

Data Scientist 1

at Global internet of things connected solutions provider(H1)

Agency job
via Multi Recruit
Data Science
Computer Vision
Machine Learning (ML)
Bengaluru (Bangalore)
2 - 5 yrs
₹8L - ₹10L / yr
  • Work individually or as part of a team on data science projects; work closely with lines of business to understand business problems and translate them into identifiable machine learning problems that can be delivered as technical solutions.
  • Build quick prototypes to check feasibility and value to the business.
  • Design, train, and deploy neural networks for computer vision and machine learning-related problems.
  • Perform various complex activities related to statistical/machine learning.
  • Coordinate with business teams to provide analytical support for developing, evaluating, implementing, monitoring, and executing models.
  • Collaborate with technology teams to deploy the models to production.

 

Key Criteria:

  • 2+ years of experience in solving complex business problems using machine learning.
  • Understanding and modeling experience in supervised, unsupervised, and deep learning models; hands-on knowledge of data wrangling, data cleaning/ preparation, dimensionality reduction is required.
  • Experience in Computer Vision/Image Processing/Pattern Recognition, Machine Learning, Deep Learning, or Artificial Intelligence.
  • Understanding of deep learning architectures like InceptionNet, VGGNet, FaceNet, YOLO, SSD, R-CNN, Mask R-CNN, and ResNet.
  • Experience with one or more deep learning frameworks e.g., TensorFlow, PyTorch.
  • Knowledge of vector algebra, statistical and probabilistic modeling is desirable.
  • Proficiency in programming skills involving Python, C/C++, and Python Data Science Stack (NumPy, SciPy, Pandas, Scikit-learn, Jupyter, IPython).
  • Experience working with Amazon SageMaker or Azure ML Studio for deployments is a plus.
  • Experience in data visualization software such as Tableau, ELK, etc is a plus.
  • Strong analytical, critical thinking, and problem-solving skills.
Qualifications:
  • B.E/ B.Tech./ M. E/ M. Tech in Computer Science, Applied Mathematics, Statistics, Data Science, or related Engineering field.
  • Minimum 60% in Graduation or Post-Graduation
  • Great interpersonal and communication skills
Job posted by
Santhosh Kumar KR

Deep Learning Computer Vision Data Scientist

at Number Theory

Founded 2016  •  Product  •  20-100 employees  •  Raised funding
Deep Learning
OpenCV
Data Science
Keras
TensorFlow
pytorch
NCR (Delhi | Gurgaon | Noida)
1 - 7 yrs
₹20L - ₹35L / yr
We are looking for a smart Deep Learning / Computer Vision Data Scientist. A PhD is preferable.
Job posted by
Tarun Gulyani

Data Scientist

at Pluto Seven Business Solutions Pvt Ltd

Founded 2017  •  Products & Services  •  20-100 employees  •  Raised funding
Statistical Modeling
Data Science
TensorFlow
Python
Machine Learning (ML)
Deep Learning
Data Analytics
Google Cloud Storage
Scikit-Learn
Regression analysis
Bengaluru (Bangalore)
2 - 7 yrs
₹4L - ₹20L / yr
Data Scientist: Pluto7 is a services and solutions company focused on building ML, AI, analytics, and IoT tailored solutions to accelerate business transformation. We are a Premier Google Cloud Partner servicing the Retail, Manufacturing, Healthcare, and Hi-Tech industries. We are a Google premium partner in AI & ML, which means you'll have the opportunity to work and collaborate with folks from Google. Are you an innovator with a passion for working with data and finding insights, and an inquisitive mind with a constant yearning to learn new ideas? Then we are looking for you. As a Pluto7 Data Scientist, you will be one of the key members of our innovative artificial intelligence and machine learning team. You are expected to be unfazed by large volumes of data, love to apply various models, and use technology to process and filter data for analysis.

Responsibilities:

  • Build and optimize machine learning models.
  • Work with large/complex datasets to solve difficult and non-routine analysis problems, applying advanced analytical methods as needed.
  • Build and prototype data pipelines for analysis at scale.
  • Work cross-functionally with Business Analysts and Data Engineers to help develop cutting-edge and innovative artificial intelligence and machine learning models.
  • Make recommendations on the selection of machine learning models.
  • Drive accuracy levels of the given ML models to the next stage.
  • Experience in developing visualisations.
  • Good exposure to exploratory data analysis.
  • Strong experience in statistics and ML algorithms.

Minimum qualifications:

  • 2+ years of relevant work experience in ML and advanced data analytics (e.g., as a Machine Learning Specialist / Data Scientist).
  • Strong experience using machine learning and artificial intelligence frameworks such as TensorFlow, scikit-learn, and Keras with Python.
  • Good programming skills in Python/R/SAS.
  • Understanding of cloud platforms like GCP, AWS, or others.

Preferred qualifications:

  • Work experience in building data pipelines to ingest, cleanse, and transform data.
  • Applied experience with machine learning on large datasets and experience translating analysis results into business recommendations.
  • Demonstrated skill in selecting the right statistical tools for a given data analysis problem.
  • Demonstrated effective written and verbal communication skills.
  • Demonstrated willingness to both teach others and learn new techniques.

Work location: Bangalore
Job posted by
Sindhu Narayan