Data Scientist - Product Development

at RS Consultants

Posted by Rahul Inamdar
Pune
4 - 6 yrs
₹18L - ₹30L / yr
Full time
Skills
Python
Amazon Web Services (AWS)
Machine Learning (ML)
Data Science
Java
Airflow
Adobe PageMaker
Keras

Data Scientist - Product Development

Employment Type: Full Time, Permanent

Experience: 3-5 Years as a Full Time Data Scientist

Job Description:

We are looking for an exceptional Data Scientist who is passionate about data and motivated to build large-scale machine learning solutions that enhance our data products. This person will contribute to the analysis of data for insight discovery and to the development of machine learning pipelines that support modeling of terabytes (TB) of daily data for various use cases.

 

Location: Pune (currently remote during the pandemic; you will need to relocate later)

About the Organization: A funded product development company headquartered in Singapore, with offices in Australia, the United States, Germany, the United Kingdom and India. You will gain work experience in a global environment.

 

Candidate Profile:

  • 3+ years of relevant working experience
  • Master’s / Bachelor’s degree in computer science or engineering
  • Working knowledge of Python, Spark / PySpark, SQL
  • Experience working with large-scale data
  • Experience in data manipulation, analytics, visualization, model building, model deployment
  • Proficiency with various ML algorithms for supervised and unsupervised learning
  • Experience working in an Agile/Lean model
  • Exposure to building large-scale ML models using one or more modern tools and libraries such as AWS SageMaker, Spark MLlib, TensorFlow, PyTorch, Keras, GCP ML Stack
  • Exposure to MLOps tools such as MLflow, Airflow
  • Exposure to modern Big Data tech such as Cassandra/Scylla, Snowflake, Kafka, Ceph, Hadoop
  • Exposure to IaaS platforms such as AWS, GCP, Azure
  • Experience with Java and Golang is a plus
  • Experience with BI toolkits such as Superset, Tableau, QuickSight, etc. is a plus

 

Note: We are looking for someone who can join immediately or within a month, has experience with product development companies, and has dealt with streaming data. Experience working in a product development team is desirable. AWS experience is a must. Strong experience in Python and its related libraries is required.


About RS Consultants

Solutions for Talent Acquisition, Human Resource and Payroll Outsourcing.

Our clients are funded software product development companies.
Founded 2010  •  Services  •  20-100 employees  •  Profitable

Similar jobs

Senior Data Scientist

at Top 3 Fintech Startup

Agency job
via Jobdost
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
Python
DevOps
SQL
Git
Amazon Web Services (AWS)
PySpark
Postman
Bengaluru (Bangalore)
4 - 7 yrs
₹11L - ₹17L / yr
Responsible for leading a team of analysts to build and deploy predictive models that infuse core business functions with deep analytical insights. The Senior Data Scientist will also work closely with the Kinara management team to investigate strategically important business questions.

Lead a team through the entire analytical and machine learning model life cycle:

  • Define the problem statement
  • Build and clean datasets
  • Exploratory data analysis
  • Feature engineering
  • Apply ML algorithms and assess the performance
  • Code for deployment
  • Code testing and troubleshooting
  • Communicate analysis to stakeholders
  • Manage Data Analysts and Data Scientists
Job posted by
Sathish Kumar

Cloud Data Engineer

at Intuitive Technology Partners

OLTP
data ops
cloud data
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Windows Azure
PySpark
ETL
Scala
CI/CD
Data-flow analysis
Remote only
9 - 20 yrs
Best in industry

THE ROLE: Sr. Cloud Data Infrastructure Engineer

As a Sr. Cloud Data Infrastructure Engineer with Intuitive, you will be responsible for building data pipelines, or converting them from legacy environments to modern cloud environments, to support the analytics and data science initiatives across our enterprise customers. You will work closely with SMEs in Data Engineering and Cloud Engineering to create solutions and extend Intuitive's DataOps Engineering projects and initiatives. The Sr. Cloud Data Infrastructure Engineer plays a central, critical role in establishing DataOps/DataX data logistics and management: building data pipelines, enforcing best practices, owning the construction of complex and performant Data Lake environments, and working closely with Cloud Infrastructure Architects and DevSecOps automation teams. The Sr. Cloud Data Infrastructure Engineer is the main point of contact for all things related to Data Lake formation and data at scale. In this role, we expect our DataOps leaders to be obsessed with data and with providing insights that help our end customers.

ROLES & RESPONSIBILITIES:

  • Design, develop, implement, and tune large-scale distributed systems and pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built.
  • Develop scalable and reusable frameworks for ingesting large data volumes from multiple sources.
  • Modern data orchestration engineering: query tuning, performance tuning, troubleshooting, and debugging of big data solutions.
  • Provide technical leadership, foster a team environment, and provide mentorship and feedback to technical resources.
  • Bring a deep understanding of ETL/ELT design methodologies, patterns, personas, strategy, and tactics for complex data transformations.
  • Perform data processing/transformation using various technologies such as Spark and cloud services.
  • Understand current data engineering pipelines built with legacy SAS tools and convert them to modern pipelines.

 

Data Infrastructure Engineer Strategy Objectives: End-to-End Strategy

Define how data is acquired, stored, processed, distributed, and consumed.
Collaborate and share responsibility across disciplines as partners in delivery, progressing our maturity model in the end-to-end data practice.

  • Understanding and experience with modern cloud data orchestration and engineering for one or more of the following cloud providers - AWS, Azure, GCP.
  • Leading multiple engagements to design and develop data logistic patterns to support data solutions using data modeling techniques (such as file based, normalized or denormalized, star schemas, schema on read, Vault data model, graphs) for mixed workloads, such as OLTP, OLAP, streaming using any formats (structured, semi-structured, unstructured).
  • Applying leadership and proven experience with architecting and designing data implementation patterns and engineered solutions using native cloud capabilities that span data ingestion & integration (ingress and egress), data storage (raw & cleansed), data prep & processing, master & reference data management, data virtualization & semantic layer, data consumption & visualization.
  • Implementing cloud data solutions in the context of business applications, cost optimization, client's strategic needs and future growth goals as it relates to becoming a 'data driven' organization.
  • Applying and creating leading practices that support high availability, scalable, process and storage intensive solutions architectures to data integration/migration, analytics and insights, AI, and ML requirements.
  • Applying leadership and review to create high quality detailed documentation related to cloud data Engineering.
  • Understanding of one or more is a big plus - CI/CD, cloud DevOps, containers (Kubernetes/Docker, etc.), Python/PySpark/JavaScript.
  • Implementing cloud data orchestration and data integration patterns (AWS Glue, Azure Data Factory, Event Hub, Databricks, etc.), storage and processing (Redshift, Azure Synapse, BigQuery, Snowflake).
  • Possessing certification(s) in one of the following is a big plus - AWS/Azure/GCP data engineering and migration.

 

 

KEY REQUIREMENTS:

  • 10+ years’ experience as a data engineer.
  • Must have 5+ years implementing data engineering solutions with multiple cloud providers and toolsets.
  • This is a hands-on role building data pipelines using cloud-native and partner solutions. Hands-on technical experience with data at scale.
  • Must have deep expertise in one of the programming languages for data processing (Python, Scala). Experience with Python, PySpark, Hadoop, Hive and/or Spark to write data pipelines and data processing layers.
  • Must have worked with multiple database technologies and patterns. Good SQL experience for writing complex SQL transformations.
  • Performance tuning of Spark SQL running on S3/Data Lake/Delta Lake storage, and strong knowledge of Databricks and cluster configurations.
  • Nice to have: Databricks administration, including the security and infrastructure features of Databricks.
  • Experience with development tools for CI/CD, unit and integration testing, automation and orchestration.
Job posted by
shalu Jain

Sr. Database Developer

at AppsTek Corp

Agency job
via Venaatics Consulting
Data management
Data modeling
PostgreSQL
SQL
MySQL
NOSQL Databases
Spark
Airflow
Gurugram, Chennai
6 - 10 yrs
Best in industry

Function : Sr. DB Developer

Location: India - Gurgaon / Tamil Nadu

 

>> THE INDIVIDUAL

  • Have a strong background in data platform creation and management.
  • Possess in-depth knowledge of Data Management, Data Modelling and Ingestion - able to develop data models and ingestion frameworks based on client requirements and advise on system optimization.
  • Hands-on experience in SQL databases (PostgreSQL) and No-SQL databases (MongoDB)
  • Hands-on experience in performance tuning of DBs
  • Good to have knowledge of database setup in cluster nodes
  • Should be well versed in data security aspects and data governance frameworks
  • Hands-on experience in Spark, Airflow, ELK.
  • Good to have knowledge of a data cleansing tool like Apache Griffin
  • Preferably gets involved during project implementation, so as to build background business knowledge as well as an understanding of technical requirements.
  • Strong analytical and problem-solving skills. Exposure to data analytics skills and knowledge of advanced data analytical tools will be an advantage.
  • Strong written and verbal communication skills (presentation skills).
  • Certifications in the above technologies are preferred.

 

>> Qualification

 

  1. B.Tech / B.E. / MCA / M.Tech from a reputed institute.

More than 4 years of experience in Data Management, Data Modelling and Ingestion. Total experience of 8-10 years.

Job posted by
Mastanvali Shaik

Data/Sr. Data Scientist

at Antuit

Founded 2013  •  Product  •  100-500 employees  •  Profitable
Data Science
Machine Learning (ML)
Artificial Intelligence (AI)
Python
Algorithms
Linear regression
Logistic regression
Time series
PySpark
Bengaluru (Bangalore)
4 - 7 yrs
₹15L - ₹20L / yr

About antuit.ai

 

Antuit.ai is the leader in AI-powered SaaS solutions for Demand Forecasting & Planning, Merchandising and Pricing. We have the industry’s first solution portfolio – powered by Artificial Intelligence and Machine Learning – that can help you digitally transform your Forecasting, Assortment, Pricing, and Personalization solutions. World-class retailers and consumer goods manufacturers leverage antuit.ai solutions, at scale, to drive outsized business results globally with higher sales, margin and sell-through.

 

Antuit.ai’s executives, comprised of industry leaders from McKinsey, Accenture, IBM, and SAS, and our team of Ph.Ds., data scientists, technologists, and domain experts, are passionate about delivering real value to our clients. Antuit.ai is funded by Goldman Sachs and Zodius Capital.

 

The Role:

 

Antuit is looking for a Data / Sr. Data Scientist who has knowledge of and experience in developing machine learning algorithms, particularly in the supply chain and forecasting domain, with data science toolkits like Python.

 

In this role, you will design the approach, develop and test machine learning algorithms, and implement the solution. The candidate should have excellent communication skills and be results-driven with a customer-centric approach to problem solving. Experience working in the demand forecasting or supply chain domain is a plus. This job also requires the ability to operate in a multi-geographic delivery environment and a good understanding of cross-cultural sensitivities.

 

Responsibilities:

 

Responsibilities include, but are not limited to, the following:

 

  • Design, build, test, and implement predictive Machine Learning models.
  • Collaborate with client to align business requirements with data science systems and process solutions that ensure client’s overall objectives are met.
  • Create meaningful presentations and analysis that tell a “story” focused on insights, to communicate the results/ideas to key decision makers.
  • Collaborate cross-functionally with domain experts to identify gaps and structural problems.
  • Contribute to standard business processes and practices as part of a community of practice.
  • Be the subject matter expert across multiple work streams and clients.
  • Mentor and coach team members.
  • Set a clear vision for the team members and work cohesively to attain it.

 

Qualifications and Skills:

 

Requirements

  • Experience / Education:
    • Master’s or Ph.D. in Computer Science, Computer Engineering, Electrical Engineering, Statistics, Applied Mathematics or other related field.
    • 5+ years’ experience working in applied machine learning, or relevant research experience for recent Ph.D. graduates.
  • Highly technical:
    • Skilled in machine learning, problem-solving, pattern recognition and predictive modeling, with expertise in PySpark and Python.
    • Understanding of data structures and data modeling.
  • Effective communication and presentation skills.
  • Able to collaborate closely and effectively with teams.
  • Experience in time series forecasting is preferred.
  • Experience working in a start-up type environment is preferred.
  • Experience in CPG and/or Retail is preferred.
  • Strong management track record.
  • Strong interpersonal skills and leadership qualities.

 

Information Security Responsibilities

  • Understand and adhere to Information Security policies, guidelines and procedures, and practice them for the protection of organizational data and information systems.
  • Take part in Information Security training and act accordingly while handling information.
  • Report all suspected security and policy breaches to the Infosec team or the appropriate authority (CISO).

 

EEOC

 

Antuit.ai is an at-will, equal opportunity employer.  We consider applicants for all positions without regard to race, color, religion, national origin or ancestry, gender identity, sex, age (40+), marital status, disability, veteran status, or any other legally protected status under local, state, or federal law.
Job posted by
Purnendu Shakunt

Data Scientist

at 5 years old AI Startup

Data Science
Machine Learning (ML)
Python
Natural Language Processing (NLP)
Deep Learning
Pune
2 - 6 yrs
₹12L - ₹18L / yr
  •  3+ years of experience in Machine Learning
  • Bachelors/Masters in Computer Engineering/Science.
  • Bachelors/Masters in Engineering/Mathematics/Statistics with sound knowledge of programming and computer concepts.
  • 10th and 12th academics: 70% and above.

Skills:
 - Strong Python / programming skills
 - Good conceptual understanding of Machine Learning/Deep Learning/Natural Language Processing
 - Strong verbal and written communication skills.
 - Should be able to manage a team, meet project deadlines and interface with clients.
 - Should be able to work across different domains, quickly ramp up on business processes & flows, and translate business problems into data solutions

Job posted by
Ramya D

AI/ML, NLP, Chatbot Developer

at Lincode Labs India Pvt Ltd

Founded 2017  •  Products & Services  •  20-100 employees  •  Raised funding
Machine Learning (ML)
Natural Language Processing (NLP)
Artificial Intelligence (AI)
chat bot
Bengaluru (Bangalore)
3 - 7 yrs
₹4L - ₹20L / yr
Role Description: The Chatbot Developer will develop software and system architecture while ensuring alignment with enterprise technology standards (e.g., solution patterns, application frameworks).

Responsibilities:

- Develop REST/JSON APIs; design code for high scale/availability/resiliency.

- Develop responsive web apps and integrate APIs using NodeJS.

- Presenting Chat efficiency reports to higher Management

- Develop system flow diagrams to automate a business function and identify impacted systems; metrics to depict the cost benefit analysis of the solutions developed.

- Work closely with business operations to convert requirements into system solutions and collaborate with development teams to ensure delivery of highly scalable and available systems.

- Using tools to classify/categorize chats based on intents and computing an F1 score for chat analysis

- Experience in analyzing real agent chat conversations to train the chatbot.

- Developing Conversational Flows in the chatbot

- Calculating Chat efficiency reports.
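The intent-classification evaluation mentioned above comes down to standard precision/recall arithmetic. A minimal sketch in plain Python of scoring one intent label (the intent names and chat labels here are invented for illustration, not taken from this posting):

```python
def f1_per_intent(true_intents, pred_intents, intent):
    """Precision, recall and F1 for a single intent label, from paired
    lists of true and predicted intents."""
    pairs = list(zip(true_intents, pred_intents))
    tp = sum(1 for t, p in pairs if t == intent and p == intent)
    fp = sum(1 for t, p in pairs if t != intent and p == intent)
    fn = sum(1 for t, p in pairs if t == intent and p != intent)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical labelled chats: true intent vs. model prediction
true_i = ["refund", "greeting", "refund", "refund", "greeting"]
pred_i = ["refund", "refund", "refund", "greeting", "greeting"]
p, r, f1 = f1_per_intent(true_i, pred_i, "refund")  # → 2/3 for each
```

In practice a library metric (e.g. a macro-averaged F1 across all intents) would replace this hand-rolled version, but the computation per intent is exactly this.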

Good to Have:

- Monitors performance and quality control plans to identify performance issues.

- Works on problems of moderate and varied complexity where analysis of data may require adaptation of standardized practices. 

- Works with management to prioritize business and information needs.

- Identifies, analyzes, and interprets trends or patterns in complex data sets.

- Ability to manage multiple assignments.

- Understanding of ChatBot Architecture.

- Experience of Chatbot training
Job posted by
Ritika Nigam

Freelance Faculty

at Simplilearn Solutions

Founded 2009  •  Product  •  500-1000 employees  •  Profitable
Java
Amazon Web Services (AWS)
Big Data
Corporate Training
Data Science
Digital Marketing
Hadoop
Anywhere, United States, Canada
3 - 10 yrs
₹2L - ₹10L / yr
To introduce myself, I head Global Faculty Acquisition for Simplilearn.

About My Company: SIMPLILEARN is a company which has transformed 500,000+ careers across 150+ countries with 400+ courses, and yes, we are a Registered Professional Education Provider providing PMI-PMP, PRINCE2, ITIL (Foundation, Intermediate & Expert), MSP, COBIT, Six Sigma (GB, BB & Lean Management), Financial Modeling with MS Excel, CSM, PMI-ACP, RMP, CISSP, CTFL, CISA, CFA Level 1, CCNA, CCNP, Big Data Hadoop, CBAP, iOS, TOGAF, Tableau, Digital Marketing, Data Scientist with Python, Data Science with SAS & Excel, Big Data Hadoop Developer & Administrator, Apache Spark and Scala, Tableau Desktop 9, Agile Scrum Master, Salesforce Platform Developer, Azure & Google Cloud. Our official website: www.simplilearn.com

If you're interested in teaching, interacting, sharing real-life experiences and a passion to transform careers, please join hands with us.

Onboarding Process:
• An updated CV needs to be sent to my email id, with relevant certificate copies.
• Sample e-learning access will be shared with a 15-day trial post your registration on our website.
• My Subject Matter Expert will evaluate you on your areas of expertise over a telephonic conversation - duration 15 to 20 minutes.
• Commercial discussion.
• We will register you to our ongoing online session to introduce you to our course content and the Simplilearn style of teaching.
• A demo will be conducted to check your training style and internet connectivity.
• Freelancer Master Service Agreement.

Payment Process:
• Once a workshop / the last day of training for the batch is completed, you have to share your invoice.
• An automated tracking ID will be shared from our automated ticketing system.
• Our faculty group will verify the details provided and share the invoice with our internal finance team to process your payment; if any additional information is required, we will coordinate with you.
• Payment will be processed within 15 working days as per policy; the 15 days count from the date the invoice is received.

Please share your updated CV to proceed to the next step of the on-boarding process.
Job posted by
STEVEN JOHN

Data Scientist

at Octro Inc

Founded 2014  •  Product  •  100-500 employees  •  Profitable
Data Science
R Programming
Python
Noida, NCR (Delhi | Gurgaon | Noida)
1 - 7 yrs
₹10L - ₹20L / yr

Octro Inc. is looking for a Data Scientist who will support the product, leadership and marketing teams with insights gained from analyzing multiple sources of data. The ideal candidate is adept at using large data sets to find opportunities for product and process optimization and using models to test the effectiveness of different courses of action. 

 

They must have strong experience using a variety of data mining/data analysis methods, using a variety of data tools, building and implementing models, using/creating algorithms and creating/running simulations. They must have a proven ability to drive business results with their data-based insights. 

 

They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and working with stakeholders to improve business outcomes.

Responsibilities :

- Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.

- Mine and analyze data from multiple databases to drive optimization and improvement of product development, marketing techniques and business strategies.

- Assess the effectiveness and accuracy of new data sources and data gathering techniques.

- Develop custom data models and algorithms to apply to data sets.

- Use predictive modelling to increase and optimize user experiences, revenue generation, ad targeting and other business outcomes.

- Develop various A/B testing frameworks and test model qualities.

- Coordinate with different functional teams to implement models and monitor outcomes.

- Develop processes and tools to monitor and analyze model performance and data accuracy.
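At its core, the A/B testing responsibility above is a comparison of outcome rates between two variants. A minimal two-proportion z-test in plain Python, as a sketch of what such a framework computes (the conversion counts are made up; a production framework adds guardrails like sequential-testing corrections and power analysis):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: is variant B's conversion rate
    different from variant A's? Returns (z statistic, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the standard normal CDF, Phi(x) = (1+erf(x/sqrt 2))/2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative experiment: 200/2000 conversions vs. 260/2000
z, p = two_proportion_z(200, 2000, 260, 2000)  # z ≈ 2.97, p ≈ 0.003
```

With these illustrative numbers the lift is significant at the 5% level; real frameworks would also report confidence intervals on the lift itself.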

Qualifications :

- Strong problem-solving skills with an emphasis on product development and improvement.

- Advanced knowledge of SQL and its use in data gathering/cleaning.

- Experience using statistical computer languages (R, Python, etc.) to manipulate data and draw insights from large data sets.

- Experience working with and creating data architectures.

- Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.

- Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.

- Excellent written and verbal communication skills for coordinating across teams.
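Of the statistical techniques listed above, simple regression is compact enough to sketch from scratch. A minimal ordinary-least-squares fit for one predictor in plain Python (the data points are invented for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope*x + intercept:
    minimizes the sum of squared residuals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Illustrative data lying exactly on y = 2x + 0.1
xs = [1, 2, 3, 4]
ys = [2.1, 4.1, 6.1, 8.1]
m, b = fit_line(xs, ys)  # → slope 2.0, intercept 0.1
```

In day-to-day work this would be a one-liner in R (`lm`) or Python (`numpy.polyfit`, statsmodels, scikit-learn); the sketch only shows the arithmetic those calls perform.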

Job posted by
Reshma Suleman

Data Engineer

at Rely

Founded 2018  •  Product  •  20-100 employees  •  Raised funding
Python
Hadoop
Spark
Amazon Web Services (AWS)
Big Data
Amazon EMR
RabbitMQ
Bengaluru (Bangalore)
2 - 10 yrs
₹8L - ₹35L / yr

Intro

Our data and risk team is the core pillar of our business, harnessing alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops and maintains a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia to be effortlessly in control of their spending and make better decisions.


What will you do
The data engineer is focused on making data correct and accessible, and building scalable systems to access/process it. Another major responsibility is helping AI/ML Engineers write better code.

  • Optimize and automate ingestion processes for a variety of data sources such as clickstream, transactional and many other sources.
  • Create and maintain optimal data pipeline architecture and ETL processes
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Develop data pipelines and infrastructure to support real-time decisions
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
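The extract-transform-load flow described above can be shown end to end in a toy form, using plain Python with in-memory SQLite standing in for the warehouse (the CSV source, table and column names are invented for illustration):

```python
import csv
import io
import sqlite3

# Toy source data: one raw CSV extract with a missing value to clean out
RAW = """user_id,amount,country
1,10.5,SG
2,,IN
3,7.25,SG
"""

def extract(text):
    """Extract: parse raw CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows with missing amounts, cast to typed tuples."""
    return [(int(r["user_id"]), float(r["amount"]), r["country"])
            for r in rows if r["amount"]]

def load(rows):
    """Load: insert cleaned rows into an in-memory SQLite table."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE txns (user_id INT, amount REAL, country TEXT)")
    con.executemany("INSERT INTO txns VALUES (?, ?, ?)", rows)
    return con

con = load(transform(extract(RAW)))
total = con.execute("SELECT SUM(amount) FROM txns").fetchone()[0]  # → 17.75
```

A production pipeline swaps each stage for the real thing (S3/Kafka sources, Spark transforms, a Redshift or Snowflake sink, Airflow for orchestration), but the stage boundaries stay the same.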


What will you need
• 2+ years of hands-on experience building and implementing large-scale production pipelines and Data Warehouses
• Experience dealing with large-scale data

  • Proficiency in writing and debugging complex SQL
  • Experience working with AWS big data tools
  • Ability to lead the project and implement best data practices and technology

Data Pipelining

  • Strong command of building & optimizing data pipelines, architectures and data sets
  • Strong command of relational SQL & NoSQL databases including Postgres
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

Big Data: Strong experience in big data tools & applications

  • Tools: Hadoop, Spark, HDFS etc
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • Stream-processing systems: Storm, Spark-Streaming, Flink etc.
  • Message queuing: RabbitMQ, Spark etc

Software Development & Debugging

  • Strong experience in object-oriented programming / object function scripting languages: Python, Java, C++, Scala, etc.
  • Strong grasp of data structures & algorithms

What would be a bonus

  • Prior experience working in a fast-growth Startup
  • Prior experience in the payments, fraud, lending, advertising companies dealing with large scale data
Job posted by
Hizam Ismail

Machine learning Developer

at Chariot Tech

Founded 2017  •  Product  •  20-100 employees  •  Raised funding
Machine Learning (ML)
Big Data
Data Science
NCR (Delhi | Gurgaon | Noida)
1 - 5 yrs
₹15L - ₹16L / yr
We are looking for a Machine Learning Developer who possesses a passion for machine technology & big data and will work with our next-generation Universal IoT platform.

Responsibilities:
• Design and build machines that learn, predict and analyze data.
• Build and enhance tools to mine data at scale
• Enable the integration of Machine Learning models in the Chariot IoT Platform
• Ensure the scalability of Machine Learning analytics across millions of networked sensors
• Work with other engineering teams to integrate our streaming, batch, or ad-hoc analysis algorithms into Chariot IoT's suite of applications
• Develop generalizable APIs so other engineers can use our work without needing to be a machine learning expert
Job posted by
Raj Garg