Data Scientist

at a Machine Learning company based in New Delhi

Agency job
NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹10L - ₹25L / yr
Full time
Skills
Data Science
R Programming
Python
Machine Learning (ML)
Entity Framework
Natural Language Processing (NLP)
Computer Vision

Job Responsibilities

  • Design machine learning systems
  • Research and implement appropriate ML algorithms and tools
  • Develop machine learning applications according to requirements
  • Select appropriate datasets and data representation methods
  • Run machine learning tests and experiments
  • Perform statistical analysis and fine-tuning using test results
  • Train and retrain systems when necessary

 

Requirements for the Job

 

  1. Bachelor's/Master's/PhD in Computer Science, Mathematics, Statistics or an equivalent field from a tier-one college, with a minimum of 2 years of overall experience
  2. Minimum 1 year of experience working as a Data Scientist, deploying ML at scale in production
  3. Experience with machine learning techniques (e.g. NLP, Computer Vision, BERT, LSTM) and frameworks (e.g. TensorFlow, PyTorch, Scikit-learn)
  4. Working knowledge of deploying Python systems (using Flask, TensorFlow Serving)
  5. Previous experience in the following areas will be preferred: Natural Language Processing (NLP) using LSTM and BERT; chatbots or dialogue systems, machine translation, text comprehension, text summarization
  6. Computer Vision: deep neural networks/CNNs for object detection and image classification, transfer-learning pipelines, and object detection/instance segmentation (Mask R-CNN, YOLO, SSD)
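The Flask deployment requirement above can be sketched minimally. This is an illustrative sketch only: the `/predict` endpoint and the rule-based stand-in for a trained model are invented; a real service would load a serialized model (or call TensorFlow Serving) instead.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical stand-in for a trained model: a trivial rule-based sentiment scorer.
def predict(text: str) -> str:
    return "positive" if "good" in text.lower() else "negative"

@app.route("/predict", methods=["POST"])
def predict_route():
    payload = request.get_json(force=True)
    label = predict(payload.get("text", ""))
    return jsonify({"label": label})

# In production this app would be served behind a WSGI server such as gunicorn.
```

The point of the sketch is the shape of the deployment: a stateless HTTP endpoint that accepts JSON, runs inference, and returns JSON, so the model can be scaled horizontally behind a load balancer.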

Similar jobs

Machine Learning Engineer - SDE-1/2/3

at Anarock Technology

Founded 2017  •  Products & Services  •  20-100 employees  •  Bootstrapped
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
pandas
NumPy
Algorithms
Data Structures
Object Oriented Programming (OOPs)
Programming
NOSQL Databases
Jupyter Notebook
Mumbai, Bengaluru (Bangalore), Gurugram
1 - 4 yrs
₹12L - ₹30L / yr

At Anarock Tech, we are building a modern technology platform with automated analytics and reporting tools. This offers timely solutions to our real estate clients while delivering financially favourable and efficient results.

If it excites you to - drive innovation, create industry-first solutions, build new capabilities ground-up, and work with multiple new technologies, ANAROCK is the place for you.

We are looking for a Machine Learning (ML) Engineer to help us create artificial intelligence products. Machine Learning Engineer’s responsibilities include creating machine learning models and retraining systems. To do this job successfully, you need exceptional skills in statistics and programming. If you also have knowledge of data science and software engineering, we’d like to meet you. Your ultimate goal will be to shape and build efficient self-learning applications.

Key job responsibilities

  • Designing and developing machine learning and deep learning systems
  • Running machine learning tests and experiments
  • Implementing appropriate ML algorithms

Basic Qualifications

  • Good understanding of algorithms and data structures.
  • Experience in designing scalable ML systems is a plus.
  • Expert programming experience in at least one general-purpose programming language (Ruby, Python, Java, Elixir, C/C++); strong OO skills preferred.
  • Experience with ML/DL frameworks

Experience with pandas, NumPy, etc.
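The pandas/NumPy experience mentioned above can be given some flavour with a small, self-contained sketch. The listing data and column names here are invented for illustration: fill missing values with a group-wise statistic, then aggregate.

```python
import numpy as np
import pandas as pd

# Hypothetical property-listing data with a missing price
df = pd.DataFrame({
    "city": ["Mumbai", "Pune", "Mumbai", "Pune"],
    "price_lakh": [120.0, 80.0, 150.0, np.nan],
})

# Impute missing prices with the city-wise median, then compute the mean per city
df["price_lakh"] = df.groupby("city")["price_lakh"].transform(
    lambda s: s.fillna(s.median())
)
summary = df.groupby("city")["price_lakh"].mean()
```

`transform` keeps the result aligned with the original rows, which is what makes group-wise imputation a one-liner here.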

Skills that will help you build a success story with us

  • Worked in a start-up environment with high levels of ownership and full dedication.
  • Experience with NoSQL datastores like Redis, MongoDB, CouchDB, etc., with an understanding of the underlying sharding and scaling techniques
  • Experience in building highly scalable business applications, which involve implementing large complex business flows and dealing with a huge amount of data

Experience: 1 - 4 years

 

Locations: Bangalore or Mumbai or Gurgaon

 

Anarock Ethos - Values Over Value:

Our assurance of consistent ethical dealing with clients and partners reflects our motto - Values Over Value.

We value diversity within ANAROCK Group and are committed to offering equal opportunities in employment. We do not discriminate against any team member or applicant for employment based on nationality, race, color, religion, caste, gender identity / expression, sexual orientation, disability, social origin and status, indigenous status, political opinion, age, marital status or any other personal characteristics or status. ANAROCK Group values all talent and will do its utmost to hire, nurture and grow them.

 

Job posted by
Arpita Saha

GCP Data Engineer, WFH

at Multinational Company

Agency job
via Telamon HR Solutions
Data engineering
Google Cloud Platform (GCP)
Python
Remote only
5 - 15 yrs
₹27L - ₹30L / yr

• The incumbent should have hands-on experience in data engineering and GCP data technologies.

• Work with client teams to design and implement modern, scalable data solutions using a range of new and emerging technologies from the Google Cloud Platform.

• Work with Agile and DevOps techniques and implementation approaches in the delivery.

• Showcase your GCP data engineering experience when communicating with clients on their requirements, turning these into technical data solutions.

• Build and deliver data solutions using GCP products and offerings.

• Hands-on experience with Python.

• Experience with SQL or MySQL; experience with Looker is an added advantage.

Job posted by
Praveena Sagar

Data Engineer

at Slintel

Agency job
via Qrata
Big Data
ETL
Apache Spark
Spark
Data engineer
Data engineering
Linux/Unix
MySQL
Python
Amazon Web Services (AWS)
Bengaluru (Bangalore)
4 - 9 yrs
₹20L - ₹28L / yr
Responsibilities
  • Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse.
  • Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs.
  • Assemble large, complex data sets from third-party vendors to meet business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology.
  • Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems.

Requirements
  • 5+ years of experience in a Data Engineer role.
  • Proficiency in Linux.
  • Must have SQL knowledge and experience working with relational databases and query authoring (SQL), as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
  • Must have experience with Python/Scala.
  • Must have experience with Big Data technologies like Apache Spark.
  • Must have experience with Apache Airflow.
  • Experience with data pipeline and ETL tools like AWS Glue.
  • Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
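The SQL and relational-database expectations above can be illustrated with a tiny, self-contained example using Python's built-in sqlite3 module (the table and column names are invented; in this role the same query patterns would run against MySQL, Athena, or Redshift):

```python
import sqlite3

# In-memory database standing in for a real relational store
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INT, amount REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(1, 10.0), (1, 5.0), (2, 7.5)])

# Query authoring: total spend per user, highest first
rows = conn.execute(
    "SELECT user_id, SUM(amount) AS total "
    "FROM events GROUP BY user_id ORDER BY total DESC"
).fetchall()
```

The aggregate-group-order pattern shown here is the bread and butter of the reporting and analysis work the listing describes.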
Job posted by
Prajakta Kulkarni

Data Engineer - AWS

at A global business process management company

Agency job
via Jobdost
Data engineering
Data modeling
data pipeline
Data integration
Data Warehouse (DWH)
Data engineer
AWS RDS
Glue
AWS CloudFormation
Amazon Web Services (AWS)
DevOps
AWS Lambda
Python
Django
Data Pipeline
Step functions
RDS
Gurugram, Pune, Mumbai, Bengaluru (Bangalore), Chennai, Nashik
4 - 12 yrs
₹12L - ₹15L / yr

 

 

Designation – Deputy Manager - TS


Job Description

  1. Total of 8-9 years of development experience in Data Engineering (B1/BII role).
  2. Minimum of 4-5 years in AWS data integrations, with very good data modelling skills.
  3. Should be very proficient in end-to-end AWS data solution design, including strong data ingestion and integration skills (both data at rest and data in motion) as well as complete DevOps knowledge.
  4. Should have experience delivering at least 4 Data Warehouse or Data Lake solutions on AWS.
  5. Should have very strong experience with Glue, Lambda, Data Pipeline, Step Functions, RDS, CloudFormation, etc.
  6. Strong Python skills.
  7. Should be an expert in cloud design principles, performance tuning and cost modelling. AWS certifications will be an added advantage.
  8. Should be a team player with excellent communication, able to manage their work independently with minimal or no supervision.
  9. A Life Science & Healthcare domain background will be a plus.
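As one small illustration of the AWS serverless tooling named above, here is a hypothetical Lambda handler. The event shape follows the S3 "object created" notification format, but the routing logic and return value are invented for this sketch; a real pipeline would kick off a Glue job or a Step Functions state machine.

```python
# Hypothetical AWS Lambda handler reacting to S3 "object created" events
def handler(event, context):
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records]
    # In a real pipeline this is where a Glue job or Step Functions
    # execution would be started for each new object.
    return {"processed": len(keys), "keys": keys}

# Local invocation with a synthetic event, as one might do in a unit test
result = handler({"Records": [{"s3": {"object": {"key": "raw/data.csv"}}}]}, None)
```

Keeping the handler a plain function like this makes it trivially unit-testable outside AWS, which is a common practice for the DevOps-heavy workflow the listing asks for.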

Qualifications

BE/Btect/ME/MTech

 

Job posted by
Saida Jabbar

Data Engineer

at RedSeer Consulting

Python
PySpark
SQL
pandas
Cloud Computing
Microsoft Windows Azure
Big Data
Bengaluru (Bangalore)
0 - 2 yrs
₹10L - ₹15L / yr

BRIEF DESCRIPTION:

At least 1 year of Python, Spark, SQL and data engineering experience

Primary Skillset: PySpark, Scala/Python/Spark, Azure Synapse, S3, RedShift/Snowflake

Relevant Experience: Legacy ETL job Migration to AWS Glue / Python & Spark combination

 

ROLE SCOPE:

Reverse engineer the existing/legacy ETL jobs

Create the workflow diagrams and review the logic diagrams with Tech Leads

Write equivalent logic in Python & Spark

Unit test the Glue jobs and certify the data loads before passing to system testing

Follow the best practices, enable appropriate audit & control mechanism

Analytically skilled; identify root causes quickly and debug issues efficiently

Take ownership of the deliverables and support the deployments

 

REQUIREMENTS:

Create data pipelines for data integration into cloud stacks, e.g. Azure Synapse

Code data processing jobs in Azure Synapse Analytics, Python, and Spark

Experience in dealing with structured, semi-structured, and unstructured data in batch and real-time environments.

Should be able to process .json, .parquet and .avro files
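The file-format requirement above can be sketched for the JSON case with the standard library alone; Parquet and Avro would need libraries such as pyarrow or fastavro. The records and field names here are invented, and the in-memory buffer stands in for a real file.

```python
import io
import json

# Hypothetical semi-structured input: one JSON object per line (JSONL),
# with an in-memory buffer standing in for a file on disk or in cloud storage
raw = io.StringIO('{"id": 1, "tags": ["a"]}\n{"id": 2, "tags": []}\n')

# Parse each line, then filter to records that actually carry tags
records = [json.loads(line) for line in raw]
tagged = [r["id"] for r in records if r["tags"]]
```

The same parse-then-filter shape carries over to Parquet and Avro once a reader library supplies the records.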

 

PREFERRED BACKGROUND:

Tier-1/2 candidates from IITs/NITs/IIITs

However, relevant experience and a learning attitude take precedence

Job posted by
Raunak Swarnkar

Data Engineer

at Numerator

Founded 2018  •  Product  •  500-1000 employees  •  Profitable
Data Warehouse (DWH)
Informatica
ETL
Python
SQL
Datawarehousing
Remote, Pune
3 - 9 yrs
₹5L - ₹20L / yr

We’re hiring a talented Data Engineer and Big Data enthusiast to work on our platform and help ensure that our data quality is flawless. As a company, we take in millions of new data points every day. You will work with a passionate team of engineers to solve challenging problems and ensure that we can deliver the best data to our customers, on time. You will use the latest cloud data warehouse technology to build robust and reliable data pipelines.

Duties/Responsibilities Include:

  • Develop expertise in the different upstream data stores and systems across Numerator.
  • Design, develop and maintain data integration pipelines for Numerator's growing data sets and product offerings.
  • Build testing and QA plans for data pipelines.
  • Build data validation testing frameworks to ensure high data quality and integrity.
  • Write and maintain documentation on data pipelines and schemas.
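The data-validation duty listed above might look like the following minimal sketch. The validation rules and field names are invented for illustration; a production framework (e.g. Great Expectations) would express the same idea declaratively.

```python
# Minimal row-level validation pass over a batch of records
def validate(record: dict) -> list[str]:
    errors = []
    if not isinstance(record.get("user_id"), int):
        errors.append("user_id must be an integer")
    if record.get("amount", 0) < 0:
        errors.append("amount must be non-negative")
    return errors

# Collect only the rows that fail, keyed by their position in the batch
batch = [{"user_id": 1, "amount": 9.99}, {"user_id": "x", "amount": -1}]
bad_rows = {i: errs for i, rec in enumerate(batch) if (errs := validate(rec))}
```

Reporting which rows failed and why, rather than rejecting the whole batch, is what makes a validation pass useful for debugging upstream data quality.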
 

Requirements:

  • BS or MS in Computer Science or related field of study
  • 3 + years of experience in the data warehouse space
  • Expert in SQL, including advanced analytical queries
  • Proficiency in Python (data structures, algorithms, object-oriented programming, using APIs)
  • Experience working with a cloud data warehouse (Redshift, Snowflake, Vertica)
  • Experience with a data pipeline scheduling framework (Airflow)
  • Experience with schema design and data modeling

Exceptional candidates will have:

  • Amazon Web Services (EC2, DMS, RDS) experience
  • Terraform and/or Ansible (or similar) for infrastructure deployment
  • Airflow: experience building and monitoring DAGs, developing custom operators, using script templating solutions
  • Experience supporting production systems in an on-call environment
Job posted by
Ketaki Kambale

Artificial Intelligence Engineer

at Avegen India Pvt. Ltd

Founded 2015  •  Products & Services  •  100-1000 employees  •  Profitable
Intelligence
Artificial Intelligence (AI)
Deep Learning
Machine Learning (ML)
Data extraction
ETL
Python
Systems Development Life Cycle (SDLC)
Pune
3 - 8 yrs
₹3L - ₹20L / yr
Responsibilities
● Frame ML / AI use cases that can improve the company’s product
● Implement and develop ML / AI / data-driven rule-based algorithms as software items
● For example, building a chatbot that replies with an answer from the relevant FAQ, and reinforcing the system with a feedback loop so that the bot improves
Must have skills:
● Data extraction and ETL
● Python (numpy, pandas, comfortable with OOP)
● Django
● Knowledge of basic Machine Learning / Deep Learning / AI algorithms and the ability to implement them
● Good understanding of SDLC
● Deployed an ML / AI model in a mobile / web product
● Soft skills: strong communication skills & critical-thinking ability

Good to have:
● Full-stack development experience
Required Qualification:
B.Tech. / B.E. degree in Computer Science or equivalent software engineering
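The FAQ-chatbot example in the responsibilities above could be sketched with a naive keyword-overlap matcher. The FAQ entries here are invented, and a real system would use embeddings or an LSTM/BERT model plus the feedback loop the listing mentions; this only shows the retrieve-best-match shape.

```python
# Naive FAQ retrieval: answer with the entry whose question shares
# the most words with the user's query
FAQ = {
    "How do I reset my password?": "Use the 'Forgot password' link on the login page.",
    "How do I contact support?": "Email support@example.com.",
}

def answer(query: str) -> str:
    q_words = set(query.lower().split())
    best = max(FAQ, key=lambda q: len(q_words & set(q.lower().split())))
    return FAQ[best]

reply = answer("reset password")
```

Swapping the overlap score for cosine similarity over sentence embeddings upgrades this sketch toward the BERT-based approach the listing hints at, without changing its structure.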
Job posted by
Shubham Shinde

Data Engineer

at Our client company is into Analytics. (RF1)

Agency job
via Multi Recruit
Data Engineer
Big Data
Python
Amazon Web Services (AWS)
SQL
Java
ETL
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹14L / yr
  • We are looking for a Data Engineer with 3-5 years of experience in Python, SQL, AWS (EC2, S3, Elastic Beanstalk, API Gateway), and Java.
  • The applicant must be able to perform Data Mapping (data type conversion, schema harmonization) using Python, SQL, and Java.
  • The applicant must be familiar with and have programmed ETL interfaces (OAUTH, REST API, ODBC) using the same languages.
  • The company is looking for someone who shows an eagerness to learn and who asks concise questions when communicating with teammates.
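Data mapping of the kind described above (type conversion, schema harmonization) could be sketched like this in Python. The source field names, target schema, and conversion rules are all hypothetical, chosen only to show the rename-then-convert pattern.

```python
# Target schema: field name -> converter from the source's string values
SCHEMA = {
    "customer_id": int,
    "balance": float,
    "active": lambda v: v.lower() == "true",
}
# Harmonization map: source field name -> canonical target name
RENAMES = {"custId": "customer_id", "bal": "balance", "isActive": "active"}

def harmonize(source: dict) -> dict:
    target = {}
    for src_key, value in source.items():
        key = RENAMES.get(src_key, src_key)
        if key in SCHEMA:
            target[key] = SCHEMA[key](value)
    return target

# A source record with string-typed fields mapped onto the target schema
row = harmonize({"custId": "42", "bal": "10.5", "isActive": "True"})
```

Keeping the schema and rename tables as data rather than code makes it easy to add new source systems, which is the core of schema harmonization across vendors.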
Job posted by
Ragul Ragul

Data Engineer - Google Cloud Platform

at Datalicious Pty Ltd

Founded 2007  •  Products & Services  •  20-100 employees  •  Raised funding
Python
Amazon Web Services (AWS)
Google Cloud Storage
Big Data
Data Analytics
Datawarehousing
Software Development
Data Science
Bengaluru (Bangalore)
2 - 7 yrs
₹7L - ₹20L / yr
DESCRIPTION:

We’re looking for an experienced Data Engineer with strong cloud technology experience to help our big data team take our products to the next level. This is a hands-on role: you will be required to code and develop the product in addition to your leadership role. You need a strong software development background and a love of working with cutting-edge big data platforms. You are expected to bring extensive hands-on experience with Amazon Web Services (Kinesis streams, EMR, Redshift), Spark and other big data processing frameworks and technologies, as well as advanced knowledge of RDBMS and Data Warehousing solutions.

REQUIREMENTS:

- Strong background working on large-scale Data Warehousing and data processing solutions.
- Strong Python and Spark programming experience.
- Strong experience in building big data pipelines.
- Very strong SQL skills are an absolute must.
- Good knowledge of OO, functional and procedural programming paradigms.
- Strong understanding of various design patterns.
- Strong understanding of data structures and algorithms.
- Strong experience with Linux operating systems.
- At least 2+ years of experience working as a software developer in a data-driven environment.
- Experience working in an agile environment.
- Lots of passion, motivation and drive to succeed!

Highly desirable:

- Understanding of agile principles, specifically Scrum.
- Exposure to Google Cloud Platform services such as BigQuery, Compute Engine, etc.
- Docker, Puppet, Ansible, etc.
- Understanding of the digital marketing and digital advertising space would be advantageous.

BENEFITS:

Datalicious is a global data technology company that helps marketers improve customer journeys through the implementation of smart data-driven marketing strategies. Our team of marketing data specialists offers a wide range of skills suitable for any challenge, covering everything from web analytics to data engineering, data science and software development.

Experience: Join us at any level and we promise you'll feel up-levelled in no time, thanks to the fast-paced, transparent and aggressive growth of Datalicious.

Exposure: Work with ONLY the best clients in the Australian and SEA markets; every problem you solve directly impacts millions of real people at large scale across industries.

Work Culture: Voted one of the Top 10 Tech Companies in Australia. Never a boring day at work, and we walk the talk. The CEO organises nerf-gun bouts in the middle of a hectic day.

Money: We'd love to have a long-term relationship because long-term benefits are exponential. We encourage people to get technical certifications via online courses or digital schools.

So if you are looking for the chance to work for an innovative, fast-growing business that will give you exposure across a diverse range of the world's best clients, products and industry-leading technologies, then Datalicious is the company for you!
Job posted by
Ramjee Ganti

Data Scientist

at Monexo FinTech (P) Ltd

Founded 2016  •  Products & Services  •  20-100 employees  •  Raised funding
Data Science
Python
R Programming
Mumbai, Chennai
1 - 3 yrs
₹3L - ₹5L / yr
The candidate should have:
- a good understanding of statistical concepts
- worked on data analysis and model building for 1 year
- the ability to implement data warehouse and visualisation tools (IBM, Amazon or Tableau)
- used ETL tools
- an understanding of scoring models

The candidate will be required:
- to build models for the approval or rejection of loans
- to build various reports (standard monthly reporting) to optimise the business
- to implement a data warehouse

The candidate should be a self-starter able to work without supervision. You will be the first and only employee in this role for the next 6 months.
Job posted by
Mukesh Bubna