NLP Engineer

at Streamoid

Agency job
Bengaluru (Bangalore)
4 - 6 yrs
₹4L - ₹20L / yr
Full time
Skills
Natural Language Processing (NLP)
PyTorch
Python
Java
Solr
Elastic Search
Skill Set:
  • 4+ years of experience.
  • Solid understanding of Python, Java, and general software development skills (source code management, debugging, testing, deployment, etc.).
  • Experience working with Solr and Elasticsearch.
  • Experience with NLP technologies and the handling of unstructured text.
  • Detailed understanding of text pre-processing and normalisation techniques such as tokenisation, lemmatisation, stemming, and POS tagging (see the sketch after this list).
  • Prior experience implementing traditional ML solutions for classification, regression, or clustering problems.
  • Expertise in text analytics - sentiment analysis, entity extraction, language modelling - and associated sequence learning models (RNN, LSTM, GRU).
  • Comfortable working with deep-learning libraries (e.g., PyTorch).
  • The candidate can even be a fresher with 1 or 2 years of experience; IIT, IIIT, BITS Pilani, and other top local colleges and universities are preferred.
  • A Master's degree in machine learning is preferred.
  • Candidates can be sourced from Mu Sigma and Manthan.
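To make the pre-processing bullet concrete, here is a minimal Python sketch, assuming spaCy with the en_core_web_sm model installed (an assumption; NLTK or any comparable library would work just as well):

    # Minimal tokenisation / lemmatisation / POS-tagging sketch using spaCy.
    # Assumes: pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def normalise(text):
        """Tokenise, lemmatise, and POS-tag a raw string."""
        doc = nlp(text)
        return [
            (tok.text, tok.lemma_.lower(), tok.pos_)  # token, lemma, POS tag
            for tok in doc
            if not tok.is_punct and not tok.is_space  # drop punctuation/whitespace
        ]

    print(normalise("The quick foxes were running past the stores."))
    # e.g. [('The', 'the', 'DET'), ('quick', 'quick', 'ADJ'), ('foxes', 'fox', 'NOUN'), ...]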


Similar jobs

Data Engineer

at Incubyte

Founded 2020  •  Services  •  20-100 employees  •  Bootstrapped
Data engineering
Spark
SQL
Windows Azure
MySQL
Python
ETL
ADF
azure
Remote only
2 - 3 yrs
₹8L - ₹20L / yr

Who are we?

 

We are incubators of high-quality, dedicated software engineering teams for our clients. We work with product organizations to help them scale or modernize their legacy technology solutions. We work with startups to help them operationalize their ideas efficiently. Incubyte strives to find people who are passionate about coding, learning, and growing along with us. We work with a limited number of clients at a time on dedicated, long-term commitments, with the aim of bringing a product mindset into services.

 

What we are looking for

 

We’re looking to hire software craftspeople: people who are proud of the way they work and the code they write, people who believe in and are evangelists of extreme programming principles, and high-quality, motivated, passionate people who make great teams. We strongly believe in being a DevOps organization, where developers own the entire release cycle and thus get to work not only with programming languages but also with infrastructure technologies in the cloud.

 

What you’ll be doing

 

First, you will be writing tests. You’ll be writing self-explanatory, clean code. Your code will produce the same, predictable results, over and over again. You’ll be making frequent, small releases. You’ll be working in pairs. You’ll be doing peer code reviews.

 

You will work in a product team, building products and rapidly rolling out new features and fixes.

 

You will be responsible for all aspects of development – from understanding requirements, writing stories, analyzing the technical approach to writing test cases, development, deployment, and fixes. You will own the entire stack from the front end to the back end to the infrastructure and DevOps pipelines. And, most importantly, you’ll be making a pledge that you’ll never stop learning!

 

Skills you need in order to succeed in this role

Most Important: Integrity of character, diligence and the commitment to do your best

Must Have: SQL, Databricks, (Scala / Pyspark), Azure Data Factory, Test Driven Development

Nice to Have: SSIS, Power BI, Kafka, Data Modeling, Data Warehousing

 

Self-Learner: You must be extremely hands-on and obsessive about delivering clean code

 

  • Sense of Ownership: Do whatever it takes to meet development timelines
  • Experience in creating end-to-end data pipelines
  • Experience in Azure Data Factory (ADF): creating multiple pipelines and activities for full and incremental data loads into Azure Data Lake Store and Azure SQL DW (see the sketch after this list)
  • Working experience in Databricks
  • Strong in BI/DW/data lake architecture, design, and ETL
  • Strong in requirement analysis, data analysis, and data modeling capabilities
  • Experience in object-oriented programming, data structures, algorithms, and software engineering
  • Experience working in Agile and Extreme Programming methodologies in a continuous deployment environment
  • Interest in mastering technologies like relational DBMSs, TDD, CI tools like Azure DevOps, complexity analysis, and performance
  • Working knowledge of server configuration and deployment
  • Experience using source control and bug tracking systems, writing user stories, and producing technical documentation
  • Expertise in creating tables, procedures, functions, triggers, indexes, views, and joins, and in optimizing complex queries
  • Experience with database versioning, backups, and restores
  • Expertise in data security
  • Ability to tune database queries for performance
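To illustrate the incremental-load bullet above, a minimal PySpark sketch; the connection details, table names, watermark value, and lake path are all hypothetical, and in practice an ADF pipeline would orchestrate and parameterize a step like this:

    # Watermark-based incremental load in PySpark (hypothetical names throughout).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("incremental-load").getOrCreate()

    # Last successfully loaded watermark, e.g. read from a control table.
    last_watermark = "2023-01-01 00:00:00"  # hypothetical value

    source = (spark.read.format("jdbc")
              .options(url="jdbc:sqlserver://source-db",  # hypothetical source DB
                       dbtable="dbo.orders")
              .load())

    # Keep only rows modified since the last run (the incremental slice).
    delta = source.filter(F.col("modified_at") > F.lit(last_watermark))

    # Append the slice to the lake; a full load would use mode("overwrite").
    delta.write.mode("append").parquet(
        "abfss://lake@account.dfs.core.windows.net/orders/")  # hypothetical path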
Job posted by
Lifi Lawrance

Senior Machine Learning Engineer | Permanent WFH

at Delivery Solutions

Founded 2015  •  Product  •  100-500 employees  •  Profitable
Python
NumPy
pandas
SQL
Docker
Remote only
3 - 6 yrs
₹7L - ₹19L / yr
Title: Senior Software Engineer - AI/ML

  • Minimum 3 years of technical experience in AI/ML (you can include internships and freelance work towards this)
  • Excellent proficiency in Python (NumPy, pandas)
  • Experience working with SQL/NoSQL databases
  • Experience working with AWS and Docker
  • Should have worked with large sets of data
  • Should be familiar with ML model building and deployment on AWS (see the sketch after this list)
  • Good communication skills and very good problem-solving skills
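As a hedged sketch of the model-building-and-deployment bullet, training a small model and pushing the artifact to S3 (the bucket and key are hypothetical, and the real serving stack may differ):

    # Train a toy model and upload the artifact to S3 (hypothetical bucket/key).
    import pickle
    import boto3
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    with open("model.pkl", "wb") as f:
        pickle.dump(model, f)  # serialize the trained model

    # Upload so a serving container can pull it down later.
    # Assumes AWS credentials are already configured.
    boto3.client("s3").upload_file("model.pkl", "my-ml-artifacts", "models/model.pkl")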

Perks & Benefits @Delivery Solutions: 

  • Permanent Remote work - (Work from anywhere)
  • Broadband reimbursement
  • Flexi work hours - (Login/Logout flexibility)
  • 21 Paid leaves in a year (Jan to Dec) and 7 COVID leaves
  • Two appraisal cycles in a year
  • Encashment of unused leaves on Gross
  • RNR - Amazon Gift Voucher
  • Employee Referral Bonus
  • Technical & Soft skills training
  • Sodexo meal card
  • Surprise on birthday/ service anniversary/new baby/wedding gifts
  • Annual trip 
Job posted by
Ayyappan Paramasivam

Senior Data Scientist (Health Metrics)

at Biostrap

Founded 2016  •  Products & Services  •  20-100 employees  •  Bootstrapped
Data Science
Mathematics
Python
Machine Learning (ML)
Amazon Web Services (AWS)
Algorithms
Remote only
5 - 20 yrs
₹10L - ₹30L / yr

Introduction

The Biostrap platform extracts many metrics related to health, sleep, and activity.  Many algorithms are designed through research and often based on scientific literature, and in some cases they are augmented with or entirely designed using machine learning techniques.  Biostrap is seeking a Data Scientist to design, develop, and implement algorithms to improve existing metrics and measure new ones. 

Job Description

As a Data Scientist at Biostrap, you will take on projects to improve or develop algorithms to measure health metrics, including:

  • Research: search literature for starting points of the algorithm
  • Design: decide on the general idea of the algorithm, in particular whether to use machine learning, mathematical techniques, or something else.
  • Implement: program the algorithm in Python, and help deploy it.  

The algorithms and their implementation will have to be accurate, efficient, and well-documented.

Requirements

  • A Master’s degree in a computational field, with a strong mathematical background. 
  • Strong knowledge of, and experience with, different machine learning techniques, including their theoretical background.  
  • Strong experience with Python
  • Experience with Keras/TensorFlow, and preferably also with RNNs (see the sketch after this list)
  • Experience with AWS or similar services for data pipelining and machine learning.  
  • Ability and drive to work independently on an open problem.
  • Fluency in English.
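Purely for illustration, a minimal Keras sketch of the implement step: regressing a health metric from windows of sensor samples with an RNN. The data, shapes, and architecture are synthetic assumptions, not Biostrap's actual pipeline:

    # Toy RNN regressor in Keras (synthetic data; shapes are assumptions).
    import numpy as np
    import tensorflow as tf

    # 256 windows of 100 time steps x 3 sensor channels -> 1 target metric each.
    X = np.random.randn(256, 100, 3).astype("float32")
    y = np.random.randn(256, 1).astype("float32")

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(100, 3)),  # sequence encoder
        tf.keras.layers.Dense(1),                        # metric estimate
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=3, batch_size=32, verbose=0)

    print(model.predict(X[:2]))  # estimates for the first two windows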
Job posted by
Reinier van

Data Engineer

at Indium Software

Founded 1999  •  Services  •  100-1000 employees  •  Profitable
SQL
Python
Hadoop
HiveQL
Spark
PySpark
Bengaluru (Bangalore), Hyderabad
1 - 9 yrs
₹1L - ₹15L / yr

Responsibilities:

 

* 3+ years of data engineering experience - design, develop, deliver, and maintain data infrastructures.

* SQL specialist - strong knowledge of and seasoned experience with SQL queries.

* Languages: Python.

* Good communicator, shows initiative, works well with stakeholders.

* Experience working closely with data analysts, providing the data they need and guiding them on issues.

* Solid ETL experience with Hadoop/Hive/PySpark/Presto/SparkSQL (see the sketch after this list).

* Solid communication and articulation skills.

* Able to handle stakeholders independently, with minimal intervention from the reporting manager.

* Develop strategies to solve problems in logical yet creative ways.

* Create custom reports and presentations accompanied by strong data visualization and storytelling.
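A minimal SparkSQL sketch of the kind of Hive-backed aggregation such reports might draw on (the table and column names are made up):

    # Aggregate a hypothetical Hive table with SparkSQL to feed a report.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    daily = spark.sql("""
        SELECT order_date, region, SUM(amount) AS revenue  -- hypothetical columns
        FROM sales.orders
        GROUP BY order_date, region
    """)
    daily.write.mode("overwrite").saveAsTable("reports.daily_revenue")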

 

We would be excited if you have:

 

* Excellent communication and interpersonal skills

* Ability to meet deadlines and manage project delivery

* Excellent report-writing and presentation skills

* Critical thinking and problem-solving capabilities

Job posted by
Karunya P
ETL
Data Warehouse (DWH)
ETL Developer
Relational Database (RDBMS)
Spark
Hadoop
SQL server
SSIS
ADF
Python
Java
talend
Azure Data Factory
Bengaluru (Bangalore)
5 - 8 yrs
₹8L - ₹13L / yr

Minimum of 4 years’ experience working on DW/ETL projects and expert hands-on working knowledge of ETL tools.

Experience with data management and data warehouse development, including:

Star schemas, Data Vaults, RDBMS, and ODS

Change data capture

Slowly changing dimensions (see the sketch below)

Data governance

Data quality

Partitioning and tuning

Data stewardship

Survivorship

Fuzzy matching

Concurrency

Vertical and horizontal scaling

ELT, ETL

Spark, Hadoop, MPP, RDBMS
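To make the slowly-changing-dimensions item concrete, a toy SCD Type 2 update in pandas; the column names and dates are illustrative, and a production version would live in the ETL tool or the warehouse itself:

    # Toy SCD Type 2 update in pandas (illustrative names and dates only).
    import pandas as pd

    dim = pd.DataFrame({
        "customer_id": [1], "city": ["Pune"],
        "valid_from": ["2020-01-01"], "valid_to": [None], "is_current": [True],
    })
    incoming = pd.DataFrame({"customer_id": [1], "city": ["Mumbai"]})
    today = "2024-01-01"

    # Find rows whose tracked attribute changed.
    changed = dim.merge(incoming, on="customer_id", suffixes=("", "_new"))
    changed = changed[changed["city"] != changed["city_new"]]

    # Expire the old row, then append a new current row for each change.
    expire = dim["customer_id"].isin(changed["customer_id"]) & dim["is_current"]
    dim.loc[expire, ["valid_to", "is_current"]] = [today, False]
    new_rows = (changed[["customer_id", "city_new"]]
                .rename(columns={"city_new": "city"})
                .assign(valid_from=today, valid_to=None, is_current=True))
    dim = pd.concat([dim, new_rows], ignore_index=True)
    print(dim)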

Experience with DevOps architecture, implementation, and operation

Hands-on working knowledge of Unix/Linux

Building complex SQL queries; expert SQL and data analysis skills, with the ability to debug and fix data issues

Complex ETL program design and coding

Experience in shell scripting and batch scripting

Good communication (oral and written) and interpersonal skills

Work closely with business teams to understand their business needs, participate in requirements gathering, create artifacts, and seek business approval.

Help the business define new requirements; participate in end-user meetings to derive and define the business requirements; propose cost-effective solutions for data analytics; and familiarize the team with customer needs, specifications, design targets, and techniques to support task performance and delivery.

Propose good designs and solutions, and adhere to design and standards best practices.

Review and propose industry-best tools and technologies for ever-changing business rules and data sets. Conduct proofs of concept (POCs) with new tools and technologies to derive convincing benchmarks.

Prepare the plan, design and document the architecture, high-level topology design, and functional design, and review these with customer IT managers; provide detailed knowledge to the development team to familiarize them with customer requirements, specifications, design standards, and techniques.

Review code developed by other programmers; mentor, guide, and monitor their work, ensuring adherence to programming and documentation policies.

Work with functional business analysts to ensure that application programs are functioning as defined.

Capture user feedback/comments on the delivered systems and document them for the client and project manager’s review. Review all deliverables before final delivery to the client for quality adherence.

Technologies (Select based on requirement)

Databases - Oracle, Teradata, Postgres, SQL Server, Big Data, Snowflake, or Redshift

Tools – Talend, Informatica, SSIS, Matillion, Glue, or Azure Data Factory

Utilities for bulk loading and extracting

Languages – SQL, PL-SQL, T-SQL, Python, Java, or Scala

JDBC/ODBC, JSON

Data virtualization - data services development

Service Delivery - REST, Web Services

Data Virtualization Delivery – Denodo

 

ELT, ETL

Cloud certification: Azure

Complex SQL queries

Data ingestion, data modeling (domain), consumption (RDBMS)
Job posted by
Jerrin Thomas

GCP Data Engineer (WFH Permanently)

at Fresh Prints

Founded 2009  •  Products & Services  •  20-100 employees  •  Profitable
Google Cloud Platform (GCP)
SQL
PySpark
Data engineering
Big Data
Hadoop
Spark
Data migration
Python
Tableau
MS-Excel
Remote only
1 - 3 yrs
₹8L - ₹14L / yr

 

Fresh Prints (https://www.freshprints.com/home) is a New York-based custom apparel startup. We find incredible students and give them the working capital, training, and support to build the business at their schools. We have 400+ students who will do $15 million in sales over the next 12 months.


You’ll be focused on the next $50 million. Data is a product that can be used to drive team behaviors and generate revenue growth.

  • How do we use our data to drive up account value?
  • How do we develop additional revenue channels?
  • How do we increase operational efficiency?
  • How do we usher in the next stage at Fresh Prints?

Those are the questions the members of our cross-functional Growth Team work on every day. They do so not as data analysts, developers, or marketers, but as entrepreneurs, determined to drive the business forward.

You’d be our first dedicated Data Engineer. As such, you’ll touch every aspect of data at Fresh Prints. You’ll work with the rest of the Data Science team to sanitize the data systems, automate pipelines, and build test cases for post-batch and regular quality evaluation.

You will develop alert systems for fire drill activities, build an immediate cleanup plan, and document the evolution of the ingestion process. You will also assist in mining that data for insights and building visualizations that help the entire company utilize those insights.

This role reports to Abilash Reddy (https://www.linkedin.com/in/abilashls/), our Head of Data & CRM, and you will work with an extremely talented and professional team.

Responsibilities

  • Work closely with every team at Fresh Prints to uncover ways in which data shapes how effective they are
  • Design, build, and operationalize large-scale enterprise data solutions and applications using GCP data pipeline and automation services (see the sketch after this list)
  • Create, restructure, and manage large datasets, files, and systems
  • Take complete ownership of the implementation of data pipelines and hybrid data systems, along with test cases, as we scale
  • Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
  • Monitor and maintain overall Tableau Server health
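As one small, hedged example of such a GCP pipeline step, loading a file from Cloud Storage into BigQuery with the google-cloud-bigquery client (the bucket, dataset, and table names are hypothetical, and credentials are assumed to be configured):

    # Load a CSV from GCS into BigQuery (hypothetical bucket/dataset/table).
    from google.cloud import bigquery

    client = bigquery.Client()  # assumes GOOGLE_APPLICATION_CREDENTIALS is set

    job = client.load_table_from_uri(
        "gs://freshprints-raw/orders/2024-01-01.csv",  # hypothetical source file
        "analytics.orders",                            # hypothetical destination
        job_config=bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.CSV,
            skip_leading_rows=1,   # skip the header row
            autodetect=True,       # infer the schema from the file
        ),
    )
    job.result()  # block until the load job finishes
    print(client.get_table("analytics.orders").num_rows, "rows loaded")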

Experience Required

  • 3+ years of experience in Cloud data ingestion and automation from a variety of sources using GCP services and building hybrid data architecture on top
  • 3+ years of strong experience in SQL & Python
  • 3+ years of experience with Excel and/or Google Sheets
  • 2+ years of experience with Tableau Server or Tableau Online
  • A successful history of manipulating, processing, and extracting value from large disconnected datasets
  • Experience in an agile development environment
  • Perfect English fluency is a must

Personal Attributes

  • Able to connect the dots between data and business value
  • Strong attention to detail 
  • Proactive. You believe it’s always on you to make sure anything you do is a success
  • In love with a challenge. You revel in solving problems and want a job that pushes you out of your comfort zone
  • Goal-oriented. You’re incredibly ambitious. You’re dedicated to a long-term vision of who you are and where you want to go
  • Open to change. You’re inspired by the endless ways in which everything we do can always be improved
  • Calm under pressure. You have a sense of urgency but channel it into productively working through any issues

Education

  • Google Cloud certified (preferred)
  • A bachelor’s degree in computer science or information management is a strong plus

Compensation & Benefits

  • Competitive salary
  • Health insurance (India & Philippines)
  • Learning opportunities 
  • Working in a great culture

Job Location

  • This is a permanent WFH role, could be based in India or the Philippines. Candidates from other countries may be considered
  • We will have an office in Hyderabad, India but working from the office is completely optional

Working Hours

  • 3:30 PM to 11:30 PM IST 

Fresh Prints is an equal employment opportunity employer and promotes diversity, actively encouraging people of all backgrounds, ages, LGBTQ+ identities, and those with disabilities to apply.

Job posted by
Riza Amrin

Data Scientist

at Accolite Software

Founded 2007  •  Products & Services  •  100-1000 employees  •  Profitable
Data Science
R Programming
Python
Deep Learning
Neural networks
OpenCV
Machine Learning (ML)
Image Processing
Remote, Bengaluru (Bangalore)
3 - 10 yrs
₹5L - ₹24L / yr
  • Adept at machine learning techniques and algorithms
  • Feature selection, dimensionality reduction, and building and optimizing classifiers using machine learning techniques
  • Data mining using state-of-the-art methods
  • Doing ad-hoc analysis and presenting results
  • Proficiency in using query languages such as N1QL, SQL
  • Experience with data visualization tools such as D3.js, GGplot, Plotly, PyPlot, etc.
  • Creating automated anomaly detection systems and constant tracking of their performance (see the sketch after this list)
  • Strong in Python (a must)
  • Strong in data analysis and mining (a must)
  • Deep learning, neural networks, CNNs, image processing (a must)
  • Building analytic systems: data collection, cleansing, and integration
  • Experience with NoSQL databases such as Couchbase, MongoDB, Cassandra, HBase
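For the anomaly detection bullet, a minimal scikit-learn sketch; the data is synthetic, and since the posting does not name a technique, IsolationForest is just one plausible choice:

    # Minimal anomaly detector using scikit-learn's IsolationForest
    # (synthetic data; the technique is chosen for illustration only).
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    normal = rng.normal(0, 1, size=(500, 4))     # typical observations
    outliers = rng.uniform(-8, 8, size=(10, 4))  # injected anomalies
    X = np.vstack([normal, outliers])

    detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
    labels = detector.predict(X)  # -1 = anomaly, 1 = normal
    print("flagged:", int((labels == -1).sum()), "points")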

Job posted by
Nikita Sadarangani

Data Engineer

at a fast-paced startup

Big Data
Data engineering
Hadoop
Spark
Apache Hive
Data engineer
Google Cloud Platform (GCP)
Scala
Python
Airflow
bigquery
Pune
3 - 6 yrs
₹15L - ₹22L / yr

Years of Exp: 3-6+ years
Skills: Scala, Python, Hive, Airflow, Spark

Languages: Java, Python, Shell Scripting

GCP: BigTable, DataProc,  BigQuery, GCS, Pubsub

OR
AWS: Athena, Glue, EMR, S3, Redshift

MongoDB, MySQL, Kafka

Platforms: Cloudera / Hortonworks
AdTech domain experience is a plus.
Job Type - Full Time 

Job posted by
Kavita Singh

AI/ML, NLP, Chatbot Developer

at Lincode Labs India Pvt Ltd

Founded 2017  •  Products & Services  •  20-100 employees  •  Raised funding
Machine Learning (ML)
Natural Language Processing (NLP)
Artificial Intelligence (AI)
chat bot
Bengaluru (Bangalore)
3 - 7 yrs
₹4L - ₹20L / yr
Role Description: The Chatbot Developer will develop software and system architecture while ensuring alignment with enterprise technology standards (e.g., solution patterns, application frameworks).

Responsibilities:

- Develop REST/JSON APIs; design code for high scale, availability, and resiliency.

- Develop responsive web apps and integrate APIs using NodeJS.

- Present chat efficiency reports to higher management.

- Develop system flow diagrams to automate a business function and identify impacted systems, along with metrics to depict the cost-benefit analysis of the solutions developed.

- Work closely with business operations to convert requirements into system solutions, and collaborate with development teams to ensure delivery of highly scalable and available systems.

- Use tools to classify/categorize chats based on intents, and compute F1 scores for chat analysis (see the sketch after this list).

- Analyze real agent chat conversations to train the chatbot.

- Develop conversational flows in the chatbot.

- Calculate chat efficiency reports.
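Since the posting names both intent classification and F1 scores, here is a toy sketch of the two together; the utterances and intents are made up, and a real system would evaluate on held-out chats:

    # Toy intent classifier plus F1 score (made-up utterances and intents).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.pipeline import make_pipeline

    texts = ["where is my order", "track my package", "cancel my order",
             "i want a refund", "refund my payment", "stop my subscription"]
    intents = ["track", "track", "cancel", "refund", "refund", "cancel"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(texts, intents)

    preds = clf.predict(texts)  # in practice, predict on a held-out set
    print("macro F1:", f1_score(intents, preds, average="macro"))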

Good to Have:

- Monitors performance and quality control plans to identify performance issues.

- Works on problems of moderate and varied complexity where analysis of data may require adaptation of standardized practices. 

- Works with management to prioritize business and information needs.


- Identifies, analyzes, and interprets trends or patterns in complex data sets.

- Ability to manage multiple assignments.

- Understanding of ChatBot Architecture.

- Experience with chatbot training.
Job posted by
Ritika Nigam

Senior Machine Learning Engineer

at AthenasOwl

Founded 2017  •  Product  •  100-500 employees  •  Raised funding
Deep Learning
Natural Language Processing (NLP)
Machine Learning (ML)
Computer vision
Python
Data Structures
Mumbai
3 - 7 yrs
₹10L - ₹20L / yr

Company Profile and Job Description  

About us:  

AthenasOwl (AO) is our “AI for Media” solution that helps content creators and broadcasters create and curate smarter content. We launched the product in 2017 as an AI-powered suite for the media and entertainment industry. Clients use AthenasOwl’s context-adapted technology for redesigning content, making better targeting decisions, automating hours of post-production work, and monetizing massive content libraries.

For more details visit: www.athenasowl.tv   

  

Role:   

Senior Machine Learning Engineer  

Experience Level:   

4-6 years of experience

Work location:   

Mumbai (Malad W)   

  

Responsibilities:   

  • Develop cutting edge machine learning solutions at scale to solve computer vision problems in the domain of media, entertainment and sports
  • Collaborate with media houses and broadcasters across the globe to solve niche problems in the field of post-production, archiving and viewership
  • Manage a team of highly motivated engineers to deliver high-impact solutions quickly and at scale

 

 

The ideal candidate should have:   

  • Strong programming skills in any one or more programming languages like Python and C/C++
  • Sound fundamentals of data structures, algorithms and object-oriented programming
  • Hands-on experience with any one popular deep learning framework like TensorFlow, PyTorch, etc.
  • Experience in implementing Deep Learning Solutions (Computer Vision, NLP etc.)
  • Ability to quickly learn and communicate the latest findings in AI research
  • Creative thinking for leveraging machine learning to build end-to-end intelligent software systems
  • A pleasantly forceful personality and charismatic communication style
  • Someone who will raise the average effectiveness of the team and has demonstrated exceptional abilities in some area of their life. In short, we are looking for a “Difference Maker”

 

Job posted by
Ericsson Fernandes