Deep Learning Engineer

Posted by Neil Shroff
3 - 10 yrs
$25K - $50K / yr (ESOP available)
Remote, Mumbai, Bengaluru (Bangalore)
Skills
Deep Learning
TensorFlow
Machine Learning (ML)
Python

We are looking for an engineer with an ML/DL background.


The ideal candidate should have the following skill set:

1) Python
2) TensorFlow
3) Experience building and deploying systems
4) Experience with Theano/Torch/Caffe/Keras is useful
5) Experience with data warehousing/storage/management is a plus
6) Experience writing production software is a plus
7) Has developed their own DL architectures, not just used open-source architectures
8) Extensive experience with computer vision applications


Candidates will be responsible for building Deep Learning models to solve specific problems. The workflow would look as follows:

1) Define Problem Statement (input -> output)
2) Preprocess Data
3) Build DL model
4) Test on different datasets using Transfer Learning
5) Parameter Tuning
6) Deployment to production
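Purely as an illustration (not part of the job description), the six workflow steps above can be sketched end to end with a toy numpy model; every dataset, function name, and parameter below is invented for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Define problem statement (input -> output): two features -> binary label.
X_raw = rng.normal(size=(200, 2))
y = (X_raw[:, 0] + X_raw[:, 1] > 0).astype(float)

# 2) Preprocess data: standardize features and add a bias column.
def preprocess(X):
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    return np.column_stack([np.ones(len(X)), X])

X = preprocess(X_raw)

# 3) Build model: logistic regression standing in for a one-layer network.
def train(X, y, w=None, lr=0.5, steps=500):
    w = np.zeros(X.shape[1]) if w is None else w.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # forward pass
        w -= lr * X.T @ (p - y) / len(y)      # gradient step
    return w

w = train(X, y)

# 4) Transfer learning: reuse the trained weights as the starting point
#    on a second, related dataset.
X2_raw = rng.normal(size=(100, 2))
y2 = (X2_raw[:, 0] + X2_raw[:, 1] > 0).astype(float)
w2 = train(preprocess(X2_raw), y2, w=w, steps=100)

# 5) Parameter tuning: pick the learning rate with the best accuracy.
def accuracy(w, X, y):
    return float(((X @ w > 0).astype(float) == y).mean())

best_lr = max([0.01, 0.1, 1.0], key=lambda lr: accuracy(train(X, y, lr=lr), X, y))

# 6) Deployment: wrap the tuned weights in a plain predict function.
def predict(x):
    """x is a preprocessed row [1, z0, z1]."""
    return int(x @ w > 0)

print(accuracy(w, X, y))
```

A real deep-learning pipeline would swap step 3 for a TensorFlow model and step 6 for a serving layer, but the shape of the loop is the same.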


The candidate should have experience working on deep learning, with an engineering degree from a top-tier institute (preferably IIT/BITS or equivalent).

Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Subodh Popalwar

Software Engineer, Memorres
For 2 years, I had trouble finding a company with good work culture and a role that will help me grow in my career. Soon after I started using Cutshort, I had access to information about the work culture, compensation and what each company was clearly offering.
Companies hiring on Cutshort

About Nanonets

Founded: 2016
Type
Size: 20-100
Stage: Raised funding
About

Nanonets enables self-service artificial intelligence by simplifying adoption. Easily build machine learning models with minimal training data or knowledge of machine learning.


At Nanonets, we serve up the most accurate models. Always.

Tech Stack
Python
React.js
Go Programming (Golang)
Candid answers by the company
What does the company do?
What is the location preference of jobs?

Nanonets automates accounts payable in every possible way and streamlines the document-processing activity.

Product showcase
Nanonets
Visit
Uncover valuable insights from any document and automate repetitive tasks, with AI-powered workflows.
Connect with the team
Neil Shroff
Company social profiles
Blog · LinkedIn · Twitter · Facebook

Similar jobs

UpSolve Solutions LLP
Posted by Shaurya Kuchhal
Mumbai
2 - 4 yrs
₹7L - ₹11L / yr
Machine Learning (ML)
Data Science
Microsoft Windows Azure
Google Cloud Platform (GCP)
Python
+3 more

About UpSolve

Work on a cutting-edge tech stack and build innovative solutions in Computer Vision, NLP, Video Analytics and IoT.


Job Role

  • Ideate use cases to include recent tech releases.
  • Discuss business plans and assist teams in aligning with dynamic KPIs.
  • Design the solution architecture end to end, from inputs through the infrastructure and services used to the data store.


Job Requirements

  • Working knowledge about Azure Cognitive Services.
  • Project Experience in building AI solutions like Chatbots, sentiment analysis, Image Classification, etc.
  • Quick Learner and Problem Solver.


Job Qualifications

  • Work Experience: 2 years +
  • Education: Computer Science/IT Engineer
  • Location: Mumbai
Episource
Posted by Ahamed Riaz
Mumbai
5 - 12 yrs
₹18L - ₹30L / yr
Big Data
Python
Amazon Web Services (AWS)
Serverless
DevOps
+4 more

ABOUT EPISOURCE:


Episource has devoted more than a decade to building solutions for risk adjustment to measure healthcare outcomes. As one of the leading companies in healthcare, we have helped numerous clients optimize their medical records, data, and analytics to enable better documentation of care for patients with chronic diseases.


The backbone of our consistent success has been our obsession with data and technology. At Episource, all of our strategic initiatives start with the question - how can data be “deployed”? Our analytics platforms and datalakes ingest huge quantities of data daily, to help our clients deliver services. We have also built our own machine learning and NLP platform to infuse added productivity and efficiency into our workflow. Combined, these build a foundation of tools and practices used by quantitative staff across the company.


What’s our poison you ask? We work with most of the popular frameworks and technologies like Spark, Airflow, Ansible, Terraform, Docker, ELK. For machine learning and NLP, we are big fans of keras, spacy, scikit-learn, pandas and numpy. AWS and serverless platforms help us stitch these together to stay ahead of the curve.


ABOUT THE ROLE:


We’re looking to hire someone to help scale Machine Learning and NLP efforts at Episource. You’ll work with the team that develops the models powering Episource’s product focused on NLP driven medical coding. Some of the problems include improving our ICD code recommendations, clinical named entity recognition, improving patient health, clinical suspecting and information extraction from clinical notes.


This is a role for highly technical data engineers who combine outstanding oral and written communication skills with the ability to code up prototypes and productionize them using a large range of tools, algorithms, and languages. Most importantly, they need the ability to autonomously plan and organize their work assignments based on high-level team goals.


You will be responsible for setting an agenda to develop and ship data-driven architectures that positively impact the business, working with partners across the company including operations and engineering. You will use research results to shape strategy for the company and help build a foundation of tools and practices used by quantitative staff across the company.


During the course of a typical day with our team, expect to work on one or more projects around the following:


1. Create and maintain optimal data pipeline architectures for ML


2. Develop a strong API ecosystem for ML pipelines


3. Building CI/CD pipelines for ML deployments using GitHub Actions, Travis, Terraform and Ansible


4. Design and develop distributed, high-volume, high-velocity multi-threaded event processing systems


5. Knowledge of software engineering best practices across the development lifecycle, coding standards, code reviews, source management, build processes, testing, and operations  


6. Deploying data pipelines in production using Infrastructure-as-Code platforms

 

7. Designing scalable implementations of the models developed by our Data Science teams  


8. Big data and distributed ML with PySpark on AWS EMR, and more!
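Item 4 in the list above can be sketched in miniature with only the Python standard library: a queue fanning events out to worker threads. The handler, payloads, and function names here are invented for illustration, not part of the role:

```python
import queue
import threading

def start_workers(n_workers, handle):
    """Spin up n_workers threads that apply `handle` to each queued event."""
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            event = q.get()
            if event is None:          # sentinel: shut this worker down
                q.task_done()
                return
            out = handle(event)
            with lock:                 # the results list is shared state
                results.append(out)
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    return q, results, threads

# Usage: process 100 events with 4 workers, then drain and stop.
q, results, threads = start_workers(4, handle=lambda e: e * 2)
for e in range(100):
    q.put(e)
for _ in threads:
    q.put(None)                        # one sentinel per worker
q.join()                               # block until every event is handled
for t in threads:
    t.join()
print(sorted(results)[:3])             # [0, 2, 4]
```

A production system would replace the in-process queue with Kafka/Kinesis and the handler with real business logic, but the fan-out/sentinel/join structure carries over.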



BASIC REQUIREMENTS 


  1.  Bachelor’s degree or greater in Computer Science, IT or related fields

  2.  Minimum of 5 years of experience in cloud, DevOps, MLOps & data projects

  3. Strong experience with bash scripting, unix environments and building scalable/distributed systems

  4. Experience with automation/configuration management using Ansible, Terraform, or equivalent

  5. Very strong experience with AWS and Python

  6. Experience building CI/CD systems

  7. Experience with containerization technologies like Docker, Kubernetes, ECS, EKS or equivalent

  8. Ability to build and manage application and performance monitoring processes

Gurugram
5 - 10 yrs
₹25L - ₹40L / yr
Machine Learning (ML)
Data Science
Computer Vision
Machine vision
Artificial Intelligence (AI)
+2 more
About the company

Changing the way cataloging is done across the globe. Our vision is to empower the smallest of sellers, situated in the farthest of corners, to create superior product images and videos without the need for any external professional help. Imagine 30M+ merchants shooting product images or videos using their smartphones, and then choosing filters for Amazon, Asos, Airbnb, Doordash, etc. to instantly compose high-quality "tuned-in" product visuals. We are building AI-powered photo-editing software to capture and process beautiful product images for online selling. We are also fortunate and proud to be backed by the biggest names in the investment community, including the likes of Accel Partners, AngelList and prominent founders and internet-company operators, who believe there is a more intelligent and efficient way of doing digital production than how the world operates currently.

Job Description :

- We are looking for a seasoned Computer Vision Engineer with AI/ML/CV and Deep Learning skills to
play a senior leadership role in our Product & Technology Research Team.
- You will be leading a team of CV researchers to build models that automatically transform millions of raw e-commerce, automobile, food and real-estate images into processed final images.
- You will be responsible for researching the latest art of the possible in the field of computer vision, designing the solution architecture for our offerings, and leading the Computer Vision teams to build the core algorithmic models and deploy them on cloud infrastructure.
- Working with the Data team to ensure your data pipelines are well set up and
models are being constantly trained and updated
- Working alongside the product team to ensure that AI capabilities are built as democratized tools that allow internal as well as external stakeholders to innovate on top of them and make our customers successful.
- You will work closely with the Product & Engineering teams to convert the models into beautiful products
that will be used by thousands of Businesses everyday to transform their images and videos.

Job Requirements:

- 4-5 years of experience overall.
- Looking for strong Computer Vision experience and knowledge.
- BS/MS/PhD degree in Computer Science, Engineering or a related subject from an ivy-league institute
- Exposure on Deep Learning Techniques, TensorFlow/Pytorch
- Prior expertise on building Image processing applications using GANs, CNNs, Diffusion models
- Expertise with Image Processing Python libraries like OpenCV, etc.
- Good hands-on experience on Python, Flask or Django framework
- Authored publications at peer-reviewed AI conferences (e.g. NeurIPS, CVPR, ICML, ICLR,ICCV, ACL)
- Prior experience of managing teams and building large scale AI / CV projects is a big plus
- Great interpersonal and communication skills
- Critical thinking and problem-solving skills
xpressbees
Posted by Alfiya Khan
Pune, Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹25L / yr
Big Data
Data Warehouse (DWH)
Data modeling
Apache Spark
Data integration
+10 more
Company Profile
XpressBees – a logistics company started in 2015 – is amongst the fastest growing
companies of its sector. While we started off rather humbly in the space of
ecommerce B2C logistics, the last 5 years have seen us steadily progress towards
expanding our presence. Our vision to evolve into a strong full-service logistics
organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross
border operations. Our strong domain expertise and constant focus on meaningful
innovation have helped us rapidly evolve as the most trusted logistics partner of
India. We have progressively carved our way towards best-in-class technology
platforms, an extensive network reach, and a seamless last mile management
system. While on this aggressive growth path, we seek to become the one-stop-shop
for end-to-end logistics solutions. Our big focus areas for the very near future
include strengthening our presence as service providers of choice and leveraging the
power of technology to improve efficiencies for our clients.

Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform
and infrastructure to support high quality and agile decision-making in our supply chain and logistics
workflows.
You will define the way we collect and operationalize data (structured / unstructured), and build production pipelines for our machine learning models and (RT, NRT, batch) reporting and dashboarding requirements. As a Senior Data Engineer in the XB Data Platform Team, you will use your experience with modern cloud and data frameworks to build products (with storage and serving systems) that drive optimisation and resilience in the supply chain via data visibility, intelligent decision-making, insights, anomaly detection and prediction.

What You Will Do
• Design and develop data platform and data pipelines for reporting, dashboarding and
machine learning models. These pipelines would productionize machine learning models
and integrate with agent review tools.
• Meet the data completeness, correctness and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support
business needs. Come up with logical and physical database design across platforms (MPP,
MR, Hive/PIG) which are optimal physical designs for different use cases (structured/semi
structured). Envision & implement the optimal data modelling, physical design,
performance optimization technique/approach required for the problem.
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines and envision and build their
successors.

Qualifications & Experience relevant for the role

• A bachelor's degree in Computer Science or related field with 6 to 9 years of technology
experience.
• Knowledge of Relational and NoSQL data stores, stream processing and micro-batching to
make technology & design choices.
• Strong experience in System Integration, Application Development, ETL, Data-Platform
projects. Talented across technologies used in the enterprise space.
• Software development experience using:
• Expertise in relational and dimensional modelling
• Exposure across all the SDLC process
• Experience in cloud architecture (AWS)
• Proven track record in keeping existing technical skills and developing new ones, so that
you can make strong contributions to deep architecture discussions around systems and
applications in the cloud ( AWS).

• Characteristics of a forward thinker and self-starter that flourishes with new challenges
and adapts quickly to learning new knowledge
• Ability to work with cross-functional teams of consulting professionals across multiple
projects.
• Knack for helping an organization to understand application architectures and integration
approaches, to architect advanced cloud-based solutions, and to help launch the build-out
of those systems
• Passion for educating, training, designing, and building end-to-end systems.
SJTech Solutions
Posted by Shashwat Joshi
Remote, Bhopal
0 - 6 yrs
₹3.6L - ₹7.2L / yr
Python
Data Science
Machine Learning (ML)
Supervised learning
Unsupervised learning
Day-to-day responsibilities include:

1. Working on supervised and unsupervised learning algorithms
2. Developing deep learning and machine learning algorithms
3. Working on live projects on data analytics
Vernacular.ai
Posted by Abhishek Ekka
Bengaluru (Bangalore)
1 - 3 yrs
₹5L - ₹15L / yr
Machine Learning (ML)
Deep Learning

About us

Skit (previously known as Vernacular.ai) is an AI-first SaaS voice automation company. Its suite of speech and language solutions enables enterprises to automate their contact centre operations. With over 10 million hours of training data, its product - Vernacular Intelligent Voice Assistant (VIVA) - can currently respond in 16+ languages, covering 160+ dialects and replicating human-like conversations.

 

Skit currently serves a variety of enterprise clients across diverse sectors such as BFSI, F&B, Hospitality, Consumer Electronics and Travel & Tourism, including prominent clients like Axis Bank, Hathway, Porter and Barbeque Nation. It has been featured as one of the top-notch start-ups in the Cisco Launchpad's Cohort 6 and is a part of the World Economic Forum's Global Innovators Community. It has also been listed in Forbes 30 Under 30 Asia start-ups 2021 for its remarkable industry innovation.

We are looking for ML Research Engineers to work on the following problems:

  • Spoken Language Understanding and Dialog Management.
  • Language semantics, parsing, and modeling across multiple languages.
  • Speech Recognition, Speech Analytics and Voice Processing across multiple languages.
  • Response Generation and Speech Synthesis.
  • Active Learning, Monitoring and Observability mechanisms for deployments.

Responsibilities

  • Design, build and evaluate Machine Learning solutions.
  • Perform experiments and statistical analyses to draw conclusions and take modeling decisions.
  • Study, implement and extend state of the art systems.
  • Take part in regular research reviews and discussions.
  • Build, maintain and extend our open source solutions in the domain.
  • Write well-crafted programs at all levels of the system. This includes the data pipelines, experiment prototypes, fast and scalable deployment models, and evaluation, visualization and monitoring systems.

Requirements

  • Practical Machine Learning experience as demonstrated by earlier works.
  • Knowledge of and ability to use tools from theoretical and practical aspects of computer science. This includes, but is not limited to, probability, statistics, learning theory, algorithms, software architecture, programming languages, etc.
  • Good programming skills and ability to work with programs at all levels of a finished Machine Learning product. We prefer language agnosticism since that exemplifies this point.
  • Git portfolios and blogs are helpful as they let us better evaluate your work.

 

Applicant and Candidate Privacy Policy
 
This policy explains:
  • What information we collect during our application and recruitment process and why we collect it;
  • How we use that information; and
  • How to access and update that information.
Types of information we collect
This policy covers the information you share with Skit (Cyllid Technologies Pvt. Ltd.) during the application or recruitment process including:
  • Your name, address, email address, telephone number and other contact information;
  • Your resume or CV, cover letter, previous and/or relevant work experience or other experience, education, transcripts, or other information you provide to us in support of an application and/or the application and recruitment process;
  • Information from interviews and phone-screenings you may have, if any;
  • Details of the type of employment you are or may be looking for, current and/or desired salary and other terms relating to compensation and benefits packages, willingness to relocate, or other job preferences;
  • Details of how you heard about the position you are applying for;
  • Reference information and/or information received from background checks (where applicable), including information provided by third parties;
  • Information about your educational and professional background from publicly available sources, including online, that we believe is relevant to your application or a potential future application (e.g. your LinkedIn profile); and/or
  • Information related to any assessment you may take as part of the interview screening process.
How we use information we collect
Your information will be used by Skit for the purposes of carrying out its application and recruitment process which includes:
  • Assessing your skills, qualifications and interests against our career opportunities;
  • Verifying your information and carrying out reference checks and/or conducting background checks (where applicable) if you are offered a job;
  • Communications with you about the recruitment process and/or your application(s), including, in appropriate cases, informing you of other potential career opportunities at Skit;
  • Creating and/or submitting reports as required under any local laws and/or regulations, where applicable;
  • Making improvements to Skit's application and/or recruitment process including improving diversity in recruitment practices;
  • Proactively conducting research about your educational and professional background and skills and contacting you if we think you would be suitable for a role with us.
MNC
Agency job
via Fragma Data Systems by Harpreet Kour
Bengaluru (Bangalore)
3 - 7 yrs
₹15L - ₹20L / yr
Machine Learning (ML)
Deep Learning
Load Testing
Performance Testing
Stress Testing
+1 more

Primary Responsibilities

  • Understand current state architecture, including pain points.
  • Create and document future state architectural options to address specific issues or initiatives using Machine Learning.
  • Innovate and scale architectural best practices around building and operating ML workloads by collaborating with stakeholders across the organization.
  • Develop CI/CD & ML pipelines that help to achieve end-to-end ML model development lifecycle from data preparation and feature engineering to model deployment and retraining.
  • Provide recommendations around security, cost, performance, reliability, and operational efficiency and implement them
  • Provide thought leadership around the use of industry standard tools and models (including commercially available models and tools) by leveraging experience and current industry trends.
  • Collaborate with the Enterprise Architect, consulting partners and client IT team as warranted to establish and implement strategic initiatives.
  • Make recommendations and assess proposals for optimization.
  • Identify operational issues and recommend and implement strategies to resolve problems.

Must have:

  • 3+ years of experience in developing CI/CD & ML pipelines for end-to-end ML model/workloads development
  • Strong knowledge in ML operations and DevOps workflows and tools such as Git, AWS CodeBuild & CodePipeline, Jenkins, AWS CloudFormation, and others
  • Background in ML algorithm development, AI/ML Platforms, Deep Learning, ML Operations in the cloud environment.
  • Strong programming skillset with high proficiency in Python, R, etc.
  • Strong knowledge of AWS cloud and its technologies such as S3, Redshift, Athena, Glue, SageMaker etc.
  • Working knowledge of databases, data warehouses, data preparation and integration tools, along with big data parallel processing layers such as Apache Spark or Hadoop
  • Knowledge of pure and applied math, ML and DL frameworks, and ML techniques, such as random forest and neural networks
  • Ability to collaborate with Data scientist, Data Engineers, Leaders, and other IT teams
  • Ability to work with multiple projects and work streams at one time. Must be able to deliver results based upon project deadlines.
  • Willing to flex daily work schedule to allow for time-zone differences for global team communications
  • Strong interpersonal and communication skills
Velocity Services
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹35L / yr
Data engineering
Data Engineer
Big Data
Big Data Engineer
Python
+10 more

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda etc.

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of rows of data into warehouse tables.

  • Implement Data warehouse entities with common re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)

 

What To Bring

  • 3+ years of software development experience, a startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 2+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced with the formulation of ideas; building proof-of-concept (POC) and converting them to production-ready projects

  • Experience building and deploying applications on on-premise and cloud-based (AWS or Google Cloud) infrastructure

  • Basic understanding of Kubernetes & Docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.
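As a hedged sketch of the ELT/DBT pattern the bullets above describe, here is the load-then-transform shape using the stdlib sqlite3 module as a stand-in warehouse and a hand-written SQL model in place of DBT; all table and column names are invented:

```python
import sqlite3

# Extract: raw rows as they might arrive from a source system.
raw_orders = [
    ("2024-01-01", "acme", 120.0),
    ("2024-01-01", "acme", 80.0),
    ("2024-01-02", "globex", 50.0),
]

# Load: land the raw data first, untransformed (the "EL" of ELT).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raw_orders (order_date TEXT, customer TEXT, amount REAL)")
con.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", raw_orders)

# Transform: build a modeled table inside the warehouse with SQL,
# the way a DBT model would.
con.execute("""
    CREATE TABLE daily_revenue AS
    SELECT order_date, customer, SUM(amount) AS revenue
    FROM raw_orders
    GROUP BY order_date, customer
""")

rows = con.execute(
    "SELECT customer, revenue FROM daily_revenue ORDER BY customer"
).fetchall()
print(rows)  # [('acme', 200.0), ('globex', 50.0)]
```

The key design choice ELT makes, visible even at this scale, is that raw data lands before any modeling happens, so transforms can be re-run and versioned inside the warehouse.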

 

 

 

TransPacks Technologies IIT Kanpur
Posted by Pranav Asthana
Hyderabad
5 - 9 yrs
₹5L - ₹20L / yr
computer vision
Image Processing
Machine vision
Python
OpenCV
+4 more

Responsibilities

  • Build and mentor the computer vision team at TransPacks
  • Drive to productionize algorithms (industrial level) developed through hard-core research
  • Own the design, development, testing, deployment, and craftsmanship of the team’s infrastructure and systems capable of handling massive amounts of requests with high reliability and scalability
  • Leverage the deep and broad technical expertise to mentor engineers and provide leadership on resolving complex technology issues
  • Entrepreneurial and out-of-box thinking essential for a technology startup
  • Guide the team for unit-test code for robustness, including edge cases, usability, and general reliability

 

Eligibility

  • B.Tech in Computer Science and Engineering/Electronics/Electrical Engineering, with demonstrated interest in Image Processing/Computer Vision (courses, projects etc.) and 6-8 years of experience
  • M.Tech in Computer Science and Engineering/Electronics/Electrical Engineering, with demonstrated interest in Image Processing/Computer Vision (thesis work) and 4-7 years of experience
  • Ph.D in Computer Science and Engineering/Electronics/Electrical Engineering, with demonstrated interest in Image Processing/Computer Vision (Ph.D. dissertation) and an inclination to work in industry to provide innovative solutions to practical problems

Requirements

  • In-depth understanding of image processing algorithms, pattern recognition methods, and rule-based classifiers
  • Experience in feature extraction, object recognition and tracking, image registration, noise reduction, image calibration, and correction
  • Ability to understand, optimize and debug imaging algorithms
  • Understanding of and experience with the OpenCV library
  • Fundamental understanding of mathematical techniques involved in ML and DL schemas (Instance-based methods, Boosting methods, PGM, Neural Networks etc.)
  • Thorough understanding of state-of-the-art DL concepts (Sequence modeling, Attention, Convolution etc.) along with knack to imagine new schemas that work for the given data.
  • Understanding of engineering principles and a clear understanding of data structures and algorithms
  • Experience in writing production level codes using either C++ or Java
  • Experience with technologies/libraries such as python pandas, numpy, scipy
  • Experience with TensorFlow and scikit-learn.
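As a toy version of the noise-reduction requirement above, here is a 3x3 box filter written directly in numpy so it needs no OpenCV install; the function name and array sizes are invented for illustration:

```python
import numpy as np

def box_blur(img):
    """Average each pixel with its 3x3 neighbourhood (edges: replicate padding)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    # Accumulate the nine shifted copies of the image, then divide.
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

# Usage: a constant image is unchanged, while an isolated bright pixel
# (impulse noise) is spread over its neighbourhood.
flat = np.full((4, 4), 5.0)
assert np.allclose(box_blur(flat), flat)

spike = np.zeros((5, 5))
spike[2, 2] = 9.0
print(box_blur(spike)[2, 2])  # 1.0 (the 9.0 spike averaged over 9 pixels)
```

In practice this is `cv2.blur(img, (3, 3))`; writing it out makes the underlying convolution explicit.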
Zycus
Posted by madhavi JR
Bengaluru (Bangalore)
2 - 13 yrs
₹5L - ₹15L / yr
Machine Learning (ML)
Natural Language Processing (NLP)
Image Processing
Artificial Intelligence (AI)
Deep Learning

We are looking for applicants with a strong background in Analytics and Data mining (Web, Social and Big data), Machine Learning and Pattern Recognition, Natural Language Processing and Computational Linguistics, Statistical Modelling and Inferencing, Information Retrieval, Large Scale Distributed Systems and Cloud Computing, Econometrics and Quantitative Marketing, Applied Game Theory and Mechanism Design, Operations Research and Optimization, Human Computer Interaction and Information Visualization. Applicants with a background in other quantitative areas are also encouraged to apply.

We are looking for someone who can create and implement AI solutions. If you have built a product like IBM WATSON in the past and not just used WATSON to build applications, this could be the perfect role for you.

All successful candidates are expected to dive deep into problem areas of Zycus’ interest and invent technology solutions to not only advance the current products, but also to generate new product options that can strategically advantage the organization.

Skills:

  • Experience in predictive modelling and predictive software development
  • Skilled in Java, C++, Perl/Python (or similar scripting language)
  • Experience in using R, Matlab, or any other statistical software
  • Experience in mentoring junior team members, and guiding them on machine learning and data modelling applications
  • Strong communication and data presentation skills
  • Classification (SVM, decision tree, random forest, neural network)
  • Regression (linear, polynomial, logistic, etc.)
  • Classical optimization (gradient descent, Newton-Raphson, etc.)
  • Graph theory (network analytics)
  • Heuristic optimisation (genetic algorithms, swarm theory)
  • Deep learning (LSTM, convolutional NN, recurrent NN)
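One entry from the list above, classical gradient descent, can be sketched in plain Python on an invented one-dimensional objective:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Usage: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # 3.0 (converges to the minimum at x = 3)
```

Each update here is the contraction x ← 0.8x + 0.6, so the iterate approaches the fixed point x = 3 geometrically; the same update rule, applied per-weight, is what trains the neural networks listed above.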

Must Have:

  • Experience: 3-9 years
  • The ideal candidate must have proven expertise in Artificial Intelligence (including deep learning algorithms), Machine Learning and/or NLP
  • The candidate must also have expertise in programming traditional machine learning algorithms, algorithm design & usage
  • Preferred experience with large data sets & distributed computing in Hadoop ecosystem
  • Fluency with databases
