Deep Learning Engineer

at Nanonets

Posted by Neil Shroff
Remote, Mumbai, Bengaluru (Bangalore)
3 - 10 yrs
$25K - $50K / yr (ESOP available)
Full time
Skills
Deep Learning
TensorFlow
Machine Learning (ML)
Python

We are looking for an engineer with a background in machine learning and deep learning (ML/DL).


The ideal candidate should have the following skill set:

1) Python
2) TensorFlow
3) Experience building and deploying systems
4) Experience with Theano/Torch/Caffe/Keras is also useful
5) Experience with data warehousing/storage/management would be a plus
6) Experience writing production software would be a plus
7) Should have developed their own DL architectures apart from using open-source architectures
8) Extensive experience with computer vision applications


Candidates will be responsible for building deep learning models to solve specific problems. The workflow looks as follows:

1) Define Problem Statement (input -> output)
2) Preprocess Data
3) Build DL model
4) Test on different datasets using Transfer Learning
5) Parameter Tuning
6) Deployment to production
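The six steps above can be sketched as a minimal pipeline skeleton. This is plain Python with the model-specific parts stubbed out; the function names and toy data are illustrative, not Nanonets' actual code:

```python
# Minimal sketch of the six-step workflow; each stage is a stub
# standing in for the real TensorFlow code.

def preprocess(raw):
    """Step 2: normalise inputs to the [0, 1] range."""
    lo, hi = min(raw), max(raw)
    return [(x - lo) / (hi - lo) for x in raw]

def build_model():
    """Step 3: return a trivial 'model' (here, a threshold classifier)."""
    return lambda x: 1 if x > 0.5 else 0

def evaluate(model, dataset):
    """Steps 4-5: score the model on a held-out dataset."""
    correct = sum(1 for x, y in dataset if model(x) == y)
    return correct / len(dataset)

# Step 1: problem statement -- input: a scalar, output: a binary label.
raw = [2.0, 8.0, 3.0, 9.0]
labels = [0, 1, 0, 1]
features = preprocess(raw)                          # Step 2
model = build_model()                               # Step 3
acc = evaluate(model, list(zip(features, labels)))  # Steps 4-5
print(acc)                                          # Step 6 would ship this model
```

In the real workflow, `build_model` would assemble a TensorFlow graph and steps 4-5 would iterate over transfer-learning datasets and hyperparameters.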


Candidates should have experience working on deep learning, with an engineering degree from a top-tier institute (preferably IIT/BITS or equivalent).

About Nanonets

Automate data capture for intelligent document processing using Nanonets' self-learning, AI-based OCR. Process documents like invoices, receipts, ID cards and more!
Founded
2016
Type
Products & Services
Size
20-100 employees
Stage
Raised funding

Similar jobs

Data Engineering Manager

at Porter.in

Founded 2014  •  Services  •  100-1000 employees  •  Profitable
Python
SQL
Spark
Amazon Web Services (AWS)
Team Management
Bengaluru (Bangalore)
7 - 12 yrs
₹25L - ₹35L / yr

Manager | Data Engineering

Bangalore | Full Time

Company Overview:

At Porter, we are passionate about improving productivity. We want to help businesses, large and small, optimize their last-mile operations and empower them to unleash the growth of their core functions. Last-mile delivery logistics is one of the biggest and fastest-growing sectors of the economy, with a market size upwards of 50 billion USD and a growth rate exceeding 15% CAGR.

Porter is the fastest-growing leader in this sector, with operations in 14 major cities, a fleet of over 1L registered and 50k active driver partners, and a customer base of 3.5M monthly active users. Built on an industry-best technology platform, Porter has raised over 50 million USD from investors including Sequoia Capital, Kae Capital, Mahindra Group and LGT Aspada. We are addressing a massive problem and going after a huge market.

We’re trying to create a household name in transportation and our ambition is to disrupt all facets of last mile logistics including warehousing and LTL transportation. At Porter, we’re here to do the best work of our lives.

If you want to do the same and love the challenges and opportunities of a fast paced work environment, then we believe Porter is the right place for you.

 

Responsibilities

Data Strategy and Alignment

  • Work closely with data analysts and business / product teams to understand requirements and provide data ready for analysis and reporting.
  • Apply, help define, and champion data governance: data quality, testing, documentation, coding best practices and peer reviews.
  • Continuously discover, transform, test, deploy, and document data sources and data models.
  • Work closely with the Infrastructure team to build and improve our Data Infrastructure.
  • Develop and execute data roadmap (and sprints) - with a keen eye on industry trends and direction.

Data Stores and System Development

  • Design and implement high-performance, reusable, and scalable data models for our data warehouse to ensure our end-users get consistent and reliable answers when running their own analyses.
  • Focus on test driven design and results for repeatable and maintainable processes and tools.
  • Create and maintain optimal data pipeline architecture - and data flow logging framework.
  • Build the data products, features, tools, and frameworks that enable and empower Data, and Analytics teams across Porter.

Project Management

  • Drive project execution using effective prioritization and resource allocation.
  • Resolve blockers through technical expertise, negotiation, and delegation.
  • Strive for on-time complete solutions through stand-ups and course-correction.

Team Management

  • Manage and elevate a team of 5-8 members.
  • Hold regular one-on-ones with teammates to ensure their welfare.
  • Conduct periodic assessments and give actionable feedback on progress.
  • Recruit new members with a view to long-term resource planning, collaborating effectively with the hiring team.

Process design

  • Set the bar for the quality of technical and data-based solutions the team ships.
  • Enforce code quality standards and establish good code review practices - using this as a nurturing tool.
  • Set up communication channels and feedback loops for knowledge sharing and stakeholder management.
  • Explore the latest best practices and tools for constant up-skilling.

 

Data Engineering Stack

  • Analytics : Python / R / SQL + Excel / PPT, Google Colab
  • Database : PostgreSQL, Amazon Redshift, DynamoDB, Aerospike
  • Warehouse : Redshift, S3
  • ETL : Airflow + DBT + Custom-made Python + Amundsen (Discovery)
  • Business Intelligence / Visualization : Metabase + Google Data Studio
  • Frameworks : Spark + Dash + Streamlit
  • Collaboration : Git, Notion
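The ETL layer above chains Airflow tasks with DBT and custom Python. As a rough illustration of how such a DAG orders its tasks, here is a pure-Python sketch using the stdlib `graphlib` module (no Airflow dependency; the task names are invented, not Porter's real pipeline):

```python
# Toy DAG runner illustrating Airflow-style task ordering:
# a task runs only after all of its upstream tasks have run.
from graphlib import TopologicalSorter

# Invented task names modelled on a typical extract -> transform -> report flow.
# Each key maps to the set of tasks it depends on.
dag = {
    "extract_orders": set(),
    "extract_drivers": set(),
    "dbt_transform": {"extract_orders", "extract_drivers"},
    "refresh_dashboard": {"dbt_transform"},
}

run_order = list(TopologicalSorter(dag).static_order())
print(run_order)
```

In Airflow the same dependencies would be declared with the `>>` operator between task objects; the scheduler then executes them in a topological order just like this one.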
Job posted by
Satyajit Mittra

Data Science Core Developer

at Kwalee

Founded 2011  •  Product  •  100-500 employees  •  Profitable
Data Analytics
Data Science
Python
NOSQL Databases
SQL
Bengaluru (Bangalore)
3 - 15 yrs
Best in industry

Kwalee is one of the world’s leading multiplatform game publishers and developers, with well over 750 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Traffic Cop 3D and Makeover Studio 3D. Alongside this, we also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope and Die by the Blade. 

With a team of talented people collaborating daily between our studios in Leamington Spa, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, the Philippines and many more places, we have a truly global team making games for a global audience. And it’s paying off: Kwalee games have been downloaded in every country on earth! If you think you’re a good fit for one of our remote vacancies, we want to hear from you wherever you are based.

Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters for many years, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts. Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle. Could your idea be the next global hit?

What’s the job?

As a Data Science Core Developer you will build tools and develop technology that deliver data science products to a team of strategists, marketing experts and game developers.


What you will be doing

  • Create analytical tools, from simple scripts to full stack applications.
  • Develop successful prototype tools into highly tested automated programs
  • Work with the marketing, publishing and development teams to understand the problems they are facing, how to solve them and deliver products that are understandable to non-data scientists
  • Solve challenging data management and data flow problems to fuel Kwalee’s analysis


How you will be doing this

  • You'll be part of an agile, multidisciplinary and creative team, working closely with them to ensure the best results.
  • You'll think creatively, be motivated by challenges and constantly strive for the best.
  • You'll work with cutting-edge technology; if you need software or hardware to get the job done efficiently, you will get it. We even have a robot!


Team

Our talented team is our signature. We have a highly creative atmosphere with more than 200 staff where you’ll have the opportunity to contribute daily to important decisions. You’ll work within an extremely experienced, passionate and diverse team, including David Darling and the creator of the Micro Machines video games.


Skills and Requirements

  • A proven track record of writing high-quality program code in Python
  • Experience with machine learning Python frameworks and libraries such as TensorFlow and scikit-learn
  • The ability to write quick scripts to accelerate manual tasks
  • Knowledge of NoSQL and SQL databases like Couchbase, Elasticsearch and PostgreSQL will be helpful but not necessary
  • An avid interest in the development, marketing and monetisation of mobile games
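A "quick script to accelerate a manual task" can be just a few lines. For instance, summarising per-game download counts from a CSV export — the file contents and column names here are invented for illustration:

```python
# Quick-script example: aggregate download counts per game from CSV rows.
import csv
import io
from collections import defaultdict

# Stand-in for a real exported file; in practice you would pass
# open("downloads.csv") to csv.DictReader instead.
rows = io.StringIO(
    "game,downloads\n"
    "Draw It,100\n"
    "Draw It,50\n"
    "Teacher Simulator,70\n"
)

totals = defaultdict(int)
for row in csv.DictReader(rows):
    totals[row["game"]] += int(row["downloads"])

print(dict(totals))
```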


We offer

  • We want everyone involved in our games to share our success, that’s why we have a generous team profit sharing scheme from day 1 of employment
  • In addition to a competitive salary we also offer private medical cover and life assurance
  • Creative Wednesdays! (Design and make your own games every Wednesday)
  • 20 days of paid holidays plus bank holidays 
  • Hybrid model available depending on the department and the role
  • Relocation support available 
  • Great work-life balance with flexible working hours
  • Quarterly team building days - work hard, play hard!
  • Monthly employee awards
  • Free snacks, fruit and drinks


Our philosophy

We firmly believe in creativity and innovation and that a fundamental requirement for a successful and happy company is having the right mix of individuals. With the right people in the right environment anything and everything is possible.

Kwalee makes games to bring people, their stories, and their interests together. As an employer, we’re dedicated to making sure that everyone can thrive within our team by welcoming and supporting people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances. With the inclusion of diverse voices in our teams, we bring plenty to the table that’s fresh, fun and exciting; it makes for a better environment and helps us to create better games for everyone! This is how we move forward as a company – because these voices are the difference that make all the difference.

Job posted by
Michael Hoppitt

Data Engineer - Python, Apache, Spark

at Spica Systems

Founded 2019  •  Products & Services  •  20-100 employees  •  Raised funding
Python
Apache Spark
Kolkata
3 - 5 yrs
₹7L - ₹12L / yr
We are a Silicon Valley based start-up, established in 2019 and recognized as experts in building products and providing R&D and software development services in a wide range of leading-edge technologies such as LTE, 5G, cloud services (public: AWS, Azure, GCP; private: OpenStack) and Kubernetes. We have built a highly scalable and secure 5G Packet Core Network, orchestrated by an ML-powered Kubernetes platform, which can be deployed in various multi-cloud modes along with a test tool. Headquartered in San Jose, California, we have our R&D centre in Sector V, Salt Lake, Kolkata.
 

Requirements:

  • Overall 3 to 5 years of experience in designing and implementing complex large-scale software.
  • Strong Python skills are a must.
  • Experience in Apache Spark, Scala, Java and Delta Lake
  • Experience in designing and implementing templated ETL/ELT data pipelines
  • Expert-level experience in data pipeline orchestration using Apache Airflow for large-scale production deployments
  • Experience in visualizing data from various tasks in the data pipeline using Apache Zeppelin/Plotly or any other visualization library.
  • Log management and log monitoring using ELK/Grafana
  • GitHub integration
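A "templated" ETL/ELT pipeline usually means the steps are driven by configuration rather than hard-coded, so one runner serves many source/target combinations. A minimal pure-Python sketch of the idea (the template fields and sample rows are invented):

```python
# Templated ETL sketch: the pipeline steps come from a config dict,
# so the same runner serves many source/target combinations.
TEMPLATE = {
    "source": "orders",
    "steps": [
        ("filter", lambda r: r["amount"] > 0),  # drop refunds
        ("project", lambda r: {"id": r["id"], "amount": r["amount"]}),
    ],
}

def run_pipeline(rows, template):
    """Apply each configured step to the row set in order."""
    for name, fn in template["steps"]:
        if name == "filter":
            rows = [r for r in rows if fn(r)]
        elif name == "project":
            rows = [fn(r) for r in rows]
    return rows

raw = [
    {"id": 1, "amount": 10, "note": "x"},
    {"id": 2, "amount": -5, "note": "y"},
]
print(run_pipeline(raw, TEMPLATE))
```

In a Spark/Airflow stack the same pattern appears as parameterised DAGs whose transforms are generated from such templates.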

 

Technology Stack: Apache Spark, Apache Airflow, Python, AWS, EC2, S3, Kubernetes, ELK, Grafana, Apache Arrow, Java

Job posted by
Priyanka Bhattacharya

ML Engineer

at MNC

Agency job
via Fragma Data Systems
Machine Learning (ML)
Deep Learning
Load Testing
Performance Testing
Stress Testing
Test Planning
Bengaluru (Bangalore)
3 - 7 yrs
₹15L - ₹20L / yr

Primary Responsibilities

  • Understand current state architecture, including pain points.
  • Create and document future state architectural options to address specific issues or initiatives using Machine Learning.
  • Innovate and scale architectural best practices around building and operating ML workloads by collaborating with stakeholders across the organization.
  • Develop CI/CD & ML pipelines that help to achieve end-to-end ML model development lifecycle from data preparation and feature engineering to model deployment and retraining.
  • Provide recommendations around security, cost, performance, reliability, and operational efficiency and implement them
  • Provide thought leadership around the use of industry standard tools and models (including commercially available models and tools) by leveraging experience and current industry trends.
  • Collaborate with the Enterprise Architect, consulting partners and client IT team as warranted to establish and implement strategic initiatives.
  • Make recommendations and assess proposals for optimization.
  • Identify operational issues and recommend and implement strategies to resolve problems.
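One concrete piece of such a CI/CD and retraining pipeline is a retraining gate: redeploy only when the live model's metric degrades past a threshold. A hedged sketch — the metric, threshold, and function name are illustrative, not a specific tool's API:

```python
# Retraining gate: compare the production model's current metric
# against its baseline and flag when drift exceeds a tolerance.
def needs_retraining(baseline_auc, current_auc, tolerance=0.02):
    """Return True when the live metric has dropped more than `tolerance`."""
    return (baseline_auc - current_auc) > tolerance

print(needs_retraining(0.91, 0.86))  # drop of 0.05 exceeds 0.02
print(needs_retraining(0.91, 0.90))  # drop of 0.01 does not
```

In practice a check like this would run as a pipeline stage (e.g. in Jenkins or AWS CodePipeline) and trigger the model-training stage when it returns True.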

Must have:

  • 3+ years of experience in developing CI/CD & ML pipelines for end-to-end ML model/workloads development
  • Strong knowledge in ML operations and DevOps workflows and tools such as Git, AWS CodeBuild & CodePipeline, Jenkins, AWS CloudFormation, and others
  • Background in ML algorithm development, AI/ML Platforms, Deep Learning, ML Operations in the cloud environment.
  • Strong programming skillset with high proficiency in Python, R, etc.
  • Strong knowledge of AWS cloud and its technologies such as S3, Redshift, Athena, Glue, SageMaker etc.
  • Working knowledge of databases, data warehouses, data preparation and integration tools, along with big data parallel processing layers such as Apache Spark or Hadoop
  • Knowledge of pure and applied math, ML and DL frameworks, and ML techniques, such as random forest and neural networks
  • Ability to collaborate with Data scientist, Data Engineers, Leaders, and other IT teams
  • Ability to work with multiple projects and work streams at one time. Must be able to deliver results based upon project deadlines.
  • Willing to flex daily work schedule to allow for time-zone differences for global team communications
  • Strong interpersonal and communication skills
Job posted by
Harpreet kour

Fresher Data Engineer- Python+SQL (Internship+Job Opportunity)

at Fragma Data Systems

Founded 2015  •  Products & Services  •  employees  •  Profitable
SQL
Data engineering
Big Data
Python
Remote, Bengaluru (Bangalore), Hyderabad
0 - 1 yrs
₹2.5L - ₹4L / yr
● Hands-on work experience as a Python developer
● Hands-on work experience in SQL/PLSQL
● Expertise in at least one popular Python framework (like Django, Flask or Pyramid)
● Knowledge of object-relational mapping (ORM)
● Familiarity with front-end technologies (like JavaScript and HTML5)
● Willingness to learn and upgrade to Big Data and cloud technologies like PySpark, Azure etc.
● Team spirit
● Good problem-solving skills
● Ability to write effective, scalable code
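For the SQL/PLSQL side, the stdlib `sqlite3` module is enough to practise parameterised queries end-to-end from Python. The schema and rows here are invented:

```python
# Practise SQL from Python with the stdlib sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE users (name TEXT, city TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("Asha", "Bengaluru"), ("Ravi", "Hyderabad"), ("Meena", "Bengaluru")],
)

# Parameterised query: never interpolate user input into SQL strings.
count = conn.execute(
    "SELECT COUNT(*) FROM users WHERE city = ?", ("Bengaluru",)
).fetchone()[0]
print(count)  # 2
```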
Job posted by
Evelyn Charles

Data Engineer- SQL+PySpark

at Fragma Data Systems

Founded 2015  •  Products & Services  •  employees  •  Profitable
Spark
PySpark
Big Data
Python
SQL
Windows Azure
Remote, Bengaluru (Bangalore)
1 - 5 yrs
₹5L - ₹15L / yr
Must-Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience in SQL DBs - able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregations
• Good at ELT architecture: business rules processing and data extraction from a Data Lake into data streams for business consumption
• Good customer communication
• Good analytical skills
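The "transformation and aggregation" the bullets describe is, in PySpark, typically a group-by such as `df.groupBy("city").agg(F.sum("amount"))`. Since a Spark session may not be on hand, here is the same aggregation in pure Python to show the shape of the computation (column names and rows are invented):

```python
# Pure-Python equivalent of a PySpark aggregation such as
#   df.groupBy("city").agg(F.sum("amount"))
from itertools import groupby
from operator import itemgetter

rows = [
    {"city": "Bengaluru", "amount": 10},
    {"city": "Pune", "amount": 5},
    {"city": "Bengaluru", "amount": 7},
]

rows.sort(key=itemgetter("city"))  # itertools.groupby needs sorted input
totals = {
    city: sum(r["amount"] for r in grp)
    for city, grp in groupby(rows, key=itemgetter("city"))
}
print(totals)  # {'Bengaluru': 17, 'Pune': 5}
```

The key difference in Spark is that the group-by is distributed across partitions with a shuffle, so no global sort is needed.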
 
 
Technology Skills (Good to Have):
  • Building and operationalizing large scale enterprise data solutions and applications using one or more of AZURE data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsights, Databricks, CosmosDB, EventHub/IOTHub.
  • Experience in migrating on-premise data warehouses to data platforms on AZURE cloud. 
  • Designing and implementing data engineering, ingestion, and transformation functions
  • Azure Synapse or Azure SQL data warehouse
  • Spark on Azure (available in HDInsight and Databricks)
Job posted by
Evelyn Charles

Data Scientist

at Dunzo

Agency job
via zyoin
Data Science
Machine Learning (ML)
NumPy
R Programming
Python
Bengaluru (Bangalore)
8 - 12 yrs
₹50L - ₹90L / yr
  • B.Tech/MTech from tier 1 institution
  • 8+years of experience in machine learning techniques like logistic regression, random forest, boosting, trees, neural networks, etc.
  • Showcased experience with Python and SQL, and proficiency in scikit-learn, Pandas, NumPy, Keras and TensorFlow/PyTorch
  • Experience of working with Qlik Sense or Tableau is a plus
  • Experience of working in a product company is a plus
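As a refresher on the first technique in the list, logistic regression fits weights by gradient descent on the log-loss. A tiny from-scratch sketch on one-dimensional toy data (in practice this is one call to scikit-learn's `LogisticRegression`):

```python
# Logistic regression on 1-D toy data via gradient descent (no libraries).
import math

xs = [-2.0, -1.0, 1.0, 2.0]
ys = [0, 0, 1, 1]  # label is 1 for positive x

w, b, lr = 0.0, 0.0, 0.5
for _ in range(200):
    for x, y in zip(xs, ys):
        p = 1 / (1 + math.exp(-(w * x + b)))  # sigmoid prediction
        w -= lr * (p - y) * x                 # gradient of log-loss w.r.t. w
        b -= lr * (p - y)                     # gradient of log-loss w.r.t. b

preds = [1 if 1 / (1 + math.exp(-(w * x + b))) > 0.5 else 0 for x in xs]
print(preds)  # [0, 0, 1, 1]
```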
Job posted by
Pratibha Yadav

Predictive Modelling And Optimization Consultant (SCM)

at BRIDGEi2i Analytics Solutions

Founded 2011  •  Products & Services  •  100-1000 employees  •  Profitable
R Programming
Data Analytics
Predictive modelling
Supply Chain Management (SCM)
SQL
MySQL
Python
Statistical Modeling
Supply chain optimization
Bengaluru (Bangalore)
4 - 10 yrs
₹9L - ₹15L / yr

The person holding this position is responsible for leading the solution development and implementing advanced analytical approaches across a variety of industries in the supply chain domain.

At this position you act as an interface between the delivery team and the supply chain team, effectively understanding the client business and supply chain.

Candidates will be expected to lead projects across several areas such as

  • Demand forecasting
  • Inventory management
  • Simulation & Mathematical optimization models.
  • Procurement analytics
  • Distribution/Logistics planning
  • Network planning and optimization
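For the inventory-management area, a classic starting model is the Economic Order Quantity, EOQ = sqrt(2DS/H), which balances ordering cost against holding cost. A quick sketch — the demand and cost figures are invented for illustration:

```python
# Economic Order Quantity: the order size minimising total ordering
# plus holding cost, EOQ = sqrt(2 * D * S / H).
import math

def eoq(annual_demand, order_cost, holding_cost):
    """D = units/year, S = cost per order, H = holding cost per unit per year."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

# Invented figures: 10,000 units/yr, $50 per order, $4/unit/yr holding cost.
print(round(eoq(10_000, 50, 4)))  # 500
```

Real engagements layer demand forecasting and service-level constraints on top of this; EOQ is the textbook baseline against which those richer models are compared.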

 

Qualification and Experience

  • 4+ years of analytics experience in supply chain – preferably in industries such as hi-tech, consumer technology, CPG, automobile, retail or e-commerce supply chain.
  • Master's in Statistics/Economics, or MBA, or M.Sc./M.Tech with Operations Research/Industrial Engineering/Supply Chain
  • Hands-on experience in delivery of projects using statistical modelling

Skills / Knowledge

  • Hands on experience in statistical modelling software such as R/ Python and SQL.
  • Experience in advanced analytics / statistical techniques – regression, decision trees, ensemble machine learning algorithms etc. will be considered an added advantage.
  • Highly proficient with Excel, PowerPoint and Word applications.
  • APICS-CSCP or PMP certification will be added advantage
  • Strong knowledge of supply chain management
  • Working knowledge on the linear/nonlinear optimization
  • Ability to structure problems through a data driven decision-making process.
  • Excellent project management skills, including time and risk management and project structuring.
  • Ability to identify and draw on leading-edge analytical tools and techniques to develop creative approaches and new insights to business issues through data analysis.
  • Ability to liaise effectively with multiple stakeholders and functional disciplines.
  • Experience in optimization tools like CPLEX, ILOG, GAMS will be an added advantage.
Job posted by
Venniza Glades

Data Scientist

at humonics global pvt.ltd.

Founded 2017  •  Product  •  20-100 employees  •  Bootstrapped
Natural Language Processing (NLP)
Deep Learning
Machine Learning (ML)
R Programming
NCR (Delhi | Gurgaon | Noida)
5 - 10 yrs
₹10L - ₹15L / yr
• Using statistical and machine learning techniques to analyse large-scale user data, including text data and chat logs
• Applying machine learning techniques for text mining and information extraction based on structured, semi-structured and unstructured data
• Contributing to services like chatbots, voice portals and dialogue systems
• Inputting your own ideas to improve existing processes on services and products
Job posted by
vijeta sharma

Senior NLP Engineer

at Niki.ai

Founded 2015  •  Product  •  20-100 employees  •  Raised funding
Natural Language Processing (NLP)
Machine Learning (ML)
Python
Bengaluru (Bangalore)
5 - 12 yrs
₹20L - ₹40L / yr
We at niki.ai are currently looking for an NLP Engineer to join our team. We would love to tell you a little more about the position and learn a few things about you. Below is the job description for the Natural Language Processing Engineer role for your reference.

Job Description
Niki is an artificially intelligent ordering application (niki.ai/app). Our founding team is from IIT Kharagpur, and we are looking for a Natural Language Processing Engineer to join our engineering team.

The ideal candidate will have industry experience solving language-related problems using statistical methods on vast quantities of data available from Indian mobile consumers and elsewhere.

Major responsibilities would be:

1. Create language models from text data. These language models draw heavily on recent statistical, deep learning and rule-based research around building taggers, parsers, knowledge-graph-based dictionaries etc.

2. Develop highly scalable classifiers and tools leveraging machine learning, data regression and rule-based models

3. Work closely with product teams to implement algorithms that power user and developer-facing products
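A "highly scalable classifier" usually grows out of something as simple as a naive Bayes text classifier. A minimal from-scratch sketch on toy utterances — the "recharge" vs "cab" intents and all training phrases are invented for illustration:

```python
# Tiny naive Bayes intent classifier with add-one smoothing, from scratch.
import math
from collections import Counter, defaultdict

train = [
    ("recharge my phone", "recharge"),
    ("need a mobile topup", "recharge"),
    ("book a cab home", "cab"),
    ("get me a taxi now", "cab"),
]

word_counts = defaultdict(Counter)  # per-label word frequencies
label_counts = Counter()
vocab = set()
for text, label in train:
    label_counts[label] += 1
    for w in text.split():
        word_counts[label][w] += 1
        vocab.add(w)

def predict(text):
    """Pick the label maximising log P(label) + sum log P(word | label)."""
    scores = {}
    for label in label_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / len(train))
        for w in text.split():
            # Add-one smoothing keeps unseen words from zeroing the score.
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("topup my phone"))  # recharge
```

Production systems replace the bag-of-words counts with learned embeddings and neural classifiers, but the train/score structure stays the same.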

We work mostly in Java and Python, and object-oriented concepts are a must to fit in the team. Basic eligibility criteria are:

1. Graduate/Post-Graduate/M.S./Ph.D. in Computer Science/Mathematics/Machine Learning/NLP or allied fields.
2. A minimum of 5 years of industry experience.
3. Strong background in Natural Language Processing and Machine Learning.
4. Some experience leading a team, big or small.
5. Experience with Hadoop/HBase/Pig or MapReduce/Sawzall/Bigtable is a plus.

Competitive Compensation.

What We're Building
We are building an automated messaging platform to simplify the ordering experience for consumers. We have launched the Android app: niki.ai/app. In its current avatar, Niki can process mobile phone recharges and book cabs for consumers. It assists in finding the right recharge plans across top-up, 2G and 3G, and makes the transaction. In cab booking, it handles end-to-end booking along with tracking and cancellation within the app. You can also compare to get the nearest or the cheapest cab among those available.

Being an instant messaging app, it works seamlessly on 2G/3G/WiFi and is lightweight at around 3.6 MB. You can check it out at niki.ai.
 
Job posted by
Alyeska Araujo