Machine Learning Engineer

at Deemsoft

Posted by Shreedhar shree
Bengaluru (Bangalore)
3 - 7 yrs
₹6L - ₹20L / yr
Full time
Skills
Java
Machine Learning (ML)
Python
Natural Language Processing (NLP)
Requirements:
  • BS/Masters in Computer Science with 4+ years of experience.
  • Excellent knowledge of algorithms and data structures, and experience implementing them in Java.
  • Experience with distributed systems architectures, including multithreading and concurrency issues.
  • Experience working in an Agile/Scrum environment is desired.
  • Exceptional debugging, testing, and problem-solving skills.
  • Self-starter with a quick learning curve.
  • Strong written and verbal communication skills, and the ability and interest to mentor junior engineers.
  • Working experience within product development teams is a must.
  • Must have demonstrated the ability to create patentable ideas.
  • Working experience with Natural Language Processing (NLP)/Natural Language Understanding (NLU) based solutions is a plus.

About Deemsoft

Founded: 2006
Type: Products & Services
Size: 20-100 employees
Stage: Bootstrapped

Similar jobs

Data Scientist

at a content consumption and discovery app that provides news

Agency job
via Jobdost
Data Science
Deep Learning
R Programming
Python
Noida
2 - 5 yrs
₹30L - ₹40L / yr

Data Scientist

Requirements

● B.Tech/Masters in Mathematics, Statistics, Computer Science, or another quantitative field
● 2-3+ years of work experience in the ML domain (2-5 years of overall experience)
● Hands-on coding experience in Python
● Experience with machine learning techniques such as regression, classification, predictive modeling, clustering, deep learning, and NLP
● Working knowledge of TensorFlow/PyTorch (a minimal example follows the lists below)

Optional Add-ons:

● Experience with distributed computing frameworks: MapReduce, Hadoop, Spark, etc.
● Experience with databases: MongoDB
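
Purely as an illustration of the TensorFlow/PyTorch item above, here is a minimal PyTorch classification sketch; the data, network size, and hyperparameters are invented for the example and are not part of the role:

import torch
import torch.nn as nn

# Toy data: 100 samples, 20 features, 3 classes (purely illustrative).
X = torch.randn(100, 20)
y = torch.randint(0, 3, (100,))

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # forward pass and loss
    loss.backward()               # backpropagation
    optimizer.step()              # parameter update
    print(f"epoch {epoch}: loss={loss.item():.4f}")

The same loop structure applies whether the features come from tabular data, embeddings, or NLP pipelines; only the model and data loading change.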

Job posted by
Mamatha A

Data Engineer

at PayU

Founded 2002  •  Product  •  500-1000 employees  •  Profitable
Python
ETL
Data engineering
Informatica
SQL
Spark
Snowflake schema
Remote, Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹20L / yr

Role: Data Engineer  
Company: PayU

Location: Bangalore/Mumbai

Experience: 2-5 yrs


About Company:

PayU is the payments and fintech business of Prosus, a global consumer internet group and one of the largest technology investors in the world. Operating and investing globally in markets with long-term growth potential, Prosus builds leading consumer internet companies that empower people and enrich communities.

The leading online payment service provider in 36 countries, PayU is dedicated to creating a fast, simple and efficient payment process for merchants and buyers. Focused on empowering people through financial services and creating a world without financial borders where everyone can prosper, PayU is one of the biggest investors in the fintech space globally, with investments totalling $700 million to date. PayU also specializes in credit products and services for emerging markets across the globe. We are dedicated to removing risks to merchants, allowing consumers to use credit in ways that suit them and enabling a greater number of global citizens to access credit services.

Our local operations in Asia, Central and Eastern Europe, Latin America, the Middle East, Africa and South East Asia enable us to combine the expertise of high growth companies with our own unique local knowledge and technology to ensure that our customers have access to the best financial services.

India is the biggest market for PayU globally, and the company has already invested $400 million in this region in the last 4 years. PayU, in its next phase of growth, is developing a full regional fintech ecosystem providing multiple digital financial services in one integrated experience. We are going to do this through three mechanisms: build; co-build/partner; and select strategic investments.

PayU supports over 350,000+ merchants and millions of consumers making payments online with over 250 payment methods and 1,800+ payment specialists. The markets in which PayU operates represent a potential consumer base of nearly 2.3 billion people and a huge growth potential for merchants. 

Job responsibilities:

  • Design data infrastructure, especially (but not limited to) for consumption by machine learning applications
  • Define the database architecture needed to combine and link data, and ensure integrity across different sources
  • Ensure the performance of data systems serving machine learning workloads, from customer-facing web and mobile applications built on cutting-edge open-source frameworks, to highly available RESTful services, to back-end Java-based systems
  • Work with large, fast, complex data sets to solve difficult, non-routine analysis problems, applying advanced data-handling techniques where needed
  • Build data pipelines, including implementing, testing, and maintaining the infrastructural components of the data engineering stack
  • Work closely with Data Engineers, ML Engineers, and SREs to gather data engineering requirements and to prototype, develop, validate, and deploy data science and machine learning solutions

Requirements to be successful in this role: 

  • Strong knowledge of and experience with Python, Pandas, data wrangling, ETL processes, statistics, data visualisation, data modelling, and Informatica.
  • Strong experience with scalable compute solutions such as Kafka and Snowflake
  • Strong experience with workflow management libraries and tools such as Airflow, AWS Step Functions, etc. (a minimal Airflow sketch follows this list)
  • Strong experience with data engineering practices (i.e. data ingestion pipelines and ETL)
  • A good understanding of machine learning methods, algorithms, pipelines, testing practices, and frameworks
  • (Preferred) MEng/MSc/PhD degree in computer science, engineering, mathematics, physics, or an equivalent field (preference: DS/AI)
  • Experience with designing and implementing tools that support sharing of data, code, and practices across organizations at scale
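
As a rough sketch of the Airflow-style workflow tooling named above (not PayU's actual pipeline), here is a minimal DAG wrapping a pandas extract-transform-load step; the file paths, DAG id, and task name are hypothetical:

from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_transform_load():
    # Hypothetical source and target paths.
    df = pd.read_csv("/data/raw/payments.csv")
    df = df.dropna(subset=["amount"])            # drop rows missing the key field
    df["amount_inr"] = df["amount"] * 83.0       # illustrative conversion only
    df.to_parquet("/data/processed/payments.parquet", index=False)

with DAG(
    dag_id="payments_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    etl = PythonOperator(task_id="extract_transform_load",
                         python_callable=extract_transform_load)

In practice each stage (ingest, validate, transform, load) would usually be its own task so failures can be retried independently.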
Job posted by
Vishakha Sonde

Data Engineer

at Hammoq

Founded 2020  •  Products & Services  •  20-100 employees  •  Raised funding
pandas
NumPy
Data engineering
Data Engineer
Apache Spark
PySpark
Image Processing
Scikit-Learn
Machine Learning (ML)
Python
Web Scraping
Remote, Indore, Ujjain, Hyderabad, Bengaluru (Bangalore)
5 - 8 yrs
₹5L - ₹15L / yr
  • Performs analytics to extract insights from the organization's raw historical data (a minimal pandas sketch follows this list).
  • Generates usable training datasets for any/all MV projects, with the help of annotators if needed.
  • Analyses user trends and identifies their biggest bottlenecks in the Hammoq workflow.
  • Tests the short/long-term impact of productized MV models on those trends.
  • Skills: NumPy, Pandas, Apache Spark, PySpark, ETL (mandatory).
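
A small, purely illustrative pandas sketch of the kind of historical-data analysis described above; the file name and columns are hypothetical, not Hammoq's real schema:

import pandas as pd

# Hypothetical export of historical workflow events.
events = pd.read_csv("workflow_events.csv", parse_dates=["started_at", "finished_at"])
events["duration_min"] = (events["finished_at"] - events["started_at"]).dt.total_seconds() / 60

# Average time spent per workflow step: slow steps point at bottlenecks.
bottlenecks = (events.groupby("step_name")["duration_min"]
                     .agg(["count", "mean"])
                     .sort_values("mean", ascending=False))
print(bottlenecks.head(10))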
Job posted by
Nikitha Muthuswamy

Big Data

at NoBroker

Founded 2014  •  Products & Services  •  100-1000 employees  •  Raised funding
Java
Spark
PySpark
Data engineering
Big Data
Hadoop
Selenium
Bengaluru (Bangalore)
1 - 3 yrs
₹6L - ₹8L / yr
  • Build, set up, and maintain some of the best data pipelines and MPP frameworks for our datasets
  • Translate complex business requirements into scalable technical solutions that meet data design standards; bring a strong understanding of analytics needs and the proactiveness to build generic solutions that improve efficiency
  • Build dashboards using self-service tools on Kibana and perform data analysis to support business verticals
  • Collaborate with multiple cross-functional teams
Job posted by
noor aqsa

Data Engineer (Only Immediate)

at StatusNeo

Founded 2020  •  Products & Services  •  100-1000 employees  •  Profitable
Data engineering
Data Engineer
Python
Big Data
Spark
Scala
Remote only
2 - 15 yrs
₹2L - ₹70L / yr
  • Proficiency in engineering practices and writing high-quality code, with expertise in at least one of Java, Scala, or Python
  • Experience with Big Data technologies (Hadoop/Spark/Hive/Presto/HBase) and streaming platforms (Kafka/NiFi/Storm); a minimal Spark-on-Kafka sketch follows this list
  • Experience with distributed search (Solr/Elasticsearch), in-memory data grids (Redis/Ignite), cloud-native apps, and Kubernetes is a plus
  • Experience in building REST services and APIs following best practices of service abstraction and microservices; experience with orchestration frameworks is a plus
  • Experience with Agile methodology and CI/CD: tool integration, automation, and configuration management
  • Being a committer in one of the open-source Big Data technologies (Spark, Hive, Kafka, YARN, Hadoop/HDFS) is an added advantage
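
For illustration only, a minimal PySpark Structured Streaming read from a Kafka topic, matching the Spark/Kafka items above; the broker address and topic name are placeholders, and the spark-sql-kafka package must be available on the classpath:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Placeholder broker and topic names.
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")
          .load())

# Kafka delivers bytes; cast the payload to string before downstream parsing.
payload = events.select(col("value").cast("string").alias("json_payload"))

# Write to the console sink for demonstration; a real job would write to a table or topic.
query = (payload.writeStream.format("console")
         .outputMode("append")
         .start())
query.awaitTermination()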
Job posted by
Alex P

GCP Data Engineer (WFH Permanently)

at Fresh Prints

Founded 2009  •  Products & Services  •  20-100 employees  •  Profitable
Google Cloud Platform (GCP)
SQL
PySpark
Data engineering
Big Data
Hadoop
Spark
Data migration
Python
Tableau
MS-Excel
Remote only
1 - 3 yrs
₹8L - ₹14L / yr

 

Fresh Prints (https://www.freshprints.com/home) is a New York-based custom apparel startup. We find incredible students and give them the working capital, training, and support to build the business at their schools. We have 400+ students who will do $15 million in sales over the next 12 months.


You’ll be focused on the next $50 million. Data is a product that can be used to drive team behaviors and generate revenue growth.

  • How do we use our data to drive up account value?
  • How do we develop additional revenue channels?
  • How do we increase operational efficiency?
  • How do we usher in the next stage at Fresh Prints?

Those are the questions the members of our cross-functional Growth Team work on every day. They do so not as data analysts, developers, or marketers, but as entrepreneurs, determined to drive the business forward.

You’d be our first dedicated Data Engineer. As such, you’ll touch every aspect of data at Fresh Prints. You’ll work with the rest of the Data Science team to sanitize the data systems, automate pipelines, and build test cases for post-batch and regular quality evaluation.

You will develop alert systems for fire drill activities, build an immediate cleanup plan, and document the evolution of the ingestion process. You will also assist in mining that data for insights and building visualizations that help the entire company utilize those insights.

This role reports to Abilash Reddy (https://www.linkedin.com/in/abilashls/), our Head of Data & CRM, and you will work with an extremely talented and professional team.

Responsibilities

  • Work closely with every team at Fresh Prints to uncover ways in which data shapes how effective they are
  • Design, build, and operationalize large-scale enterprise data solutions and applications using GCP data pipeline and automation services (a minimal load-job sketch follows this list)
  • Create, restructure, and manage large datasets, files, and systems
  • Take complete ownership of the implementation of data pipelines and hybrid data systems along with test cases as we scale
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
  • Monitor and maintain overall Tableau server health
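
A minimal sketch of the kind of GCP pipeline step listed in the responsibilities, using the google-cloud-bigquery client to load a CSV from Cloud Storage; the project, bucket, dataset, and table names are hypothetical:

from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Hypothetical bucket, dataset, and table names.
gcs_uri = "gs://freshprints-example-bucket/orders/2024-01-01.csv"
table_id = "example-project.analytics.orders"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,            # skip the header row
    autodetect=True,                # infer the schema from the file
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(gcs_uri, table_id, job_config=job_config)
load_job.result()                   # block until the load completes
print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}")

A production pipeline would typically wrap a step like this in an orchestrator and add schema and freshness checks before downstream dashboards read the table.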

Experience Required

  • 3+ years of experience in Cloud data ingestion and automation from a variety of sources using GCP services and building hybrid data architecture on top
  • 3+ years of strong experience in SQL & Python
  • 3+ years of experience with Excel and/or Google Sheets
  • 2+ years of experience with Tableau server or online
  • A successful history of manipulating, processing, and extracting value from large disconnected datasets
  • Experience in an agile development environment
  • Perfect English fluency is a must

Personal Attributes

  • Able to connect the dots between data and business value
  • Strong attention to detail 
  • Proactive. You believe it’s always on you to make sure anything you do is a success
  • In love with a challenge. You revel in solving problems and want a job that pushes you out of your comfort zone
  • Goal-oriented. You’re incredibly ambitious. You’re dedicated to a long-term vision of who you are and where you want to go
  • Open to change. You’re inspired by the endless ways in which everything we do can always be improved
  • Calm under pressure. You have a sense of urgency but channel it into productively working through any issues

Education

  • Google Cloud certified (preferred)
  • A bachelor's degree in computer science or information management is a strong plus

Compensation & Benefits

  • Competitive salary
  • Health insurance (India & Philippines)
  • Learning opportunities 
  • Working in a great culture

Job Location

  • This is a permanent WFH role and could be based in India or the Philippines. Candidates from other countries may be considered
  • We will have an office in Hyderabad, India, but working from the office is completely optional

Working Hours

  • 3:30 PM to 11:30 PM IST 

Fresh Prints is an equal employment opportunity employer and promotes diversity, actively encouraging people of all backgrounds and ages, LGBTQ+ people, and those with disabilities to apply.

Job posted by
Riza Amrin

AWS Data Engineer

at Advanced Technology to Solve Business Problems (A1)

Agency job
via Multi Recruit
Python
PySpark
Knowledge in AWS
Hyderabad
2 - 4 yrs
₹10L - ₹15L / yr
  • Desire to explore new technology and break new ground.
  • Are passionate about Open Source technology, continuous learning, and innovation.
  • Have the problem-solving skills, grit, and commitment to complete challenging work assignments and meet deadlines.

Qualifications

  • Engineer enterprise-class, large-scale deployments, and deliver Cloud-based Serverless solutions to our customers.
  • You will work in a fast-paced environment with leading microservice and cloud technologies, and continue to develop your all-around technical skills.
  • Participate in code reviews and provide meaningful feedback to other team members.
  • Create technical documentation.
  • Develop thorough Unit Tests to ensure code quality.

Skills and Experience

  • Advanced skills in troubleshooting and tuning AWS Lambda functions developed with Java and/or Python (a minimal handler sketch follows this list)
  • Experience with event-driven architecture design patterns and practices
  • Experience with database design and architecture principles, and strong SQL abilities
  • Experience with message brokers like Kafka and Kinesis
  • Experience with Hadoop, Hive, and Spark (either PySpark or Scala)
  • Demonstrated experience owning enterprise-class applications and delivering highly available distributed, fault-tolerant, globally accessible services at scale.
  • Good understanding of distributed systems.
  • Candidates will be self-motivated and display initiative, ownership, and flexibility.
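
For illustration, a minimal Python Lambda handler for an SNS trigger of the kind listed above; the DynamoDB table name and message shape are assumptions, not part of the role description:

import json
import boto3

# Hypothetical DynamoDB table used to persist incoming events.
table = boto3.resource("dynamodb").Table("payment-events")

def lambda_handler(event, context):
    # An SNS trigger delivers one or more records, each wrapping a message body.
    records = event.get("Records", [])
    for record in records:
        message = json.loads(record["Sns"]["Message"])
        table.put_item(Item={
            "event_id": message["id"],          # assumed message field
            "payload": json.dumps(message),     # keep the raw payload for auditing
        })
    return {"statusCode": 200, "processed": len(records)}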

 

Preferred Qualifications

  • AWS Lambda function development experience with Java and/or Python.
  • Lambda triggers such as SNS, SES, or cron.
  • Databricks
  • Cloud development experience with AWS services, including:
  • IAM
  • S3
  • EC2
  • AWS CLI
  • API Gateway
  • ECR
  • CloudWatch
  • Glue
  • Kinesis
  • DynamoDB
  • Java 8 or higher
  • ETL data pipeline building
  • Data Lake Experience
  • Python
  • Docker
  • MongoDB or similar NoSQL DB.
  • Relational Databases (e.g., MySQL, PostgreSQL, Oracle, etc.).
  • Gradle and/or Maven.
  • JUnit
  • Git
  • Scrum
  • Experience with Unix and/or macOS.
  • Immediate Joiners

Nice to have:

  • AWS / GCP / Azure Certification.
  • Cloud development experience with Google Cloud or Azure

 

Job posted by
Ranjini A R

Data Scientist

at IQVIA

Founded 1969  •  Products & Services  •  100-1000 employees  •  Profitable
Python
Scala
Spark
Big Data
Data Science
Remote, Kochi (Cochin)
1 - 5 yrs
₹4L - ₹10L / yr
Job Description Summary
Skill sets in the job profile:
1) Machine learning development using Python or Scala Spark
2) Knowledge of multiple ML algorithms such as random forest, XGBoost, RNN, CNN, transfer learning, etc. (a minimal random-forest sketch follows below)
3) Awareness of the typical challenges in machine learning implementations and their respective applications

Good to have:
1) Full-stack development or DevOps team experience
2) Cloud services (AWS, Cloudera), SaaS, PaaS
3) Big data tools and frameworks
4) SQL experience
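
A minimal scikit-learn random-forest sketch illustrating the algorithm family listed above; the synthetic dataset stands in for real features and is for demonstration only:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real feature matrix.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))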

Job posted by
Sony Shetty

Machine Learning Engineer

at Tradeindia.com - Infocom Network Ltd

Founded 1996  •  Services  •  100-1000 employees  •  Profitable
Machine Learning (ML)
Data engineering
Data Science
Big Data
Natural Language Processing (NLP)
Data Visualization
NCR (Delhi | Gurgaon | Noida)
2 - 5 yrs
₹4L - ₹10L / yr
Tradeindia is looking for Machine Learning Engineers to join its engineering team. The ideal candidate will have industry experience working on a recommendation engine and on a range of classification and optimization problems, e.g.:
  • Collaborative filtering/recommendation
  • User journey prediction
  • Search ranking
  • Text/sentiment classification or spam detection
  • Predictive analytics
(A minimal collaborative-filtering sketch follows this description.) The position will involve applying algorithm and programming skills to some of the most exciting and massive SME user data.

Qualification: M.Tech, B.Tech, M.C.A.
Location: New Delhi

About the company: Launched in 1996 to offer the Indian business community a platform to promote themselves globally, tradeindia.com has created a niche as India's largest B2B marketplace, offering comprehensive business solutions to the domestic and global business community through its wide array of online services, directory services, and facilitation of trade promotional events. Our portal is an ideal forum for buyers and sellers across the globe to interact and conduct business smoothly and effectively. With unmatched expertise in data acquisition and online promotion, Tradeindia subsumes a huge number of company profiles and product catalogs under 2,256 different product categories and sub-categories. It is well promoted on all major search engines and receives an average of 20.5 million hits per month. Tradeindia is maintained and promoted by INFOCOM NETWORK LTD. Today we have reached a database of 47,66,674 registered users (Jan 2019), and the company is growing on a titanic scale, with a considerable number of new users joining/registering every day, under the innovative vision and guidance of Mr. Bikky Khosla, CEO.
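
A minimal, purely illustrative item-based collaborative-filtering sketch of the recommendation work described above; the toy interaction matrix is invented for the example:

import numpy as np

# Toy user-item interaction matrix: rows are users, columns are items.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(ratings, axis=0, keepdims=True)
norms[norms == 0] = 1.0
item_sim = (ratings / norms).T @ (ratings / norms)

# Score unseen items for user 0 by similarity-weighted sums of their ratings.
user = ratings[0]
scores = item_sim @ user
scores[user > 0] = -np.inf  # do not re-recommend items already interacted with
print("Recommended item index for user 0:", int(np.argmax(scores)))

A production recommender would replace the dense matrix with sparse interactions and a factorization or learned model, but the scoring idea is the same.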
Job posted by
Sher Thapa

Data Scientist

at TintED

Founded 2018  •  Services  •  0-20 employees  •  Bootstrapped
Data Science
Python
R Programming
Remote, Kolkata
0 - 4 yrs
₹3L - ₹7L / yr
We aim to transform the recruiting industry.
Job posted by
Kumar Aniket