Machine Learning Engineer

at Zocket

Posted by Shraavani Tulshibagwale
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹15L / yr
Full time
Skills
Data Science
Machine Learning (ML)
Python
Big Data

Machine Learning Engineer at Zocket


We are looking for a curious Machine Learning Engineer to join our fast-growing tech team at Zocket!


About Zocket:

Zocket helps businesses create digital ads in under 30 seconds and grow digitally without any marketing expertise.

Currently, an SMB owner has only two options: employ a digital marketing agency or stay away from digital ads altogether. True to its mission, Zocket leverages AI to simplify digital marketing for 300 million+ small businesses around the globe.


You are ideal if you have:

  • Interest in working with a fast-growing startup
  • Strong communication and presentation skills
  • Ability to meet deadlines
  • Critical thinking abilities
  • Interest in working in a fast-paced environment
  • Desire for lots and lots of learning
  • Inclination towards working on diverse projects and making real contributions to the company

Requirements:


  • Bachelor's degree in Computer Science or any quantitative discipline (Statistics, Mathematics, Economics)
  • 3+ years of relevant experience
  • Experience working with languages like Python (mandatory) and R
  • Experience working with visualisation tools like Tableau and Power BI
  • Experience working with frameworks such as OpenCV, PyTorch, and TensorFlow
  • Prior experience in building and deploying ML systems using AWS (EC2, SageMaker); see the sketch after this list
  • Understanding of statistical concepts
  • Hands-on computer vision experience
  • Experience with MySQL is required
  • Bonus points if you have expertise with NLP
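For a flavour of the AWS (EC2, SageMaker) deployment experience asked for above, here is a minimal sketch, assuming a hypothetical, already-deployed SageMaker endpoint and an invented JSON payload; it illustrates the invocation pattern only, not Zocket's actual service.

```python
import json

import boto3

# Hypothetical endpoint name; a real one comes from your SageMaker deployment.
ENDPOINT_NAME = "ad-creative-classifier"

runtime = boto3.client("sagemaker-runtime")


def predict(features: dict) -> dict:
    """Send a JSON payload to a deployed SageMaker endpoint and parse the reply."""
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps(features),
    )
    return json.loads(response["Body"].read())


if __name__ == "__main__":
    # Invented feature payload, purely for illustration.
    print(predict({"headline_length": 42, "has_logo": True}))
```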

Apply Away!

About Zocket

Zocket is on a mission to transform how small businesses across the globe leverage digital marketing to reach more customers. The global SMB market is ~300 million+ strong and is at the cusp of digital disruption in all aspects. Zocket was founded by IIM-graduate, second-time founders after a successful exit from their first venture. Team members will work closely with the founders in achieving the mission and be part of a product-led growth journey that will be truly global. Be a part of the early Zocketeer team and grow along.
Founded: 2021
Type: Product
Size: 0-20 employees
Stage: Raised funding

Similar jobs

Event & Unstructured Data

at a company that provides both wholesale and retail funding

Agency job
via Multi Recruit
AWS Kinesis
Data engineering
AWS Lambda
DynamoDB
Data pipeline
Data governance
Data processing
Amazon Web Services (AWS)
Athena
Audio
Linux/Unix
Python
SQL
Weblogs
Mumbai
5 - 7 yrs
₹20L - ₹25L / yr
  • Key responsibility is to design and develop a data pipeline for real-time data integration, processing, and execution of the model (if required), exposing output via MQ / API / NoSQL DB for consumption (a minimal sketch of one such step follows this list)
  • Provide technical expertise to design efficient data ingestion solutions to store and process unstructured data such as documents, audio, images, weblogs, etc.
  • Develop API services to provide data as a service
  • Prototype solutions for complex data processing problems using AWS cloud-native services
  • Implement automated audit and quality assurance checks in the data pipeline
  • Document and maintain data lineage from various sources to enable data governance
  • Coordinate with BIU, IT, and other stakeholders to provide best-in-class data pipeline solutions, exposing data via APIs, loading into downstream systems, NoSQL databases, etc.
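To make the real-time requirement concrete, here is a minimal sketch, assuming a Kinesis-triggered AWS Lambda that decodes each record and persists it to a hypothetical DynamoDB table named events; it shows the pattern, not this team's actual pipeline.

```python
import base64
import json

import boto3

# Hypothetical table name; a real pipeline would take this from configuration.
table = boto3.resource("dynamodb").Table("events")


def handler(event, context):
    """Lambda entry point for a Kinesis trigger: decode, parse, persist."""
    for record in event["Records"]:
        # Kinesis delivers each payload base64-encoded.
        payload = base64.b64decode(record["kinesis"]["data"])
        item = json.loads(payload)
        # Note: real code must convert floats to Decimal before DynamoDB writes.
        table.put_item(Item=item)
    return {"processed": len(event["Records"])}
```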

Skills

  • Programming experience using Python & SQL
  • Extensive working experience in Data Engineering projects, using AWS Kinesis, AWS S3, DynamoDB, EMR, Lambda, Athena, etc. for event processing
  • Experience and expertise in implementing complex data pipelines
  • Strong familiarity with the AWS toolset for storage and processing; able to recommend the right tools/solutions to address specific data processing problems
  • Hands-on experience in unstructured (audio, image, documents, weblogs, etc.) data processing
  • Good analytical skills with the ability to synthesize data to design and deliver meaningful information
  • Know-how of any NoSQL DB (DynamoDB, MongoDB, CosmosDB, etc.) will be an advantage
  • Ability to understand business functionality, processes, and flows
  • A good combination of technical and interpersonal skills, with strong written and verbal communication; detail-oriented, with the ability to work independently

Functional knowledge

  • Real-time Event Processing
  • Data Governance & Quality assurance
  • Containerized deployment
  • Linux
  • Unstructured Data Processing
  • AWS Toolsets for Storage & Processing
  • Data Security

 

Job posted by
Sapna Deb

Senior Consultant

at An IT Services Major, hiring for a leading insurance player.

Agency job
via Indventur Partner
Big Data
Hadoop
Apache Kafka
Apache Hive
Microsoft Windows Azure
Hbase
Chennai
3 - 5 yrs
₹5L - ₹10L / yr

Client: An IT services major, hiring for a leading insurance player.

Position: Senior Consultant

Job Description:

  • Azure administration (senior consultant) with HDInsight (Big Data)

 

Skills and Experience

  • Microsoft Azure Administrator certification
  • Big Data project experience on the Azure HDInsight stack, with processing frameworks such as Spark, Hadoop, Hive, Kafka, or HBase
  • Preferred: insurance or BFSI domain experience
  • 3 to 5 years of experience is required
Job posted by
Vanshika kaur

Software Engineer (Big Data)

at an OTT platform

Agency job
via Vmultiply solutions
Big Data
Apache Kafka
Kibana
Elastic Search
Logstash
Hyderabad
6 - 8 yrs
₹8L - ₹15L / yr
  • Passionate data engineer with the ability to manage data coming from different sources
  • Should design and operate data pipelines
  • Build and manage an analytics platform using Elasticsearch, Redshift, and MongoDB (a minimal indexing sketch follows this list)
  • Strong programming fundamentals in data structures and algorithms
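To ground the Elasticsearch requirement, here is a minimal indexing-and-search sketch with the official Python client (8.x API); the cluster URL, index name, and document are assumptions for illustration.

```python
from elasticsearch import Elasticsearch

# Hypothetical local cluster and index name.
es = Elasticsearch("http://localhost:9200")

# Index one invented document, then run a simple match query against it.
es.index(index="watch-events", document={"title": "Stream analytics demo", "views": 1024})
es.indices.refresh(index="watch-events")  # make the document searchable immediately

hits = es.search(index="watch-events", query={"match": {"title": "analytics"}})
print(hits["hits"]["total"])
```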
Job posted by
HR Lakshmi

Data Engineer

at Healthifyme

Founded 2012  •  Products & Services  •  100-1000 employees  •  Raised funding
Python
SQL
Data engineering
Big Data
Data Warehouse (DWH)
ETL
Apache Spark
Flink
Bengaluru (Bangalore)
3 - 4 yrs
₹18L - ₹35L / yr

Responsibilities:

  • Design, construct, install, test, and maintain data pipelines and data management systems.
  • Ensure that all systems meet the business/company requirements as well as industry practices.
  • Integrate up-and-coming data management and software engineering technologies into existing data structures.
  • Build processes for data mining, data modeling, and data production.
  • Create custom software components and analytics applications.
  • Collaborate with members of your team (e.g., Data Architects, the Software team, Data Scientists) on the project's goals.
  • Recommend different ways to constantly improve data reliability and quality.

 

Requirements:

  • Experience in a related field, with real-world skills and references from former employers.
  • Familiarity with data warehouses like Redshift, BigQuery, and Athena.
  • Familiarity with data processing systems like Flink, Spark, and Storm.
  • Proficiency in Python and SQL, backed by work experience and demonstrable technical expertise.
  • A Master's degree in computer engineering or computer science can help fine-tune your skills on the job (a Master's isn't required, but it is always appreciated).
  • Intellectual curiosity to find new and unusual ways to solve data management issues.
  • Ability to approach data organization challenges while keeping an eye on what's important.
  • Basic data science knowledge is a must; you should understand a bit of analytics. (A minimal PySpark sketch of the kind of pipeline work involved follows this list.)
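Since the requirements name Spark alongside warehouses like Redshift and BigQuery, here is a minimal PySpark sketch of a typical batch aggregation; the S3 paths and column names are hypothetical, purely to illustrate the kind of pipeline work involved.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-active-users").getOrCreate()

# Hypothetical event log; a real job would read from a lake or warehouse export.
events = spark.read.json("s3://example-bucket/events/*.json")

# Aggregate distinct users per day, a classic warehouse-bound mart table.
daily = (
    events.withColumn("day", F.to_date("timestamp"))
    .groupBy("day")
    .agg(F.countDistinct("user_id").alias("active_users"))
)

daily.write.mode("overwrite").parquet("s3://example-bucket/marts/daily_active_users")
```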
Job posted by
Jaya Harjai

Big Data Developer (Spark + Python)

at Simplifai Cognitive Solutions Pvt Ltd

Founded 2017  •  Product  •  100-500 employees  •  Bootstrapped
Spark
Big Data
Apache Spark
Python
PySpark
Hadoop
Pune
2 - 15 yrs
₹10L - ₹30L / yr

We are looking for a skilled Senior/Lead Big Data Engineer to join our team. The role is part of the research and development team, where your enthusiasm and knowledge will make you our technical evangelist for the development of our inspection technology and products.

 

At Elop we are developing product lines for sustainable infrastructure management using our own patented technology for ultrasound scanners, combining this with other sources to form a holistic overview of the concrete structure. At Elop we will provide you with world-class colleagues who are highly motivated to position the company as an international standard for structural health monitoring. With the right character you will be professionally challenged and developed.

This position requires travel to Norway.

 

Elop is a sister company of Simplifai, and the two are co-located in all geographic locations.

https://elop.no/

https://www.simplifai.ai/en/


Roles and Responsibilities

  • Define technical scope and objectives through research and participation in requirements gathering and definition of processes
  • Ingest and process data from data sources (Elop Scanner) in raw format into the Big Data ecosystem
  • Real-time data feed processing using the Big Data ecosystem
  • Design, review, implement, and optimize data transformation processes in the Big Data ecosystem
  • Test and prototype new data integration/processing tools, techniques, and methodologies
  • Conversion of MATLAB code into Python/C/C++ (a small translation example follows this list)
  • Participate in overall test planning for application integrations, functional areas, and projects
  • Work with cross-functional teams in an Agile/Scrum environment to ensure a quality product is delivered
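As a flavour of the MATLAB-to-Python conversion mentioned in the responsibilities, here is a minimal sketch translating a hypothetical MATLAB smoothing step, y = filter(ones(1,5)/5, 1, x), into NumPy/SciPy; the scanner-signal context is assumed.

```python
import numpy as np
from scipy import signal

# MATLAB:  y = filter(ones(1, 5) / 5, 1, x);   % 5-point moving average
# scipy.signal.lfilter has the same (b, a, x) semantics as MATLAB's filter().
def moving_average(x: np.ndarray, window: int = 5) -> np.ndarray:
    b = np.ones(window) / window  # FIR coefficients
    return signal.lfilter(b, 1.0, x)


if __name__ == "__main__":
    # Invented noisy signal standing in for scanner samples.
    samples = np.sin(np.linspace(0, 4 * np.pi, 50)) + 0.1 * np.random.randn(50)
    print(moving_average(samples)[:10])
```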

Desired Candidate Profile

  • Bachelor's degree in Statistics, Computer Science, or equivalent
  • 7+ years of experience in the Big Data ecosystem, especially Spark, Kafka, Hadoop, and HBase.
  • 7+ years of hands-on experience in Python/Scala is a must.
  • Experience architecting big data applications is needed.
  • Excellent analytical and problem-solving skills
  • Strong understanding of data analytics and data visualization; must be able to help the development team with visualization of data.
  • Experience with signal processing is a plus.
  • Experience working on client-server architecture is a plus.
  • Knowledge of database technologies like RDBMS, graph DBs, document DBs, Apache Cassandra, and OpenTSDB
  • Good communication skills, written and oral, in English

We can Offer

  • An everyday life of exciting and challenging tasks, developing socially beneficial solutions
  • Being part of the company's research and development team, creating unique and innovative products
  • Colleagues with world-class expertise, and an organization that has ambitions and is highly motivated to position the company as an international player in maintenance support and monitoring of critical infrastructure!
  • A good working environment with skilled and committed colleagues in an organization with short decision paths
  • Professional challenges and development
Job posted by
Priyanka Malani

Computer Vision Scientist at Checko

at TransPacks Technologies IIT Kanpur

Founded 2017  •  Product  •  20-100 employees  •  Raised funding
Computer Vision
C++
Java
Algorithms
Data Structures
Performance Evaluation
Debugging
Data Science
Hyderabad
0 - 4 yrs
₹3L - ₹6L / yr

Responsibilities

  • Own the design, development, testing, deployment, and craftsmanship of the team's infrastructure and systems, capable of handling massive amounts of requests with high reliability and scalability
  • Leverage deep and broad technical expertise to mentor engineers and provide leadership in resolving complex technology issues
  • Entrepreneurial and out-of-the-box thinking, essential for a technology startup
  • Guide the team in unit-testing code for robustness, including edge cases, usability, and general reliability

 

Requirements

  • In-depth understanding of image processing algorithms, pattern recognition methods, and rule-based classifiers
  • Experience in feature extraction, object recognition and tracking, image registration, noise reduction, image calibration, and correction
  • Ability to understand, optimize, and debug imaging algorithms
  • Understanding of and experience with the OpenCV library (a minimal feature-extraction sketch follows this list)
  • Fundamental understanding of mathematical techniques involved in ML and DL schemas (instance-based methods, boosting methods, PGMs, neural networks, etc.)
  • Thorough understanding of state-of-the-art DL concepts (sequence modeling, attention, convolution, etc.), along with the knack to imagine new schemas that work for the given data
  • Understanding of engineering principles and a clear grasp of data structures and algorithms
  • Experience writing production-level code in either C++ or Java
  • Experience with technologies/libraries such as pandas, NumPy, and SciPy
  • Experience with TensorFlow and scikit-learn
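As a concrete taste of the OpenCV experience this role asks for, here is a minimal feature-extraction sketch using ORB keypoints; the input file name is hypothetical.

```python
import cv2

# Hypothetical input image; any local image file will do.
img = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise SystemExit("sample.jpg not found")

# ORB: a classic keypoint detector/descriptor, one example of feature extraction.
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(img, None)

print(f"detected {len(keypoints)} keypoints; "
      f"descriptor shape: {None if descriptors is None else descriptors.shape}")
```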
Job posted by
Pranav Asthana

Senior Systems Engineer – Big Data

at Couture.ai

Founded 2017  •  Product  •  20-100 employees  •  Profitable
Big Data
Hadoop
DevOps
Apache Spark
Spark
Shell Scripting
Docker
Kubernetes
Chef
Ambari
Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹10L / yr
Skills Requirements

  • Knowledge of Hadoop ecosystem installation, initial configuration, and performance tuning.
  • Expertise with Apache Ambari, Spark, Unix shell scripting, Kubernetes, and Docker
  • Knowledge of Python would be desirable.
  • Experience with HDP Manager/clients and various dashboards.
  • Understanding of Hadoop security (Kerberos, Ranger, and Knox), encryption, and data masking.
  • Experience with automation/configuration management using Chef, Ansible, or an equivalent.
  • Strong experience with any Linux distribution.
  • Basic understanding of network technologies, CPU, memory, and storage.
  • Database administration is a plus.

Qualifications and Education Requirements

  • 2 to 4 years of experience with, and detailed knowledge of, core Hadoop components, solutions, and dashboards running on Big Data technologies such as Hadoop/Spark.
  • Bachelor's degree or equivalent in Computer Science, Information Technology, or related fields.
Job posted by
Rajesh Kumar

Data Engineer

at Nisum consulting

Founded 2000  •  Products & Services  •  100-1000 employees  •  Profitable
Big Data
Hadoop
Spark
Apache Kafka
Scala
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
Python
Hyderabad
4 - 12 yrs
₹1L - ₹20L / yr
  • 5+ years of experience in a Data Engineer role
  • Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
  • Experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases such as Cassandra.
  • Experience with AWS cloud services: EC2, EMR, Athena
  • Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
  • Advanced SQL knowledge and experience working with relational databases, query authoring (SQL) as well as familiarity with unstructured datasets.
  • Deep problem-solving skills to perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Job posted by
Sameena Shaik

NLP Engineer - Artificial Intelligence

at Artivatic.ai

Founded 2017  •  Product  •  20-100 employees  •  Raised funding
Artificial Intelligence (AI)
Python
Natural Language Processing (NLP)
Deep Learning
Machine Learning (ML)
Java
Scala
Natural Language Toolkit (NLTK)
Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹10L / yr
We at Artivatic are seeking a passionate, talented, and research-focused natural language processing engineer with a strong machine learning and mathematics background to help build industry-leading technology. The ideal candidate will have research/implementation experience in modeling and developing NLP tools, and experience working with machine learning/deep learning algorithms.

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Mathematics, or a related field, with specialization in natural language processing, machine learning, or deep learning.
  • A publication record in conferences/journals is a plus.
  • 2+ years of working/research experience building NLP-based solutions is preferred.

Required Skills:

  • Hands-on experience building NLP models using NLP libraries and toolkits like NLTK, Stanford NLP, etc. (a minimal NLTK sketch follows this listing)
  • Good understanding of rule-based, statistical, and probabilistic NLP techniques.
  • Good knowledge of NLP approaches and concepts like topic modeling, text summarization, semantic modeling, named entity recognition, etc.
  • Good understanding of machine learning and deep learning algorithms.
  • Good knowledge of data structures and algorithms.
  • Strong programming skills in Python/Java/Scala/C/C++.
  • Strong problem-solving and logical skills.
  • A go-getter attitude with a willingness to learn new technologies.
  • Well versed in software design paradigms and good development practices.

Responsibilities:

  • Developing novel algorithms and modeling techniques to advance the state of the art in natural language processing.
  • Developing NLP-based tools and solutions end to end.
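For a flavour of the NLTK toolkit named in the skills, here is a minimal sketch running tokenization, POS tagging, and named-entity chunking; the sentence is illustrative, and model package names can vary across NLTK versions.

```python
import nltk

# One-time model downloads (names as in classic NLTK releases).
for pkg in ("punkt", "averaged_perceptron_tagger", "maxent_ne_chunker", "words"):
    nltk.download(pkg, quiet=True)

sentence = "Artivatic is building AI products in Bengaluru."
tokens = nltk.word_tokenize(sentence)      # split into word tokens
tagged = nltk.pos_tag(tokens)              # part-of-speech tags
tree = nltk.ne_chunk(tagged)               # named-entity chunks as a tree
print(tree)
```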
Job posted by
Layak Singh

Big Data Developer

at GeakMinds Technologies Pvt Ltd

Founded 2011  •  Services  •  100-1000 employees  •  Profitable
Hadoop
Big Data
HDFS
Apache Sqoop
Apache Flume
Apache HBase
Apache Kafka
Chennai
1 - 5 yrs
₹1L - ₹6L / yr
  • Looking for a Big Data Engineer with 3+ years of experience.
  • Hands-on experience with MapReduce-based platforms, like Pig, Spark, and Shark.
  • Hands-on experience with data pipeline tools like Kafka, Storm, and Spark Streaming (a minimal Kafka-to-Spark sketch follows this listing).
  • Store and query data with Sqoop, Hive, MySQL, HBase, Cassandra, MongoDB, Drill, Phoenix, and Presto.
  • Hands-on experience managing Big Data on a cluster with HDFS and MapReduce.
  • Handle streaming data in real time with Kafka, Flume, Spark Streaming, Flink, and Storm.
  • Experience with Azure cloud, Cognitive Services, and Databricks is preferred.
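As a minimal illustration of the Kafka-plus-Spark-Streaming combination listed above, the sketch below tails a hypothetical clickstream topic with Spark Structured Streaming; the broker address and topic are assumptions, and the spark-sql-kafka connector package must be on the classpath.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

# Hypothetical broker and topic names.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Kafka values arrive as bytes; cast to string before any parsing.
events = stream.selectExpr("CAST(value AS STRING) AS raw")

# Echo the stream to the console; a real job would parse and persist it.
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```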
Job posted by
John Richardson