Data Engineer at GreedyGame

Posted by Debdutta Pal
2 - 4 yrs
₹10L - ₹12L / yr
Bengaluru (Bangalore)
Skills
Python
MySQL
Data Science
NOSQL Databases
GreedyGame is looking for a data scientist who will help us make sense of the vast amount of available data in order to make smarter decisions and develop high-quality products. Your primary focus will be using data mining techniques, statistical analysis, and machine learning to build high-quality prediction systems and strong consumer engagement profiles.

Responsibilities
- Build the statistical models and heuristics needed to predict, optimize, and guide various aspects of our business based on available data
- Interact with product and operations teams to identify gaps, questions, and issues for data analysis and experiments
- Develop and code software programs and algorithms, and create automated processes that cleanse, integrate, and evaluate large datasets from multiple sources
- Create systems that turn user-behavior data into actionable insights, and convey these insights to product and operations teams from time to time
- Help redefine the ad-viewing experience for consumers on a global scale

Skills Required
- Coding experience in Python, MySQL, and NoSQL, and experience building prototypes for algorithms
- Comfortable and willing to learn any machine learning algorithm, reading research papers and delving deep into the maths
- Passionate and curious to learn the latest trends, methods, and technologies in this field

What's in it for you?
- Opportunity to be a part of the big disruption we are creating in the ad-tech space
- Work with complete autonomy and take on multiple responsibilities
- Work in a fast-paced environment with uncapped opportunities to learn and grow
- Office in one of the most happening places in India
- Amazing colleagues, weekly lunches, and beer on Fridays!

What we are building:
GreedyGame is a platform that enables blending of ads within the mobile gaming experience using assets like backgrounds, characters, and power-ups.
It helps advertisers engage audiences while they are playing games, empowers game developers to monetize their game-development efforts through non-intrusive advertising, and allows gamers to enjoy gaming content without having to deal with distracting advertising.
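The "cleanse, integrate and evaluate large datasets from multiple sources" responsibility can be sketched in miniature with the standard library. The field names (`user_id`, `clicks`, `session_minutes`) and the two toy sources are hypothetical; a production version would run over far larger datasets with a proper data framework:

```python
from collections import defaultdict

def cleanse(record):
    """Normalize a raw record: strip whitespace, lowercase keys, drop empty values."""
    return {k.strip().lower(): v.strip() for k, v in record.items() if v and v.strip()}

def integrate(*sources, key="user_id"):
    """Merge records from multiple sources into per-user profiles.
    Earlier sources take precedence; later sources only fill in missing fields."""
    merged = defaultdict(dict)
    for source in sources:
        for record in source:
            clean = cleanse(record)
            if key in clean:                       # skip records with no usable key
                merged[clean[key]] = {**clean, **merged[clean[key]]}
    return dict(merged)

# Two hypothetical feeds with inconsistent keys and stray whitespace:
ad_logs = [{"user_id": "42", "clicks": "3"}]
game_events = [{"User_ID ": "42", "session_minutes": " 17 "},
               {"user_id": "", "session_minutes": "5"}]   # unusable: empty key
profiles = integrate(ad_logs, game_events)
# profiles["42"] combines fields from both sources
```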
About GreedyGame

Founded: 2013
Stage: Raised funding
About

GreedyGame is a platform that enables blending of ads within the mobile gaming experience using assets like backgrounds, characters, and power-ups. It helps advertisers engage audiences while they are playing games, empowers game developers to monetize their game-development efforts through non-intrusive advertising, and allows gamers to enjoy gaming content without having to deal with distracting advertising.

 

 

Connect with the team: Shreyoshi Ghosh, Vinod K, Srilakshmi Yadavalli, Debdutta Pal, Rabya Khan

Similar jobs

Episource
Posted by Ahamed Riaz
Mumbai
5 - 12 yrs
₹18L - ₹30L / yr
Big Data
Python
Amazon Web Services (AWS)
Serverless
DevOps
+4 more

ABOUT EPISOURCE:


Episource has devoted more than a decade to building solutions for risk adjustment to measure healthcare outcomes. As one of the leading companies in healthcare, we have helped numerous clients optimize their medical records, data, and analytics to enable better documentation of care for patients with chronic diseases.


The backbone of our consistent success has been our obsession with data and technology. At Episource, all of our strategic initiatives start with the question - how can data be “deployed”? Our analytics platforms and datalakes ingest huge quantities of data daily, to help our clients deliver services. We have also built our own machine learning and NLP platform to infuse added productivity and efficiency into our workflow. Combined, these build a foundation of tools and practices used by quantitative staff across the company.


What’s our poison, you ask? We work with most of the popular frameworks and technologies like Spark, Airflow, Ansible, Terraform, Docker, and ELK. For machine learning and NLP, we are big fans of Keras, spaCy, scikit-learn, pandas, and NumPy. AWS and serverless platforms help us stitch these together to stay ahead of the curve.


ABOUT THE ROLE:


We’re looking to hire someone to help scale Machine Learning and NLP efforts at Episource. You’ll work with the team that develops the models powering Episource’s product focused on NLP driven medical coding. Some of the problems include improving our ICD code recommendations, clinical named entity recognition, improving patient health, clinical suspecting and information extraction from clinical notes.


This is a role for highly technical data engineers who combine outstanding oral and written communication skills, and the ability to code up prototypes and productionalize using a large range of tools, algorithms, and languages. Most importantly they need to have the ability to autonomously plan and organize their work assignments based on high-level team goals.


You will be responsible for setting an agenda to develop and ship data-driven architectures that positively impact the business, working with partners across the company including operations and engineering. You will use research results to shape strategy for the company and help build a foundation of tools and practices used by quantitative staff across the company.


During the course of a typical day with our team, expect to work on one or more projects around the following:


1. Create and maintain optimal data pipeline architectures for ML

2. Develop a strong API ecosystem for ML pipelines

3. Build CI/CD pipelines for ML deployments using GitHub Actions, Travis, Terraform, and Ansible

4. Design and develop distributed, high-volume, high-velocity multi-threaded event processing systems

5. Apply software engineering best practices across the development lifecycle: coding standards, code reviews, source management, build processes, testing, and operations

6. Deploy data pipelines in production using Infrastructure-as-Code platforms

7. Design scalable implementations of the models developed by our Data Science teams

8. Work on big data and distributed ML with PySpark on AWS EMR, and more!
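Item 4, multi-threaded event processing, can be illustrated with a toy worker pool built on Python's stdlib `queue`. The event shape (`{"type": ...}`) is invented for the sketch; a real high-velocity system would replace the in-memory queue with a streaming backbone such as Kafka or Kinesis:

```python
import queue
import threading

def worker(in_q, results, lock):
    """Drain events until a None sentinel arrives, tallying counts per event type."""
    while True:
        event = in_q.get()
        if event is None:          # sentinel: this worker is done
            return
        with lock:                 # shared dict needs a lock across workers
            results[event["type"]] = results.get(event["type"], 0) + 1

def process_events(events, n_workers=4):
    """Fan events out to a small worker pool and return aggregated counts."""
    in_q = queue.Queue()
    results, lock = {}, threading.Lock()
    threads = [threading.Thread(target=worker, args=(in_q, results, lock))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for e in events:
        in_q.put(e)
    for _ in threads:
        in_q.put(None)             # one sentinel per worker
    for t in threads:
        t.join()
    return results

counts = process_events([{"type": "claim"}] * 10 + [{"type": "chart"}] * 5)
# counts == {"claim": 10, "chart": 5}
```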



BASIC REQUIREMENTS 


1. Bachelor’s degree or greater in Computer Science, IT, or related fields

2. Minimum of 5 years of experience in cloud, DevOps, MLOps, and data projects

3. Strong experience with bash scripting, Unix environments, and building scalable/distributed systems

4. Experience with automation/configuration management using Ansible, Terraform, or equivalent

5. Very strong experience with AWS and Python

6. Experience building CI/CD systems

7. Experience with containerization technologies like Docker, Kubernetes, ECS, EKS, or equivalent

8. Ability to build and manage application and performance monitoring processes

Read more
Top startup of India -  News App
Noida
2 - 5 yrs
₹20L - ₹35L / yr
Linux/Unix
Python
Hadoop
Apache Spark
MongoDB
+4 more
Responsibilities
● Create and maintain optimal data pipeline architecture.
● Assemble large, complex data sets that meet functional/non-functional business requirements.
● Build and optimize ‘big data’ pipelines, architectures, and data sets.
● Maintain, organize, and automate data processes for various use cases.
● Identify trends, do follow-up analysis, and prepare visualizations.
● Create daily, weekly, and monthly reports of product KPIs.
● Create informative, actionable, and repeatable reporting that highlights relevant business trends and opportunities for improvement.
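The daily/weekly KPI reporting responsibility can be sketched as a simple roll-up over raw app events. The event fields (`day`, `user`) and the sample data are hypothetical; in practice this aggregation would run in Spark or BigQuery over the full event stream:

```python
from collections import defaultdict

def daily_kpis(events):
    """Roll raw app events up into a per-day report: total reads and unique readers."""
    report = defaultdict(lambda: {"reads": 0, "readers": set()})
    for e in events:
        day = report[e["day"]]
        day["reads"] += 1
        day["readers"].add(e["user"])
    # Replace the reader sets with counts for the final report.
    return {d: {"reads": v["reads"], "unique_readers": len(v["readers"])}
            for d, v in report.items()}

events = [
    {"day": "2024-05-01", "user": "u1"},
    {"day": "2024-05-01", "user": "u1"},   # repeat read by the same user
    {"day": "2024-05-01", "user": "u2"},
    {"day": "2024-05-02", "user": "u3"},
]
print(daily_kpis(events))
# {'2024-05-01': {'reads': 3, 'unique_readers': 2}, '2024-05-02': {'reads': 1, 'unique_readers': 1}}
```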

Required Skills and Experience:
● 2-5 years of work experience in data analytics, including analyzing large data sets.
● BTech in Mathematics/Computer Science.
● Strong analytical, quantitative, and data interpretation skills.
● Hands-on experience with Python, Apache Spark, Hadoop, NoSQL databases (MongoDB preferred), and Linux is a must.
● Experience building and optimizing ‘big data’ pipelines, architectures, and data sets.
● Experience with Google Cloud data analytics products such as BigQuery, Dataflow, Dataproc, etc. (or similar cloud-based platforms).
● Experience working within a Linux computing environment and using command-line tools, including knowledge of shell/Python scripting for automating common tasks.
● Previous experience working at startups and/or in fast-paced environments.
● Previous experience as a data engineer or in a similar role.
Read more
Internshala
Posted by Sarvari Juneja
Gurugram
3 - 5 yrs
₹15L - ₹19L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+2 more

Internshala is a dot com business with the heart of dot org.

We are a technology company on a mission to equip students with relevant skills & practical exposure through internships, fresher jobs, and online trainings. Imagine a world full of freedom and possibilities. A world where you can discover your passion and turn it into your career. A world where your practical skills matter more than your university degree. A world where you do not have to wait till 21 to taste your first work experience (and get a rude shock that it is nothing like you had imagined it to be). A world where you graduate fully assured, fully confident, and fully prepared to stake a claim on your place in the world.

At Internshala, we are making this dream a reality!

👩🏻‍💻 Your responsibilities would include-

  • Designing, implementing, testing, deploying, and maintaining stable, secure, and scalable data engineering solutions and pipelines in support of data and analytics projects, including integrating new sources of data into our central data warehouse and moving data out to applications and affiliates
  • Developing analytical tools and programs that help analyze and organize raw data
  • Evaluating business needs and objectives
  • Conducting complex data analysis and reporting on results
  • Collaborating with data scientists and architects on several projects
  • Maintaining reliability of the system and being on-call for mission-critical systems
  • Performing infrastructure cost analysis and optimization
  • Generating architecture recommendations and implementing them
  • Designing, building, and maintaining data architecture and warehousing using AWS services
  • Optimizing ETL and designing, coding, and tuning big data processes using Apache Spark, R, Python, C#, and/or similar technologies
  • Planning and implementing disaster recovery for ETL and data-related services
  • Defining actionable KPIs and configuring monitoring/alerting
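The last responsibility, defining KPIs and configuring monitoring/alerting, reduces at its core to comparing current metric values against thresholds. A minimal sketch follows; the metric names (`etl_lag_minutes`, `error_rate`) and limits are invented, and a real setup would wire this into a monitoring stack rather than a plain function:

```python
def check_kpis(metrics, thresholds):
    """Compare current metric values against alerting thresholds.
    Returns the list of alerts to raise as (metric, value, limit) tuples."""
    alerts = []
    for name, limit in thresholds.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append((name, value, limit))
    return alerts

# Hypothetical KPI thresholds and a current snapshot:
thresholds = {"etl_lag_minutes": 30, "error_rate": 0.01}
metrics = {"etl_lag_minutes": 45, "error_rate": 0.004}
print(check_kpis(metrics, thresholds))
# [('etl_lag_minutes', 45, 30)]
```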

🍒 You will get-

  • A chance to build and lead an awesome team working on one of the best recruitment and online training products in the world that impacts millions of lives for the better
  • Awesome colleagues & a great work environment
  • Loads of autonomy and freedom in your work

💯 You fit the bill if-

  • You have the zeal to build something from scratch
  • You have experience in Data engineering and infrastructure work for analytical and machine learning processes.
  • You have experience in a Linux environment and familiarity with writing shell scripts using Python or any other scripting language
  • You have 3-5 years of experience as a Data Engineer or similar software engineering role
Read more
Amagi Media Labs
Posted by Rajesh C
Chennai
15 - 18 yrs
Best in industry
Data architecture
Architecture
Data Architect
Architect
Java
+5 more
Job Title: Data Architect
Job Location: Chennai
Job Summary

The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a data architecture strategy across various Data Lake platforms. You will help develop reference architectures and roadmaps to build highly available, scalable, and distributed data platforms using cloud-based solutions to process high-volume, high-velocity, and widely varied structured and unstructured data. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast's industry-leading journalism.
Primary Responsibilities

The Data Architect is expected to bring:
• Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
• Enterprise-scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management, and data quality.
• Extensive knowledge and experience in architecting modern data integration frameworks and highly scalable distributed systems using open-source and emerging data architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities (highly desirable).
• Expert ability to evaluate, prototype, and recommend data solutions and vendor technologies and platforms.
• Proven experience in relational, NoSQL, and ELT/ETL technologies and in-memory databases.
• Experience with DevOps, Continuous Integration, and Continuous Delivery technologies (desirable).
• 15+ years of data solution architecture, design, and development delivery experience.
• Solid experience in Agile methodologies (Kanban and Scrum).
Required Skills
• Very strong experience building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases on cloud.
• Proven leadership skills; demonstrated ability to mentor, influence, and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, DataStage, etc.)/ELT, and data integration technologies.
• Experience in any one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and Data Lakes is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS, etc.
• Understanding of Hadoop architecture and Hive SQL.
• Knowledge of at least one workflow orchestration tool.
• Understanding of Agile frameworks and delivery.
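The workflow-orchestration requirement boils down to running tasks in dependency order. As a toy illustration (orchestrators like Airflow add scheduling, retries, and monitoring on top of this idea), Python's stdlib `graphlib` can order a hypothetical nightly warehouse load; the task names are invented:

```python
from graphlib import TopologicalSorter

# A minimal dependency graph for a nightly warehouse load:
# each task maps to the tasks it depends on.
pipeline = {
    "extract_events": [],
    "extract_users": [],
    "transform": ["extract_events", "extract_users"],
    "load_warehouse": ["transform"],
    "refresh_dashboards": ["load_warehouse"],
}

order = list(TopologicalSorter(pipeline).static_order())
# every task appears after all of its dependencies
print(order)
```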

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure to workflow orchestration tools like Airflow is a plus
● Exposure to any one NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of digital web events, ad streams, and context models
About Condé Nast

CONDÉ NAST INDIA (DATA)

Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms, generating a staggering amount of user data in the process. Condé Nast made the right move to invest heavily in understanding this data and formed a whole new Data team entirely dedicated to data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel process innovation, further content enrichment, and increase market revenue. The Data team aims to create a company culture where data is the common language and to facilitate an environment where insights shared in real time can improve performance.

The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost online revenue. We are broadly divided into four groups: Data Intelligence, Data Engineering, Data Science, and Operations (including Product and Marketing Ops and Client Services), along with Data Strategy and Monetization. The teams build capabilities and products to create data-driven solutions for better audience engagement.

What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
Read more
MOBtexting
Posted by Nandhini Beke
Bengaluru (Bangalore)
3 - 4 yrs
₹5L - ₹6L / yr
MySQL
MySQL DBA
Data architecture
SQL
Cassandra
+1 more

Job Description

 

Experience: 3+ yrs

We are looking for a MySQL DBA who will be responsible for ensuring the performance, availability, and security of clusters of MySQL instances. You will also be responsible for database design and architecture, orchestrating upgrades, backups, and provisioning of database instances. You will also work in tandem with the other teams, preparing documentation and specifications as required.

 

Responsibilities:

Database design and data architecture

Provision MySQL instances, both in clustered and non-clustered configurations

Ensure performance, security, and availability of databases

Prepare documentation and specifications

Handle common database procedures, such as upgrade, backup, recovery, migration, etc.

Profile server resource usage, optimize and tweak as necessary

 

Skills and Qualifications:

Proven expertise in database design and data architecture for large scale systems

Strong proficiency in MySQL database management

Decent experience with recent versions of MySQL

Understanding of MySQL's underlying storage engines, such as InnoDB and MyISAM

Experience with replication configuration in MySQL

Knowledge of de-facto standards and best practices in MySQL

Proficient in writing and optimizing SQL statements

Knowledge of MySQL features, such as its event scheduler

Ability to plan resource requirements from high level specifications

Familiarity with other SQL/NoSQL databases such as Cassandra, MongoDB, etc.

Knowledge of limitations in MySQL and their workarounds in contrast to other popular relational databases

Read more
Global Petrochemicals & Metals Company
Remote only
3 - 8 yrs
₹60L - ₹84L / yr
Data Science
Data Scientist
R Programming
Python
Data Analytics
+2 more
Our client is a Saudi Arabian private-sector-owned joint-stock industrial company that aims to advance economic diversification in the country. It is one of the largest chemical companies and also the world’s biggest investor in titanium dioxide. The three-decade-old company provides employment to over 3,400 professionals and markets its products to most parts of the world.
 
Apart from their expertise in petrochemicals and advanced metals, they manage several R&D-related activities that cover turnkey solutions, testing, product certifications, and training, all supporting sustainability and profitability for their company and clients. The team is led by a Stanford and Princeton alumnus who holds master's degrees in Business as well as Nuclear Engineering. The other Board members are alumni of prestigious engineering schools across the world, with immense knowledge and experience and a tremendous background in innovation and technology.
 
As a Data Scientist, you will be analyzing large amounts of raw information to find patterns that help the company analyze trends and make better decisions.
 
What you will do:
 
  • Identifying valuable data sources and automating collection processes
  • Undertaking preprocessing of structured and unstructured data
  • Analyzing large amounts of information to discover trends and patterns
  • Building predictive models and machine-learning algorithms
  • Combining models through ensemble modeling
  • Presenting information using data visualization techniques
  • Proposing solutions and strategies to business challenges
  • Collaborating with engineering and product development teams
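The "combining models through ensemble modeling" point above can be shown in its simplest form: averaging the predictions of several models. The toy "models" below are just lambdas standing in for, say, price estimators trained on different feature sets; this is a sketch of the idea, not a production ensemble:

```python
def ensemble_predict(models, x):
    """Average the predictions of several models -- the simplest ensemble."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

# Three toy "models" whose individual errors cancel out on average:
models = [lambda x: 2 * x,
          lambda x: 2 * x + 1,
          lambda x: 2 * x - 1]
print(ensemble_predict(models, 10))
# 20.0
```

Averaging works because independent errors partially cancel; weighted averaging, voting, and stacking are the usual refinements.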

 


Candidate Profile:

What you need to have:

 
  • Data Scientist with a minimum of 3 years of experience in Analytics or Data Science, preferably in the Pricing or Polymer market
  • Experience using scripting languages like Python (preferred) or R is a must
  • Experience with SQL and Tableau is good to have
  • Strong numerical, problem-solving, and analytical aptitude
  • Ability to make data-based decisions
  • Ability to present/communicate analytics-driven insights
  • Critical and analytical thinking skills
Read more
SveltetechTechnologies Pvt Ltd
Posted by Sveltetech Tecnology
NCR (Delhi | Gurgaon | Noida)
1 - 2 yrs
₹1L - ₹3L / yr
Data Analytics
Data Analyst
MySQL
Databases
R Programming
+1 more

Data Analyst Job Duties

Data analyst responsibilities include conducting full lifecycle analysis to include requirements, activities and design. Data analysts will develop analysis and reporting capabilities. They will also monitor performance and quality control plans to identify improvements.

Responsibilities

  • Interpret data, analyze results using statistical techniques and provide ongoing reports
  • Develop and implement databases, data collection systems, data analytics and other strategies that optimize statistical efficiency and quality
  • Acquire data from primary or secondary data sources and maintain databases/data systems
  • Identify, analyze, and interpret trends or patterns in complex data sets
  • Filter and “clean” data by reviewing computer reports, printouts, and performance indicators to locate and correct code problems
  • Work with management to prioritize business and information needs
  • Locate and define new process improvement opportunities

    Requirements

    • Proven working experience as a Data Analyst or Business Data Analyst
    • Technical expertise regarding data models, database design and development, data mining, and segmentation techniques
    • Strong knowledge of and experience with reporting packages (Business Objects etc.), databases (SQL etc.), and programming (XML, JavaScript, or ETL frameworks)
    • Knowledge of statistics and experience using statistical packages for analyzing datasets (Excel, SPSS, SAS etc.)
    • Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
    • Adept at queries, report writing, and presenting findings
    • BS in Mathematics, Economics, Computer Science, Information Management, or Statistics
Read more
Couture.ai
Posted by Shobhit Agarwal
Bengaluru (Bangalore)
3 - 8 yrs
₹30L - ₹40L / yr
Deep Learning
TensorFlow
Data Science
Artificial Neural Network (ANN)
Scala
+2 more
Looking for senior data science researchers.

Basic Qualifications:
• Bachelor's in Computer Science/Mathematics plus research experience (Machine Learning, Deep Learning, Statistics, Data Mining, Game Theory, or core mathematical areas) from Tier-1 tech institutes.
• 3+ years of relevant experience in building large-scale machine learning or deep learning models and/or systems.
• 1 year or more of experience specifically with deep learning (CNN, RNN, LSTM, RBM, etc.).
• Strong working knowledge of deep learning, machine learning, and statistics.
• Deep domain understanding of Personalization, Search, and Visual.
• Strong math skills with statistical modeling/machine learning.
• Hands-on experience building models with deep learning frameworks like MXNet or TensorFlow.
• Experience using Python and statistical/machine learning libraries.
• Ability to think creatively and solve problems.
• Data presentation skills.

Preferred:
• MS/PhD (Machine Learning, Deep Learning, Statistics, Data Mining, Game Theory, or core mathematical areas) from IISc or other top global universities.
• Or, publications in highly accredited journals (if available, please share links to your published work).
• Or, a history of scaling ML/deep learning algorithms at massively large scale.
Read more
DataWeave Pvt Ltd
Posted by Pramod Shivalingappa S
Bengaluru (Bangalore)
5 - 7 yrs
Best in industry
Python
Data Science
R Programming
(Senior) Data Scientist Job Description

About us
DataWeave provides Retailers and Brands with “Competitive Intelligence as a Service” that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.

Data Science@DataWeave
We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org, applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.

How do we work?
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web. At serious scale! 

What do we offer?
● Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
● Ability to see the impact of your work and the value you're adding to our customers almost immediately.
● Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
● A culture of openness. Fun work environment. A flat hierarchy. Organization-wide visibility. Flexible working hours.
● Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
● Last but not least, competitive salary packages and fast-paced growth opportunities.

Who are we looking for?
The ideal candidate is a strong software developer or a researcher with experience building and shipping production grade data science applications at scale. Such a candidate has keen interest in liaising with the business and product teams to understand a business problem, and translate that into a data science problem.

You are also expected to develop capabilities that open up new business productization opportunities.

We are looking for someone with a Master's degree and 1+ years of experience working on problems in NLP or Computer Vision.

If you have 4+ years of relevant experience with a Master's degree (PhD preferred), you will be considered for a senior role.

Key problem areas
● Preprocessing and feature extraction from noisy and unstructured data, both text and images.
● Keyphrase extraction, sequence labeling, and entity relationship mining from texts in different domains.
● Document clustering, attribute tagging, data normalization, classification, summarization, and sentiment analysis.
● Image-based clustering and classification, segmentation, object detection, extracting text from images, generative models, and recommender systems.
● Ensemble approaches to all the above problems using multiple text- and image-based techniques.
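As a toy instance of the document clustering/classification problems above, texts can be compared with bag-of-words cosine similarity using only the standard library. The sample product titles are invented; real pipelines here would use TF-IDF weighting or learned embeddings rather than raw counts:

```python
import math
from collections import Counter

def cosine_sim(a, b):
    """Cosine similarity between two texts using raw token counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical product titles scraped from two retail sites:
p1 = "red cotton shirt slim fit"
p2 = "slim fit red shirt cotton"
p3 = "stainless steel water bottle"
# Identical token sets score 1.0; titles with no shared tokens score 0.0.
```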

Relevant set of skills
● Have a strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms, and complexity.
● Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
● Excellent coding skills in multiple programming languages, with experience building production-grade systems. Prior experience with Python is a bonus.
● Experience building and shipping machine learning models that solve real-world engineering problems. Prior experience with deep learning is a bonus.
● Experience building robust clustering and classification models on unstructured data (text, images, etc.). Experience working with Retail domain data is a bonus.
● Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
● Experience working with a variety of tools and libraries for machine learning and visualization, including NumPy, matplotlib, scikit-learn, Keras, PyTorch, and TensorFlow.
● Use the command line like a pro. Be proficient in Git and other essential software development tools.
● Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
● Be a self-starter who thrives in fast-paced environments with minimal ‘management’.
● It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show off some of the projects you have hosted on GitHub.

Role and responsibilities
● Understand the business problems we are solving. Build data science capabilities that align with our product strategy.
● Conduct research. Do experiments. Quickly build throw away prototypes to solve problems pertaining to the Retail domain.
● Build robust clustering and classification models in an iterative manner that can be used in production.
● Constantly think scale, think automation. Measure everything. Optimize proactively.
● Take end to end ownership of the projects you are working on. Work with minimal supervision.
● Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
● Take initiatives to build new capabilities. Develop business awareness. Explore productization opportunities.
● Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
● Stay on top of latest research in deep learning, NLP, Computer Vision, and other relevant areas.
Read more
Chariot Tech
Posted by Raj Garg
NCR (Delhi | Gurgaon | Noida)
1 - 5 yrs
₹15L - ₹16L / yr
Machine Learning (ML)
Big Data
Data Science
We are looking for a Machine Learning Developer who possesses a passion for machine technology and big data and will work with our next-generation Universal IoT platform.

Responsibilities:
• Design and build machines that learn, predict, and analyze data.
• Build and enhance tools to mine data at scale.
• Enable the integration of Machine Learning models into the Chariot IoT Platform.
• Ensure the scalability of Machine Learning analytics across millions of networked sensors.
• Work with other engineering teams to integrate our streaming, batch, or ad-hoc analysis algorithms into Chariot IoT's suite of applications.
• Develop generalizable APIs so other engineers can use our work without needing to be machine learning experts.
Read more