ML Engineer

at a client company in the Telecommunications sector (SY1)

Agency job
Remote, Bengaluru (Bangalore) · 4 - 8 yrs · ₹21L - ₹23L / yr · Full time
Skills
Machine Learning (ML)
Python
Deep Learning
ML Tools
NLP Tools
Unix
Computer Vision
  • Participate in the full machine learning lifecycle, from data collection, cleaning, and preprocessing to training models and deploying them to production.
  • Discover data sources, get access to them, ingest them, clean them up, and make them “machine learning ready”.
  • Work with data scientists to create and refine features from the underlying data and build pipelines to train and deploy models (a brief illustrative sketch follows this list).
  • Partner with data scientists to understand and implement machine learning algorithms.
  • Support A/B tests: gather data, perform analysis, and draw conclusions on the impact of your models.
  • Work cross-functionally with product managers, data scientists, and product engineers, and communicate results to peers and leaders.
  • Mentor junior team members.
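For illustration only, the sketch below shows what a minimal train-and-serialize step of such a lifecycle might look like in Python with scikit-learn. The CSV path, column names and model choice are assumptions, not part of this role's actual stack.

```python
# Minimal illustrative sketch of a train-and-serialize step in the ML lifecycle.
# The CSV path, column names and model choice are assumptions for illustration only.
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("training_data.csv")          # hypothetical, already "machine learning ready"
X, y = df.drop(columns=["label"]), df["label"]

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", RandomForestClassifier(n_estimators=200, random_state=42)),
])
pipeline.fit(X_train, y_train)
print("validation accuracy:", pipeline.score(X_val, y_val))

joblib.dump(pipeline, "model.joblib")          # artifact handed to the deployment step
```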

 

Who we have in mind:

  • Graduate in Computer Science or a related field, or equivalent practical experience.
  • 4+ years of experience in software engineering, with 2+ years of direct experience in the machine learning field.
  • Proficiency with SQL, Python, Spark, and basic libraries such as scikit-learn, NumPy, and Pandas.
  • Familiarity with deep learning frameworks such as TensorFlow or Keras.
  • Experience with computer vision (OpenCV) and NLP frameworks (NLTK, spaCy, BERT).
  • Basic knowledge of machine learning techniques (e.g. classification, regression, and clustering).
  • Understanding of machine learning principles (training, validation, etc.).
  • Strong hands-on knowledge of data query and data processing tools (e.g. SQL).
  • Software engineering fundamentals: version control systems (e.g. Git, GitHub) and workflows, and the ability to write production-ready code.
  • Experience deploying highly scalable software supporting millions of users or more.
  • Experience building applications on the cloud (AWS or Azure).
  • Experience working in Scrum teams with Agile tools like JIRA.
  • Strong oral and written communication skills; ability to explain complex concepts and technical material to non-technical users.

 

 

 

 


Similar jobs

Big Data Engineer

at a multinational company providing automation and digital solutions

Agency job
via Jobdost
Python
Spark
Big Data
Hadoop
Apache Hive
Hyderabad · 4 - 7 yrs · ₹12L - ₹28L / yr
Must have:

  • At least 4 to 7 years of relevant experience as a Big Data Engineer.
  • Hands-on experience in Scala or Python.
  • Hands-on experience with the major components of the Hadoop ecosystem, such as HDFS, MapReduce, Hive, and Impala.
  • Strong programming experience building applications/platforms using Scala or Python.
  • Experienced in implementing Spark RDD transformations and actions to support business analysis (a brief illustrative sketch follows this list).
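For illustration only, a minimal PySpark sketch of RDD transformations and actions is shown below; the sample records and the aggregation are assumptions chosen purely to demonstrate the lazy-transformation/action distinction.

```python
# Minimal illustrative sketch of Spark RDD transformations and actions.
# The sample data and aggregation are assumptions chosen purely for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-demo").getOrCreate()
sc = spark.sparkContext

# (region, revenue) records
sales = sc.parallelize([("north", 120.0), ("south", 80.0), ("north", 45.5), ("east", 60.0)])

# Transformations are lazy: nothing runs until an action is called.
revenue_by_region = (
    sales
    .filter(lambda rec: rec[1] > 50)          # keep larger transactions
    .reduceByKey(lambda a, b: a + b)          # sum revenue per region
)

# Actions trigger execution and return results to the driver.
print(revenue_by_region.collect())
print("total records processed:", sales.count())

spark.stop()
```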


We specialize in productizing solutions based on new technology.
Our vision is to build engineers with entrepreneurial and leadership mindsets who can create highly impactful products and solutions using technology to deliver immense value to our clients.
We strive to bring innovation and passion to everything we do, whether it is services, products, or solutions.
Job posted by
Sathish Kumar

Sr. AI Engineer

at Matellio India Private Limited

Founded 1998  •  Services  •  100-1000 employees  •  Profitable
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
Deep Learning
Python
Linear regression
Linear algebra
Big Data
Spark
API
Artificial Intelligence (AI)
Remote only · 8 - 15 yrs · ₹10L - ₹27L / yr

Responsibilities include: 

  • Convert machine learning models into application programming interfaces (APIs) so that other applications can use them (a brief illustrative sketch follows this list)
  • Build AI models from scratch and help different parts of the organization (such as product managers and stakeholders) understand the results they gain from the model
  • Build data ingestion and data transformation infrastructure
  • Automate infrastructure that the data science team uses
  • Perform statistical analysis and tune the results so that the organization can make better-informed decisions
  • Set up and manage AI development and product infrastructure
  • Be a good team player, as coordinating with others is a must
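For illustration only, the sketch below shows one common way to expose a trained model as an API, using FastAPI. The framework choice, model file and request schema are assumptions, not a statement of this team's actual stack.

```python
# Minimal illustrative sketch of exposing a trained model as an HTTP API.
# FastAPI is one common choice; the model file and feature layout are assumptions.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical pre-trained, serialized model


class Features(BaseModel):
    values: list[float]  # flat feature vector, order agreed with the training pipeline


@app.post("/predict")
def predict(features: Features):
    X = np.array(features.values).reshape(1, -1)
    prediction = model.predict(X)[0]
    return {"prediction": float(prediction)}

# Run locally with: uvicorn app:app --reload  (assuming this file is saved as app.py)
```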
Job posted by
Harshit Sharma

Lead Game Analyst

at Kwalee

Founded 2011  •  Product  •  100-500 employees  •  Profitable
Data Science
Data Analytics
Python
SQL
Bengaluru (Bangalore) · 0 - 8 yrs · Best in industry

Kwalee is one of the world’s leading multiplatform game publishers and developers, with well over 750 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Airport Security and Makeover Studio 3D. Alongside this, we also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope, Die by the Blade and Scathe. 

We have a team of talented people collaborating daily between our studios in Leamington Spa, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, the Philippines and many more places, and we’ve recently acquired our first external studio, TicTales which is based in France. We have a truly global team making games for a global audience. And it’s paying off: Kwalee has been recognised with the Best Large Studio and Best Leadership awards from TIGA (The Independent Game Developers’ Association) and our games have been downloaded in every country on earth!

Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters for many years, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts. Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle. Could your idea be the next global hit?

What’s the job?

As the Lead Game Analyst, you will own the optimisation of in-game features and design, utilising A/B testing and multivariate testing of in-game components.
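For illustration only, a minimal sketch of how a simple A/B conversion comparison might be analysed in Python is shown below; the counts are made up and the chi-square test is just one of several reasonable choices.

```python
# Minimal illustrative sketch of comparing an A/B test's conversion rates.
# The counts below are made up; in practice they would come from game telemetry.
import numpy as np
from scipy.stats import chi2_contingency

# rows: variant A, variant B; columns: converted, did not convert
observed = np.array([
    [420, 9580],   # variant A: 4.2% conversion (hypothetical)
    [505, 9495],   # variant B: 5.05% conversion (hypothetical)
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected; keep the test running or redesign it.")
```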


What you will be doing

  • Play a crucial role in finding the best people to work on your team.

  • Manage the delivery of reports and analysis from the team.

  • Investigate how millions of players interact with Kwalee games.

  • Perform statistical analysis to quantify the relationships between game elements and player engagement.

  • Design experiments which extract the most valuable information in the shortest time.

  • Develop testing plans which reveal complex interactions between game elements. 

  • Collaborate with the design team to come up with the most effective tests.

  • Regularly communicate results with development, management and data science teams.


How you will be doing this

  • You’ll be part of an agile, multidisciplinary and creative team and work closely with them to ensure the best results.

  • You’ll think creatively, be motivated by challenges, and constantly strive for the best.

  • You’ll work with cutting-edge technology; if you need software or hardware to get the job done efficiently, you will get it. We even have a robot!


Team

Our talented team is our signature. We have a highly creative atmosphere with more than 200 staff where you’ll have the opportunity to contribute daily to important decisions. You’ll work within an extremely experienced, passionate and diverse team, including David Darling and the creator of the Micro Machines video games.


Skills and Requirements

  • A degree in a numerically focussed discipline such as Maths, Physics, Economics, Chemistry, Engineering, or Biological Sciences.

  • An extensive record of outstanding contribution to data analysis projects.

  • Expert in using Python for data analysis and visualisation.

  • Experience manipulating data in SQL databases.

  • Experience managing and onboarding new team members.


We offer

  • We want everyone involved in our games to share our success; that’s why we have a generous team profit-sharing scheme from day 1 of employment

  • In addition to a competitive salary we also offer private medical cover and life assurance

  • Creative Wednesdays! (Design and make your own games every Wednesday)

  • 20 days of paid holidays plus bank holidays 

  • Hybrid model available depending on the department and the role

  • Relocation support available 

  • Great work-life balance with flexible working hours

  • Quarterly team building days - work hard, play hard!

  • Monthly employee awards

  • Free snacks, fruit and drinks


Our philosophy

We firmly believe in creativity and innovation and that a fundamental requirement for a successful and happy company is having the right mix of individuals. With the right people in the right environment anything and everything is possible.

Kwalee makes games to bring people, their stories, and their interests together. As an employer, we’re dedicated to making sure that everyone can thrive within our team by welcoming and supporting people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances. With the inclusion of diverse voices in our teams, we bring plenty to the table that’s fresh, fun and exciting; it makes for a better environment and helps us to create better games for everyone! This is how we move forward as a company – because these voices are the difference that make all the difference.

Job posted by
Michael Hoppitt

Data Engineer

at Ascendeum

Founded 2015  •  Services  •  20-100 employees  •  Profitable
Python
CI/CD
Storage & Networking
Data storage
Remote only · 1 - 3 yrs · ₹6L - ₹9L / yr
  • Understand long-term and short-term business requirements and precisely match them with the capabilities of the different distributed storage and computing technologies available in the ecosystem.

  • Create complex data processing pipelines.

  • Design scalable implementations of the models developed by our Data Scientists.

  • Deploy data pipelines in production systems based on CI/CD practices.

  • Create and maintain clear documentation on data models/schemas as well as transformation/validation rules.

  • Troubleshoot and remediate data quality issues raised by pipeline alerts or downstream consumers (a brief illustrative check is sketched after this list).
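For illustration only, the sketch below shows the kind of lightweight data-quality check such a pipeline might run; the file name, expected columns and thresholds are assumptions.

```python
# Minimal illustrative sketch of a data-quality check a pipeline alert might trigger.
# The file, expected columns and thresholds are assumptions for illustration only.
import pandas as pd

EXPECTED_COLUMNS = {"event_id", "user_id", "event_time", "revenue"}
MAX_NULL_FRACTION = 0.01

df = pd.read_csv("daily_extract.csv")        # hypothetical batch delivered by the pipeline

issues = []
missing = EXPECTED_COLUMNS - set(df.columns)
if missing:
    issues.append(f"missing columns: {sorted(missing)}")

null_fraction = df.isna().mean()
for column, fraction in null_fraction.items():
    if fraction > MAX_NULL_FRACTION:
        issues.append(f"{column}: {fraction:.1%} null values")

if "event_id" in df.columns and df.duplicated(subset=["event_id"]).any():
    issues.append("duplicate event_id values found")

print("data quality issues:", issues or "none")
```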

Job posted by
Sonali Jain

Data Architect

at Amagi Media Labs

Founded 2008  •  Product  •  500-1000 employees  •  Profitable
Data Science
Machine Learning (ML)
ETL
Data Warehouse (DWH)
Amazon Web Services (AWS)
Agile/Scrum
Java
Scala
Python
Spark
Bengaluru (Bangalore), Chennai · 12 - 15 yrs · ₹50L - ₹60L / yr
Job Title: Data Architect
Job Location: Chennai

Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive the Data Architecture strategy across various Data Lake platforms. You will help develop reference architectures and roadmaps to build highly available, scalable and distributed data platforms using cloud-based solutions to process high-volume, high-velocity and wide varieties of structured and unstructured data. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast’s industry-leading journalism.
Primary Responsibilities
The Data Architect is responsible for:
• Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
• Enterprise-scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration frameworks and highly scalable distributed systems using open source and emerging data architecture designs/patterns.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is highly desirable.
• Expert ability to evaluate, prototype and recommend data solutions and vendor technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies is desirable.
• This role requires 15+ years of data solution architecture, design and development delivery experience.
• Solid experience in Agile methodologies (Kanban and Scrum).
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational databases and NoSQL databases on the cloud.
• Proven leadership skills; demonstrated ability to mentor, influence and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, DataStage etc.)/ELT and data integration technologies.
• Experience in at least one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and Data Lakes is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS etc.
• Understanding of Hadoop architecture and Hive SQL.
• Knowledge of at least one workflow orchestration tool.
• Understanding of Agile frameworks and delivery.

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure to workflow orchestration tools like Airflow is a plus (a minimal illustrative DAG is sketched after this list)
● Exposure to at least one NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of digital web events, ad streams, and context models
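For illustration only, a minimal Airflow 2.x-style DAG is sketched below; the DAG id, schedule and task bodies are placeholders, not a prescribed design.

```python
# Minimal illustrative sketch of a daily Airflow DAG; task logic is a placeholder.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw events from the source system")  # placeholder


def transform_and_load():
    print("clean, validate and load into the warehouse")  # placeholder


with DAG(
    dag_id="daily_events_pipeline",       # hypothetical name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",           # Airflow 2.x-style schedule parameter
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="transform_and_load", python_callable=transform_and_load)

    extract_task >> load_task
```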

About Condé Nast

CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms - in other words, a staggering amount of user data. Condé Nast made the right move to invest heavily in understanding this data and formed a whole new Data team entirely dedicated to data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel process innovation, further content enrichment, and increase market revenue. The Data team aimed to create a company culture where data was the common language and to facilitate an environment where insights shared in real time could improve performance.

The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost online revenue. We are broadly divided into four groups - Data Intelligence, Data Engineering, Data Science, and Operations (including Product and Marketing Ops, Client Services) - along with Data Strategy and monetization. The teams built capabilities and products to create data-driven solutions for better audience engagement.

What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
Job posted by
Rajesh C

Backend Data Engineer

at India's best Short Video App

Agency job
via wrackle
Data engineering
Big Data
Spark
Apache Kafka
Apache Hive
Data engineer
Elastic Search
MongoDB
Python
Apache Storm
Druid Database
Apache HBase
Cassandra
DynamoDB
Memcached
Proxies
HDFS
Pig
Scribe
Apache ZooKeeper
Agile/Scrum
Roadmaps
DevOps
Software Testing (QA)
Data Warehouse (DWH)
flink
aws kinesis
presto
airflow
caches
data pipeline
Bengaluru (Bangalore) · 4 - 12 yrs · ₹25L - ₹50L / yr
What Makes You a Great Fit for The Role?

You’re awesome at and will be responsible for
 
  • Extensive programming experience with cross-platform development in one of the following: Java/Spring Boot, JavaScript/Node.js, Express.js, or Python
  • 3-4 years of experience with big data analytics technologies like Storm, Spark/Spark Streaming, Flink, AWS Kinesis, Kafka streaming, Hive, Druid, Presto, Elasticsearch, Airflow, etc.
  • 3-4 years of experience building high-performance RPC services using different paradigms: multi-threading, multi-processing, asynchronous programming (non-blocking IO), reactive programming
  • 3-4 years of experience working with high-throughput, low-latency databases and cache layers like MongoDB, HBase, Cassandra, DynamoDB, ElastiCache (Redis + Memcached); a brief illustrative cache-aside sketch follows this list
  • Experience designing and building high-scale app backends and microservices leveraging cloud-native services on AWS such as proxies, caches, CDNs, messaging systems, serverless compute (e.g. Lambda), monitoring and telemetry
  • Strong understanding of distributed systems fundamentals around scalability, elasticity, availability, and fault tolerance
  • Experience in analysing and improving the efficiency, scalability, and stability of distributed systems and backend microservices
  • 5-7 years of strong design/development experience building massively large-scale, high-throughput, low-latency distributed internet systems and products
  • Good experience working with Hadoop and Big Data technologies like HDFS, Pig, Hive, Storm, HBase, Scribe, ZooKeeper and NoSQL systems
  • Agile methodologies, sprint management, roadmaps, mentoring, documenting, software architecture
  • Liaison with Product Management, DevOps, QA, Client and other teams
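For illustration only, the sketch below shows a simple cache-aside read path with Redis; the connection details, key naming and fallback function are assumptions chosen purely to illustrate the pattern.

```python
# Minimal illustrative sketch of a cache-aside read path with Redis.
# Connection details, key naming and the fetch function are assumptions for illustration.
import json

import redis

cache = redis.Redis(host="localhost", port=6379, db=0)
CACHE_TTL_SECONDS = 300


def fetch_user_from_db(user_id: int) -> dict:
    # Placeholder for a real database call.
    return {"id": user_id, "name": "example"}


def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit: skip the database entirely

    user = fetch_user_from_db(user_id)     # cache miss: fall back to the source of truth
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(user))
    return user


print(get_user(42))
```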
 
Your Experience Across The Years in the Roles You’ve Played
 
  • Have 5 - 7 years of total experience, with 2-3 years in a startup.
  • Have a B.Tech or M.Tech or an equivalent academic qualification from a premier institute.
  • Experience in product companies working on internet-scale applications is preferred.
  • Thoroughly aware of cloud computing infrastructure on AWS, leveraging cloud-native services and infrastructure services to design solutions.
  • Follow the Cloud Native Computing Foundation ecosystem, leveraging mature open-source projects, including an understanding of containerisation/Kubernetes.
 
You are passionate about learning or growing your expertise in some or all of the following:
  • Data Pipelines
  • Data Warehousing
  • Statistics
  • Metrics Development
 
We Value Engineers Who Are
 
  • Customer-focused: We believe that doing what’s right for the creator is ultimately what will drive our business forward.
  • Obsessed with quality: Your production code just works and scales linearly.
  • Team players: You believe that more can be achieved together. You listen to feedback and also provide supportive feedback to help others grow/improve.
  • Pragmatic: We do things quickly to learn what our creators desire. You know when it’s appropriate to take shortcuts that don’t sacrifice quality or maintainability.
  • Owners: Engineers at Chingari know how to positively impact the business.
Job posted by
Naveen Taalanki

Junior Data Scientist

at Passion Gaming Pvt. Ltd.

Founded 2015  •  Product  •  20-100 employees  •  Profitable
Data Science
Data Analytics
Python
SQL
NOSQL Databases
Deep Learning
Machine Learning (ML)
Predictive analytics
Algorithms
Statistical Modeling
Panchkula · 1 - 4 yrs · ₹6.5L - ₹9.5L / yr

We are currently looking for a Junior Data Scientist to join our growing Data Science team in Panchkula. As a Jr. Data Scientist, you will work closely with the Head of Data Science and a variety of cross-functional teams to identify opportunities to enhance the customer journey, reduce churn, improve user retention, and drive revenue.

Experience Required

  • Medium to expert level proficiency in either R or Python.
  • Expert-level proficiency in SQL scripting for RDBMS and NoSQL databases (especially MongoDB).
  • Tracking and insights on key metrics around user journey, user retention, churn modelling and prediction, etc. (a brief illustrative churn-model sketch follows this list).
  • Medium to highly skilled in data structures and ML algorithms, with the ability to create efficient solutions to complex problems.
  • Experience working on an end-to-end data science pipeline: problem scoping, data gathering, EDA, modelling, insights, visualizations, monitoring and maintenance.
  • Medium to proficient in creating beautiful Tableau dashboards.
  • Problem-solving: ability to break a problem into small parts and apply relevant techniques to drive the required outcomes.
  • Intermediate to advanced knowledge of machine learning, probability theory, statistics, and algorithms. You will be required to discuss and use various algorithms and approaches on a daily basis.
  • Proficient in at least a few of the following: regression, Bayesian methods, tree-based learners, SVM, random forests, XGBoost, time-series modelling, GLM, GLMM, clustering, deep learning, etc.
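For illustration only, a minimal churn-prediction sketch in Python is shown below; the dataset, feature names and choice of logistic regression are assumptions, not a description of the team's actual models.

```python
# Minimal illustrative sketch of a churn-prediction model; the dataset,
# feature names and model choice are assumptions chosen purely for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("user_activity.csv")        # hypothetical per-user aggregates
features = ["sessions_last_30d", "avg_session_minutes", "days_since_last_login"]
X, y = df[features], df["churned"]           # churned: 1 if the user lapsed, else 0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=7
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

churn_probability = model.predict_proba(X_test)[:, 1]
print("test ROC AUC:", roc_auc_score(y_test, churn_probability))
```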

Good to Have

  • Experience in one of the upcoming technologies like deep learning, recommender systems, etc.
  • Experience of working in the Gaming domain
  • Marketing analytics, cross-sell, up-sell, campaign analytics, fraud detection
  • Experience in building and maintaining Data Warehouses in AWS would be a big plus!

Benefits

  • PF and gratuity
  • Working 5 days a week
  • Paid leaves (CL, SL, EL, ML) and holidays
  • Parties, festivals, birthday celebrations, etc.
  • Equitable treatment: absence of favouritism in hiring & promotion
Job posted by
Trivikram Pathak

Machine Learning Engineer

at Leading Multinational Co

Machine Learning (ML)
Mumbai · 2 - 9 yrs · ₹8L - ₹27L / yr

ML Engineer-Analyst/ Senior Analyst

Job purpose:

To design and develop machine learning and deep learning systems. Run machine learning tests and experiments and implement appropriate ML algorithms. Work cross-functionally with data scientists, software application developers and business groups on the development of innovative ML models. Use Agile experience to work collaboratively with other managers/owners in geographically distributed teams.

Accountabilities:

  • Work with Data Scientists and Business Analysts to frame problems in a business context. Assist with all processes from data collection, cleaning, and preprocessing to training models and deploying them to production.
  • Understand business objectives and develop models that help to achieve them, along with metrics to track their progress.
  • Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world.
  • Define validation strategies, the preprocessing or feature engineering to be done on a given dataset, and data augmentation pipelines.
  • Analyze the errors of the model and design strategies to overcome them.
  • Collaborate with data engineers to build data and model pipelines, manage the infrastructure and data pipelines needed to bring code to production and demonstrate end-to-end understanding of applications (including, but not limited to, the machine learning algorithms) being created.

Qualifications & Specifications

  • Bachelor's degree in Engineering/Computer Science/Math/Statistics or equivalent; a Master's degree in a relevant specialization is preferred.
  • Experience with machine learning algorithms and libraries.
  • Understanding of data structures, data modeling and software architecture.
  • Deep knowledge of math, probability, statistics and algorithms.
  • Experience with machine learning platforms such as Microsoft Azure, Google Cloud, IBM Watson, and Amazon.
  • Big data environment: Hadoop, Spark.
  • Programming languages: Python, R, PySpark.
  • Supervised and unsupervised machine learning: linear regression, logistic regression, k-means clustering, ensemble models, random forests, SVM, gradient boosting (a brief illustrative k-means sketch follows this list).
  • Sampling data: bagging & boosting, bootstrapping.
  • Neural networks: ANN, CNN, RNN and related topics.
  • Deep learning: Keras, TensorFlow.
  • Experience with AWS SageMaker deployment and Agile methodology.
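For illustration only, the sketch below shows a minimal k-means clustering example with scikit-learn; the synthetic data and the choice of three clusters are assumptions.

```python
# Minimal illustrative sketch of unsupervised k-means clustering.
# The synthetic data and the choice of three clusters are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Three loose blobs of 2-D points standing in for real customer features.
data = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(100, 2)),
    rng.normal(loc=(0, 5), scale=0.5, size=(100, 2)),
])

X = StandardScaler().fit_transform(data)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print("cluster sizes:", np.bincount(kmeans.labels_))
print("inertia (within-cluster sum of squares):", round(kmeans.inertia_, 2))
```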

 

 

Job posted by
Richa Awasthi

Data Warehousing Engineer

at DataToBiz

Founded 2018  •  Services  •  20-100 employees  •  Bootstrapped
Datawarehousing
Amazon Redshift
Analytics
Python
Amazon Web Services (AWS)
SQL server
Data engineering
Chandigarh, NCR (Delhi | Gurgaon | Noida) · 2 - 6 yrs · ₹7L - ₹15L / yr
Job Responsibilities:
As a Data Warehouse Engineer on our team, you should have a proven ability to deliver high-quality work on time and with minimal supervision.
  • Develop or modify procedures to solve complex database design problems, including performance, scalability, security and integration issues for various clients (on-site and off-site).
  • Design, develop, test, and support the data warehouse solution.
  • Adapt best practices and industry standards, ensuring top-quality deliverables and playing an integral role in cross-functional system integration.
  • Design and implement formal data warehouse testing strategies and plans, including unit testing, functional testing, integration testing, performance testing, and validation testing.
  • Evaluate all existing hardware and software against required standards, and configure hardware clusters to match the scale of the data.
  • Perform data integration using enterprise development toolsets (e.g. ETL, MDM, CDC, Data Masking, Data Quality).
  • Maintain and develop all logical and physical data models for the enterprise data warehouse (EDW).
  • Contribute to the long-term vision of the enterprise data warehouse (EDW) by delivering Agile solutions.
  • Interact with end users/clients and translate business language into technical requirements.
  • Act independently to expose and resolve problems.
  • Participate in data warehouse health monitoring and performance optimization, as well as quality documentation.

Job Requirements:
  • 2+ years of experience in software development and data warehouse development for enterprise analytics.
  • 2+ years of working with Python, with strong experience in Redshift as a must and exposure to other warehousing tools.
  • Deep expertise in data warehousing and dimensional modeling, and the ability to bring best practices with regard to data management, ETL, API integrations, and data governance.
  • Experience working with data retrieval and manipulation tools for various data sources, such as relational databases (MySQL, PostgreSQL, Oracle) and cloud-based storage.
  • Experience with analytics and reporting tools (Tableau, Power BI, SSRS, SSAS). Experience with the AWS cloud stack (S3, Glue, Redshift, Lake Formation).
  • Experience with various DevOps practices, helping clients deploy and scale systems as required.
  • Strong verbal and written communication skills with other developers and business clients.
  • Knowledge of the Logistics and/or Transportation domain is a plus.
  • Ability to handle/ingest very large data sets (both real-time and batched data) in an efficient manner (a brief illustrative batched-ingestion sketch follows this list).
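For illustration only, the sketch below shows one way to ingest a very large file in batches rather than loading it into memory at once; the file path, table name and connection string are assumptions.

```python
# Minimal illustrative sketch of ingesting a very large CSV in batches instead of
# loading it into memory at once. The file path, table name and connection string
# are assumptions for illustration only.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@localhost:5432/warehouse")  # hypothetical

CHUNK_ROWS = 100_000
for chunk in pd.read_csv("events_large.csv", chunksize=CHUNK_ROWS):
    chunk["event_date"] = pd.to_datetime(chunk["event_date"])   # light cleaning per batch
    chunk.to_sql("staging_events", engine, if_exists="append", index=False)
    print(f"loaded {len(chunk)} rows")
```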
Job posted by
PS Dhillon

Senior Data Scientist

at Paysense

Founded 2015  •  Product  •  100-500 employees  •  Raised funding
Data Science
Python
Perl
Django
Machine Learning (ML)
Data Analytics
NCR (Delhi | Gurgaon | Noida), Mumbai · 2 - 7 yrs · ₹10L - ₹30L / yr
About the job:
  • You will architect, code and deploy ML models (from scratch) to predict credit risk.
  • You will design, run, and analyze A/B and multivariate tests to test hypotheses aimed at optimizing user experience and portfolio risk.
  • You will perform data exploration and build statistical models on user behavior to discover opportunities for decreasing user defaults. And you must truly be excited about this part.
  • You’ll use behavioral and social data to gain insights into how humans make financial choices.
  • You will spend a lot of time building out predictive features from super sparse data sources.
  • You’ll continually acquire new data sources to develop a rich dataset that characterizes risk.
  • You will code, drink, breathe and live Python, sklearn and pandas. It’s good to have experience in these, but not a necessity - as long as you’re super comfortable in a language of your choice.

About you:
  • You’ve strong computer science fundamentals.
  • You’ve a strong understanding of ML algorithms.
  • Ideally, you have 2+ years of experience using ML in an industry environment.
  • You know how to run tests and understand their results from a statistical perspective.
  • You love freedom and hate being micromanaged. You own products end to end.
  • You have a strong desire to learn and use the latest machine learning algorithms.
  • It will be great if you have one of the following to share: a Kaggle or a GitHub profile.
  • Degree in statistics/quant/engineering from Tier-1 institutes.
Job posted by
Pragya Singh