Machine Learning Engineer (Ops)
5 - 14 yrs
₹50L - ₹70L / yr
Mumbai
Skills
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Kubeflow
Data Structures
Docker
Kubernetes
PyTorch
TensorFlow
Keras
Amazon Web Services (AWS)
MLOps

Responsibilities:

  • Review data science models; refactor and optimize code; handle containerization, deployment, versioning, and quality monitoring.
  • Design and implement cloud solutions and build MLOps pipelines on the cloud (preferably AWS).
  • Work with workflow orchestration tools such as Kubeflow, Airflow, or Argo.
  • Test and validate data science models, and automate those tests.
  • Communicate with a team of data scientists, data engineers, and architects, and document the processes.
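The versioning duties above can be made concrete with a tiny sketch of content-addressed model versioning. This is illustrative only: the `register_model` helper and the in-memory registry are invented for the example, not part of any actual stack named in this listing.

```python
import hashlib
import pickle

def register_model(model, registry):
    """Serialize a model, derive a content-hash version tag, and store it."""
    blob = pickle.dumps(model)
    # The version tag is derived from the artifact's bytes, so the same
    # model always maps to the same version and any change gets a new tag.
    version = hashlib.sha256(blob).hexdigest()[:12]
    registry[version] = blob
    return version

registry = {}
v1 = register_model({"weights": [0.1, 0.2]}, registry)
v2 = register_model({"weights": [0.1, 0.3]}, registry)
assert v1 != v2          # changed weights produce a new version
assert len(registry) == 2
```

In a real MLOps setup the registry would be object storage (e.g. S3) rather than a dict, but the content-addressing idea carries over.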


Eligibility:

  • Rich hands-on experience writing object-oriented Python
  • Minimum 3 years of MLOps experience, including model versioning, model and data lineage, monitoring, model hosting and deployment, scalability, orchestration, continuous learning, and automated pipelines
  • Understanding of data structures, data systems, and software architecture
  • Experience using MLOps frameworks such as Kubeflow, MLflow, and Airflow for building, deploying, and managing multi-step ML workflows based on Docker containers and Kubernetes
  • Exposure to deep learning approaches and modeling frameworks (PyTorch, TensorFlow, Keras, etc.)


Similar jobs

Vola Finance
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
3yrs+
Up to ₹20L / yr (varies)
Amazon Web Services (AWS)
Data engineering
Spark
SQL
Data Warehouse (DWH)
+4 more


Roles & Responsibilities


Basic Qualifications:

● The position requires a four-year degree from an accredited college or university.

● Three years of data engineering / AWS Architecture and security experience.


Top candidates will also have:

Proven, strong understanding and/or experience in many of the following:

● Experience designing scalable AWS architectures.

● Ability to create modern data pipelines and data processing using AWS PaaS components (Glue, etc.) or open-source tools (Spark, HBase, Hive, etc.).

● Ability to develop SQL structures that support high volumes and scalability using an RDBMS such as SQL Server, MySQL, or Aurora.

● Ability to model and design modern data structures, SQL/NoSQL databases, data lakes, and cloud data warehouses.

● Experience in creating network architectures for secure, scalable solutions.

● Experience with message brokers such as Kinesis, Kafka, RabbitMQ, AWS SQS, AWS SNS, and Apache ActiveMQ. Hands-on experience with AWS serverless architectures such as Glue, Lambda, Redshift, etc.

● Working knowledge of load balancers, AWS Shield, AWS GuardDuty, VPCs, subnets, network gateways, Route 53, etc.

● Knowledge of building disaster-recovery systems and security-log notification systems.

● Knowledge of building scalable microservice architectures with AWS.

● Ability to create a framework for monthly security checks, and broad knowledge of AWS services.

● Deploying software using CI/CD tools such as CircleCI, Jenkins, etc.

● ML/AI model deployment and production maintenance experience is mandatory.

● Experience with API tools such as REST, Swagger, Postman, and Assertible.

● Version management tools such as GitHub, Bitbucket, and GitLab.

● Debugging and maintaining software in Linux or Unix platforms.

● Test driven development

● Experience building transactional databases.

● Python and PySpark programming experience.

● Must have experience engineering solutions in AWS.

● Working AWS experience; AWS certification is required prior to hiring.

● Working in an Agile/Kanban framework.

● Must demonstrate solid knowledge of computer science fundamentals like data structures & algorithms.

● Passion for technology and an eagerness to contribute to a team-oriented environment.

● Demonstrated leadership on medium to large-scale projects impacting strategic priorities.

● Bachelor’s degree in Computer science or Electrical engineering or related field is required
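The "SQL structures that support high volumes" point above is easy to illustrate with SQLite standing in for a production RDBMS. The table and index names here are invented for the example; the point is that a secondary index lets per-user lookups avoid a full table scan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE txns (id INTEGER PRIMARY KEY, user_id INTEGER, amount REAL)"
)
# Secondary index on the lookup column: queries filtered by user_id
# become index seeks instead of full scans as the table grows.
conn.execute("CREATE INDEX idx_txns_user ON txns(user_id)")
conn.executemany(
    "INSERT INTO txns (user_id, amount) VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1000)],
)
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM txns WHERE user_id = 7"
).fetchall()
# SQLite's query plan confirms the index is used for this filter.
assert any("idx_txns_user" in str(row) for row in plan)
```

The same reasoning (index the columns you filter and join on) applies to SQL Server, MySQL, or Aurora, though each has its own `EXPLAIN` syntax.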

Sahaj AI Software
Posted by Pavithra RM
Remote, Pune, Bengaluru (Bangalore), Chennai, Hyderabad
7 - 18 yrs
₹30L - ₹80L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Large language models

Job Description 

We are looking for a data scientist with a strong background in data mining, machine learning, recommendation systems, and statistics. You should possess the signature strengths of a qualified mathematician: the ability to apply concepts of mathematics and applied statistics, with specialization in one or more of NLP, computer vision, speech, or data mining, to develop models that provide effective solutions. A strong data engineering background with hands-on coding capabilities is needed to own and deliver outcomes.

A Master’s or PhD degree in a highly quantitative field (Computer Science, Machine Learning, Operational Research, Statistics, Mathematics, etc.) or equivalent experience; 7+ years of industry experience in predictive modelling, data science, and analysis, with prior experience in an ML or data scientist role and a track record of building ML or DL models.

Responsibilities and skills: 

● Work with our customers to deliver an ML/DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models to deliver business impact to the organization.

● Selecting features, building and optimizing classifiers using ML techniques.

● Data mining using state-of-the-art methods; create text mining pipelines to clean and process large unstructured datasets to reveal high-quality information and hidden insights using machine learning techniques.

● Should be able to appreciate and work on computer vision problems, for example extracting rich information from images to categorize and process visual data; develop machine learning algorithms for object and image classification; experience in using DBSCAN, PCA, Random Forests, and multinomial logistic regression to select the best features to classify objects.

OR 

● Deep understanding of NLP such as fundamentals of information retrieval, deep learning approaches, transformers, attention models, text summarisation, attribute extraction, etc. Preferable experience in one or more of the following areas: recommender systems, moderation of user generated content, sentiment analysis, etc. 

OR 

● Speech recognition, speech to text and vice versa, understanding NLP and IR, text summarisation, statistical and deep learning approaches to text processing. Experience of having worked in these areas.

● Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, decision forests, etc. Needs to appreciate deep learning frameworks like MXNet, Caffe2, Keras, and TensorFlow.

● Experience in working with GPUs to develop models and in handling terabyte-size datasets.

● Experience with common data science toolkits, such as R, Weka, NumPy, MATLAB, mlr, MLlib, scikit-learn, caret, etc.; excellence in at least one of these is highly desirable.

● Should be able to work hands-on in Python, R, etc., and closely collaborate with engineering teams to iteratively analyse data using Scala, Spark, Hadoop, Kafka, Storm, etc.

● Experience with NoSQL databases and familiarity with data visualization tools will be of great advantage
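Since k-NN heads the algorithm list above, here is a dependency-free sketch of the idea: classify a query point by majority vote among its k nearest neighbours. The toy training points are invented for illustration.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.

    `train` is a list of (coordinates, label) pairs.
    """
    # Sort training points by Euclidean distance to the query.
    by_dist = sorted(train, key=lambda pair: math.dist(pair[0], query))
    # Tally labels of the k closest and return the most common one.
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

train = [((0.0, 0.0), "a"), ((0.1, 0.1), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
assert knn_predict(train, (0.05, 0.0)) == "a"
assert knn_predict(train, (1.0, 0.95)) == "b"
```

Production code would use scikit-learn's `KNeighborsClassifier` with a KD-tree rather than a full sort, but the voting logic is the same.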

Episource
Posted by Ahamed Riaz
Mumbai
5 - 12 yrs
₹18L - ₹30L / yr
Big Data
Python
Amazon Web Services (AWS)
Serverless
DevOps
+4 more

ABOUT EPISOURCE:


Episource has devoted more than a decade in building solutions for risk adjustment to measure healthcare outcomes. As one of the leading companies in healthcare, we have helped numerous clients optimize their medical records, data, analytics to enable better documentation of care for patients with chronic diseases.


The backbone of our consistent success has been our obsession with data and technology. At Episource, all of our strategic initiatives start with the question: how can data be “deployed”? Our analytics platforms and data lakes ingest huge quantities of data daily to help our clients deliver services. We have also built our own machine learning and NLP platform to infuse added productivity and efficiency into our workflow. Combined, these build a foundation of tools and practices used by quantitative staff across the company.


What’s our poison, you ask? We work with most of the popular frameworks and technologies, like Spark, Airflow, Ansible, Terraform, Docker, and ELK. For machine learning and NLP, we are big fans of Keras, spaCy, scikit-learn, pandas, and NumPy. AWS and serverless platforms help us stitch these together to stay ahead of the curve.


ABOUT THE ROLE:


We’re looking to hire someone to help scale Machine Learning and NLP efforts at Episource. You’ll work with the team that develops the models powering Episource’s product focused on NLP driven medical coding. Some of the problems include improving our ICD code recommendations, clinical named entity recognition, improving patient health, clinical suspecting and information extraction from clinical notes.


This is a role for highly technical data engineers who combine outstanding oral and written communication skills, and the ability to code up prototypes and productionalize using a large range of tools, algorithms, and languages. Most importantly they need to have the ability to autonomously plan and organize their work assignments based on high-level team goals.


You will be responsible for setting an agenda to develop and ship data-driven architectures that positively impact the business, working with partners across the company including operations and engineering. You will use research results to shape strategy for the company and help build a foundation of tools and practices used by quantitative staff across the company.


During the course of a typical day with our team, expect to work on one or more projects around the following:


1. Create and maintain optimal data pipeline architectures for ML


2. Develop a strong API ecosystem for ML pipelines


3. Building CI/CD pipelines for ML deployments using GitHub Actions, Travis, Terraform, and Ansible


4. Designing and developing distributed, high-volume, high-velocity, multi-threaded event processing systems


5. Applying software engineering best practices across the development lifecycle: coding standards, code reviews, source management, build processes, testing, and operations


6. Deploying data pipelines in production using Infrastructure-as-Code platforms

 

7. Designing scalable implementations of the models developed by our Data Science teams  


8. Big data and distributed ML with PySpark on AWS EMR, and more!



BASIC REQUIREMENTS 


  1.  Bachelor’s degree or greater in Computer Science, IT or related fields

  2.  Minimum of 5 years of experience in cloud, DevOps, MLOps & data projects

  3. Strong experience with bash scripting, Unix environments, and building scalable/distributed systems

  4. Experience with automation/configuration management using Ansible, Terraform, or equivalent

  5. Very strong experience with AWS and Python

  6. Experience building CI/CD systems

  7. Experience with containerization technologies like Docker, Kubernetes, ECS, EKS or equivalent

  8. Ability to build and manage application and performance monitoring processes
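The multi-threaded event processing this role mentions can be sketched with the standard library alone. The doubling "handler" below is a placeholder for real event logic, and the worker count is arbitrary; the pattern shown is the classic queue fan-out with sentinel shutdown.

```python
import queue
import threading

def run_pipeline(events, num_workers=4):
    """Fan events out to worker threads and collect processed results."""
    work, results = queue.Queue(), []
    lock = threading.Lock()

    def worker():
        while True:
            item = work.get()
            if item is None:          # sentinel: this worker is done
                break
            processed = item * 2      # stand-in for real event handling
            with lock:                # results list is shared across threads
                results.append(processed)
            work.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for e in events:
        work.put(e)
    work.join()                       # wait until every event is processed
    for _ in threads:
        work.put(None)                # one sentinel per worker
    for t in threads:
        t.join()
    return sorted(results)

assert run_pipeline(range(5)) == [0, 2, 4, 6, 8]
```

At "high volume, high velocity" the queue would typically be an external broker (SQS, Kafka) rather than in-process, but the worker/sentinel structure is the same.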

Hyderabad
4 - 7 yrs
₹14L - ₹25L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more

Roles and Responsibilities

  • At least 3 to 4 years of relevant experience as a Big Data Engineer.
  • Minimum 1 year of relevant hands-on experience with the Spark framework.
  • Minimum 4 years of application development experience using a programming language such as Scala, Java, or Python.
  • Hands-on experience with major components of the Hadoop ecosystem, such as HDFS, MapReduce, Hive, or Impala.
  • Strong programming experience building applications/platforms using Scala, Java, or Python.
  • Experienced in implementing Spark RDD transformations and actions to implement business analysis.
  • An efficient interpersonal communicator with sound analytical problem-solving skills and management capabilities.
  • Strives to keep the slope of the learning curve high, and able to quickly adapt to new environments and technologies.
  • Good knowledge of agile software development methodology.
Prismforce
Posted by Jyoti Moily
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹25L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Recommendation algorithms

Prismforce (www.prismforce.com) is a US-headquartered vertical SaaS product company with development teams in India. We are a Series A-funded venture, backed by a Tier 1 VC, targeted at the tech/IT services industry and tech talent organizations in enterprises, solving their most critical sector-specific problems in the talent supply chain. The product suite is powered by artificial intelligence designed to accelerate business impact (e.g., improved profitability and agility) by digitizing core vertical workflows underserved by custom applications and typical ERP offerings.


We are looking for data scientists to build data products at the core of a SaaS company disrupting the skills market. In this role you should be highly analytical, with a keen understanding of data, machine learning, deep learning, analysis, algorithms, products, maths, and statistics. This hands-on individual will play multiple roles: data scientist, data engineer, data analyst, efficient coder, and above all, problem solver.

Location: Mumbai / Bangalore / Pune / Kolkata

Responsibilities:

  • Identify relevant data sources, and combinations of sources, to make the data useful.
  • Automate the collection processes.
  • Pre-process structured and unstructured data.
  • Handle large amounts of information to create the input to analytical models.
  • Build predictive models and machine-learning algorithms; innovate with machine-learning and deep-learning algorithms.
  • Build network graphs, NLP, and forecasting models; build data pipelines for end-to-end solutions.
  • Propose solutions and strategies to business challenges; collaborate with product development teams and communicate with the senior leadership teams.
  • Participate in problem-solving sessions.

Requirements:

  • Bachelor's degree in a highly quantitative field (e.g. Computer Science, Engineering, Physics, Math, Operations Research, etc.) or equivalent experience.
  • Extensive machine learning and algorithmic background, with a deep understanding of at least one of the following areas: supervised and unsupervised learning methods, reinforcement learning, deep learning, Bayesian inference, network graphs, or natural language processing.
  • Analytical mind and business acumen.
  • Strong math skills (e.g. statistics, algebra).
  • Problem-solving aptitude and excellent communication skills, with the ability to communicate technical information.
  • Fluency with at least one data science/analytics programming language (e.g. Python, R, Julia).
  • Start-up experience is a plus; ideally 5-8 years of advanced analytics experience in startups/marquee companies.
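A "network graph" of the kind this role names can be modelled with a plain adjacency dict; this breadth-first search sketch measures how far apart two nodes sit. The skill graph contents are invented for the example.

```python
from collections import deque

def shortest_path_len(graph, start, goal):
    """Length of the shortest path from start to goal via BFS, or None."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == goal:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:        # visit each node at most once
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None                        # goal unreachable from start

# Toy directed skill-adjacency graph (illustrative only).
skills = {
    "python": ["pandas", "flask"],
    "pandas": ["numpy"],
    "flask": [],
    "numpy": [],
}
assert shortest_path_len(skills, "python", "numpy") == 2
```

At talent-supply-chain scale one would reach for a graph library or database (e.g. networkx, Neo4j), but the traversal primitive is the same.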


Crisp Analytics
Posted by Seema Pahwa
Mumbai
2 - 6 yrs
₹6L - ₹15L / yr
Big Data
Spark
Scala
Amazon Web Services (AWS)
Apache Kafka

 

The Data Engineering team is one of the core technology teams of Lumiq.ai and is responsible for creating all the Data related products and platforms which scale for any amount of data, users, and processing. The team also interacts with our customers to work out solutions, create technical architectures and deliver the products and solutions.

If you are someone who is always pondering how to make things better, how technologies can interact, how various tools, technologies, and concepts can help a customer or how a customer can use our products, then Lumiq is the place of opportunities.

 

Who are you?

  • Enthusiast is your middle name. You know what’s new in Big Data technologies and how things are moving
  • Apache is your toolbox and you have been a contributor to open source projects or have discussed the problems with the community on several occasions
  • You use cloud for more than just provisioning a Virtual Machine
  • Vim is friendly to you and you know how to exit Nano
  • You check logs before screaming about an error
  • You are a solid engineer who writes modular code and commits in Git
  • You are a doer who doesn’t say “no” without first understanding
  • You understand the value of documentation of your work
  • You are familiar with Machine Learning Ecosystem and how you can help your fellow Data Scientists to explore data and create production-ready ML pipelines

 

Eligibility

Experience

  • At least 2 years of Data Engineering Experience
  • Have interacted with Customers


Must Have Skills

  • Amazon Web Services (AWS) - EMR, Glue, S3, RDS, EC2, Lambda, SQS, SES
  • Apache Spark
  • Python
  • Scala
  • PostgreSQL
  • Git
  • Linux


Good to have Skills

  • Apache NiFi
  • Apache Kafka
  • Apache Hive
  • Docker
  • Amazon Certification

 

 

Bengaluru (Bangalore)
1 - 8 yrs
₹8L - ₹14L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+8 more
In this role, you will be part of a growing, global team of data engineers who collaborate in DevOps mode to enable Merck's business with state-of-the-art technology to leverage data as an asset and make better-informed decisions.

The Merck Data Engineering Team is responsible for designing, developing, testing, and supporting automated end-to-end data pipelines and applications on Merck’s data management and global analytics platform (Palantir Foundry, Hadoop, AWS and other components).

The Foundry platform comprises multiple different technology stacks, which are hosted on Amazon Web Services (AWS) infrastructure or on-premise in Merck’s own data centers. Developing pipelines and applications on Foundry requires:

• Proficiency in SQL / Java / Python (Python required; all 3 not necessary)
• Proficiency in PySpark for distributed computation
• Familiarity with Postgres and ElasticSearch
• Familiarity with HTML, CSS, and JavaScript and basic design/visual competency
• Familiarity with common databases accessed via JDBC (e.g. MySQL, Microsoft SQL Server). Not all types required.

This position will be project based and may work across multiple smaller projects or a single large project utilizing an agile project methodology.

Roles & Responsibilities:
• Develop data pipelines by ingesting various data sources – structured and unstructured – into Palantir Foundry
• Participate in the end-to-end project lifecycle, from requirements analysis to go-live and operations of an application
• Act as a business analyst, developing requirements for Foundry pipelines
• Review code developed by other data engineers, checking it against platform-specific standards, cross-cutting concerns, coding and configuration standards, and the functional specification of the pipeline
• Document technical work in a professional and transparent way. Create high quality technical documentation
• Work out the best possible balance between technical feasibility and business requirements (the latter can be quite strict)
• Deploy applications on Foundry platform infrastructure with clearly defined checks
• Implementation of changes and bug fixes via Merck's change management framework and according to system engineering practices (additional training will be provided)
• DevOps project setup following Agile principles (e.g. Scrum)
• Besides working on projects, act as third level support for critical applications; analyze and resolve complex incidents/problems. Debug problems across a full stack of Foundry and code based on Python, Pyspark, and Java
• Work closely with business users, data scientists/analysts to design physical data models
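Pipeline steps of the kind listed above (ingest, then transform and validate) can be sketched as chained generators. This is a generic stand-in, not actual Foundry API code; the field names and normalization rules are invented for illustration.

```python
def ingest(records):
    """Ingestion step: normalize raw rows (here, lowercase the keys)."""
    for r in records:
        yield {k.lower(): v for k, v in r.items()}

def transform(rows):
    """Transform step: cast the value column, dropping rows that fail validation."""
    for row in rows:
        if row.get("value") is not None:
            yield {**row, "value": float(row["value"])}

raw = [{"ID": 1, "Value": "3.5"}, {"ID": 2, "Value": None}]
out = list(transform(ingest(raw)))
assert out == [{"id": 1, "value": 3.5}]   # the null-valued row was dropped
```

Chaining generators keeps each step testable in isolation, which is the same property a Foundry (or Spark) pipeline of discrete transforms gives you at scale.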
Gauge Data Solutions Pvt Ltd
Posted by Deeksha Dewal
Noida
0 - 4 yrs
₹3L - ₹8L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Artificial Intelligence (AI)
+4 more

Essential Skills :

- Develop, enhance and maintain Python related projects, data services, platforms and processes.

- Apply and maintain data quality checks to ensure data integrity and completeness.

- Able to integrate multiple data sources and databases.

- Collaborate with cross-functional teams across Decision Sciences, Search, and Database Management to design innovative solutions, capture requirements, and drive a common future vision.
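The data quality checks mentioned above can start as simple as a per-row field audit. A minimal sketch, where the `quality_report` helper and the required field names are invented for the example:

```python
def quality_report(rows, required=("id", "name")):
    """Return (row_index, field) pairs for missing or null required fields."""
    issues = []
    for i, row in enumerate(rows):
        for field in required:
            # A field fails the check if it is absent, None, or empty.
            if field not in row or row[field] in (None, ""):
                issues.append((i, field))
    return issues

rows = [{"id": 1, "name": "a"}, {"id": 2, "name": ""}, {"name": "c"}]
assert quality_report(rows) == [(1, "name"), (2, "id")]
```

Real pipelines usually layer on type, range, and referential checks (e.g. with a schema library), but completeness checks like this catch a large share of integrity problems early.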

Technical Skills/Capabilities :

- Hands on experience in Python programming language.

- Understanding and proven application of computer science fundamentals in object-oriented design, data structures, algorithm design, regular expressions, data storage procedures, problem solving, and complexity analysis.

- Understanding of natural language processing and basic ML algorithms will be a plus.

- Good troubleshooting and debugging skills.

- Strong individual contributor, self-motivated, and a proven team player.

- Eager to learn and develop new experience and skills.

- Good communication and interpersonal skills.

About Company Profile :

Gauge Data Solutions Pvt Ltd :

- We are a leading company in Data Science, Machine Learning and Artificial Intelligence.

- Within Gauge data we have a competitive environment for the Developers and Engineers.

- We at Gauge create potential solutions for real-world problems. One such example of our engineering is Casemine.

- Casemine is a legal research platform powered by Artificial Intelligence. It helps lawyers, judges and law researchers in their day to day life.

- Casemine provides exhaustive case results to its users with the use of cutting edge technologies.

- It is developed with the efforts of great engineers at Gauge Data.

- One such opportunity is now open for you. We at Gauge Data invite applications from competitive, self-motivated Python developers.

Purpose of the Role :

- This position will play a central role in developing new features and enhancements for the products and services at Gauge Data.

- To know more about what we do and how we do it, feel free to read these articles:

- https://bit.ly/2YfVAsv

- https://bit.ly/2rQArJc

- You can also visit us at https://www.casemine.com/.

- For more information visit us at: - www.gaugeanalytics.com

- Join us on LinkedIn, Twitter & Facebook
Glance
Posted by Sandeep Shadankar
Bengaluru (Bangalore)
5 - 10 yrs
₹50L - ₹80L / yr
Data Science
Machine Learning (ML)
Artificial Intelligence (AI)
Python
Deep Learning
+6 more

Glance – An InMobi Group Company:

Glance is an AI-first Screen Zero content discovery platform, and it has scaled massively in the last few months to become one of the largest platforms in India. Glance is a lock-screen-first mobile content platform set up within InMobi. The average mobile phone user unlocks their phone more than 150 times a day. Glance aims to be there, providing visually rich, easy-to-consume content to entertain and inform mobile users, one unlock at a time. Glance is already live on more than 80 million mobile phones in India, and we are only getting started on this journey! We are now in phase 2 of the Glance story: we are going global!

Roposo is part of the Glance family. It is a short-video entertainment platform. All the videos created here are user-generated (via upload or Roposo's in-camera creation tools), and many communities create these videos on various themes we call channels. Around 4 million videos are created every month on Roposo and power Roposo channels. Some of the channels are HaHa TV (for comedy videos), News, and Beats (for singing/dance performances), along with For You (personalized for a user) and Your Feed (for videos of people a user follows).

 

What’s the Glance family like?

Consistently featured among the “Great Places to Work” in India since 2017, our culture is our true north, enabling us to think big, solve complex challenges and grow with new opportunities. Glanciers are passionate and driven, creative and fun-loving, take ownership and are results-focused. We invite you to free yourself, dream big and chase your passion.

 

What can we promise? 

We offer an opportunity to have an immediate impact on the company and our products. The work that you shall do will be mission critical for Glance and will be critical for optimizing tech operations, working with highly capable and ambitious peer groups. At Glance, you get food for your body, soul, and mind with daily meals, gym, and yoga classes, cutting-edge training and tools, cocktails at drink cart Thursdays and fun at work on Funky Fridays. We even promise to let you bring your kids and pets to work. 

 

What you will be doing?

Glance is looking for a Data Scientist who will design and develop processes and systems to analyze high-volume, diverse "big data" sources using advanced mathematical, statistical, querying, and reporting methods. You will use machine learning techniques and statistical analysis to predict outcomes and behaviors, interact with business partners to identify questions for data analysis and experiments, identify meaningful insights from large data and metadata sources, and interpret and communicate those insights and/or prepare output from analyses and experiments for business partners.

You will be working with Product leadership, taking high-level objectives and developing solutions that fulfil these requirements. Stakeholder management across Eng, Product and Business teams will be required.

 

Basic Qualifications:

  • Five+ years experience working in a Data Science role
  • Extensive experience developing and deploying ML models in real world environments
  • Bachelor's degree in Computer Science, Mathematics, Statistics, or other analytical fields
  • Exceptional familiarity with Python, Java, Spark or other open-source software with data science libraries
  • Experience in advanced math and statistics
  • Excellent familiarity with a command-line Linux environment
  • Able to understand various data structures and common methods in data transformation
  • Experience deploying machine learning models and measuring their impact
  • Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
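Of the techniques listed above, clustering is easy to sketch without libraries. Here is a minimal 1-D Lloyd's (k-means) iteration; the data points and starting centers are invented for the example.

```python
def kmeans_1d(points, centers, iters=10):
    """Lloyd's algorithm on 1-D data: assign to nearest center, recompute means."""
    for _ in range(iters):
        clusters = {c: [] for c in centers}
        # Assignment step: each point joins its nearest center's cluster.
        for p in points:
            nearest = min(centers, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = sorted(sum(v) / len(v) for v in clusters.values() if v)
    return centers

points = [1.0, 1.2, 0.8, 9.0, 9.5, 10.0]
centers = kmeans_1d(points, [0.0, 5.0])
# Centers converge near the two obvious groups, ~1.0 and ~9.5.
assert abs(centers[0] - 1.0) < 0.1 and abs(centers[1] - 9.5) < 0.1
```

The same assign/update loop generalizes to higher dimensions (swap `abs` for a vector distance), which is what `sklearn.cluster.KMeans` implements with smarter initialization.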

 

Preferred Qualifications

  • Experience developing recommendation systems
  • Experience developing and deploying deep learning models
  • Bachelor’s or Master's Degree or PhD that included coursework in statistics, machine learning or data analysis
  • Five+ years experience working with Hadoop, a NoSQL Database or other big data infrastructure
  • Experience with being actively engaged in data science or other research-oriented position
  • You would be comfortable collaborating with cross-functional teams.
  • Active personal GitHub account.
Pluto Seven Business Solutions Pvt Ltd
Posted by Sindhu Narayan
Bengaluru (Bangalore)
2 - 7 yrs
₹4L - ₹20L / yr
Statistical Modeling
Data Science
TensorFlow
Python
Machine Learning (ML)
+5 more
Data Scientist: Pluto7 is a services and solutions company focused on building ML, AI, analytics, and IoT tailored solutions to accelerate business transformation. We are a Premier Google Cloud Partner, servicing the retail, manufacturing, healthcare, and hi-tech industries. We are a Google premium partner in AI & ML, which means you'll have the opportunity to work and collaborate with folks from Google. Are you an innovator with a passion for working with data and finding insights, and an inquisitive mind with a constant yearning to learn new ideas? Then we are looking for you. As a Pluto7 data scientist, you will be one of the key members of our innovative artificial intelligence and machine learning team. You are expected to be unfazed by large volumes of data, and to love applying various models and using technology to process and filter data for analysis.

Responsibilities:

- Build and optimize machine learning models.
- Work with large/complex datasets to solve difficult and non-routine analysis problems, applying advanced analytical methods as needed.
- Build and prototype data pipelines for analysis at scale.
- Work cross-functionally with business analysts and data engineers to help develop cutting-edge and innovative artificial intelligence and machine learning models.
- Make recommendations on the selection of machine learning models.
- Drive accuracy levels to the next stage for the given ML models.
- Experience in developing visualisations and user interfaces.
- Good exposure to exploratory data analysis.
- Strong experience with statistics and ML algorithms.

Minimum qualifications:

- 2+ years of relevant work experience in ML and advanced data analytics (e.g., as a machine learning specialist / data scientist).
- Strong experience using machine learning and artificial intelligence frameworks such as TensorFlow, scikit-learn, and Keras, using Python.
- Good Python/R/SAS programming skills.
- Understanding of cloud platforms like GCP, AWS, or others.

Preferred qualifications:

- Work experience in building data pipelines to ingest, cleanse, and transform data.
- Applied experience with machine learning on large datasets, and experience translating analysis results into business recommendations.
- Demonstrated skill in selecting the right statistical tools for a given data analysis problem.
- Demonstrated effective written and verbal communication skills.
- Demonstrated willingness to both teach others and learn new techniques.

Work location: Bangalore