ML Ops Engineer

at Top Management Consulting Company

Gurugram, Bengaluru (Bangalore), Chennai
2 - 9 yrs
₹9L - ₹27L / yr
Full time
Skills
DevOps
Microsoft Azure
GitLab
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Docker
Kubernetes
Jenkins
GitHub
Git
Python
MySQL
PostgreSQL
SQL Server
Oracle
Terraform
Argo
Airflow
Kubeflow
Machine Learning (ML)
Greetings!!

We are looking for a technically driven "MLOps Engineer" for one of our premium clients.

COMPANY DESCRIPTION:
This company is a global management consulting firm. We are the trusted advisor to the world's leading businesses, governments, and institutions. We work with leading organizations across the private, public, and social sectors. Our scale, scope, and knowledge allow us to address problems that no one else can.


Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration
(Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at
least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and
container orchestration (Kubernetes), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure
DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow,
Kubeflow)
• Hands-on Python 3 coding skills (e.g., building APIs), including automated testing frameworks and
libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g.,
deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking,
model governance, packaging, deployment, feature store)
• Practical knowledge of delivering and maintaining production software such as APIs and cloud
infrastructure
• Knowledge of SQL (intermediate level or above preferred) and familiarity with at least
one common RDBMS (MySQL, PostgreSQL, SQL Server, Oracle)
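To make the automated-testing bullet concrete, here is a minimal sketch of the kind of tested Python code the role expects; the function and test are invented for illustration and would be run with pytest.

```python
# Minimal sketch of tested Python code; names here are illustrative,
# not taken from the job posting.

def normalize_feature(values):
    """Min-max scale a list of numbers to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if lo == hi:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# A pytest-style test: discovered and run with `pytest this_file.py`
def test_normalize_feature():
    assert normalize_feature([2, 4, 6]) == [0.0, 0.5, 1.0]
    assert normalize_feature([5, 5]) == [0.0, 0.0]
```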

Similar jobs

Data Scientist

at Censiusai

Founded 2020  •  Products & Services  •  20-100 employees  •  Bootstrapped
Data Science
Machine Learning (ML)
Python
Apache
Apache Spark
Natural Language Processing (NLP)
Scala
Remote only
4 - 8 yrs
₹10L - ₹16L / yr

Responsibilities

  • Work closely with clients to understand business needs and convert them into technical solutions

  • Architect and design end-to-end analytical solutions from data ingestion to deployment

  • Hands-on work with the team

  • Lead delivery of analytical projects acting as senior or lead data scientist

    • Design and develop data engineering pipelines

    • Develop analytical models

    • Review/test models created by other team members

    • Coordinate technical delivery of data engineering and data science tasks

    • Present results to business stakeholders

Requirements

  • B.Tech in CS or an equivalent technical discipline
  • Demonstrated history of leading teams to deliver projects on time
  • 3+ years of experience working as a Data Scientist

  • Strong programming skills in at least two of the analytical frameworks: R, Python (pandas, NLTK, SciPy), Spark (Scala, PySpark, SparkR)

  • Strong SQL skills

  • Experience with Exploratory Data Analysis

  • Broad knowledge of statistics and Machine Learning topics (clustering, forecasting, classification, anomaly detection, NLP, NN, Deep Learning)

  • Experience with Cloud (Azure/AWS/Google)
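As one concrete instance of the anomaly-detection topic listed above, a z-score check is about the simplest approach; the data and threshold below are invented for illustration.

```python
# A compact z-score anomaly detector; flags points far from the mean
# in units of the sample standard deviation.
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Return indices of points more than `threshold` std devs from the mean."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

data = [10, 11, 9, 10, 12, 10, 11, 100]  # 100 is the obvious outlier
print(zscore_anomalies(data, threshold=2.0))  # index of the outlier
```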

Nice to have

  • M.Tech in CS preferred 
  • Experience with Big Data

  • MLOps experience (MLflow, Kubeflow)

Why work with us

Life at Censius, like a lot of startups, can sometimes feel like a roller coaster. You will get a chance to work with modern AI technologies with some of the smartest and most passionate people. Join us to accelerate your learning, and make a lasting impact.

 

Perks

Competitive salary, health insurance, streaming service reimbursement (like Netflix), online courses, free books, and more!

Job posted by
Censius Team

Senior Software Engineer - Data

at 6sense

Founded 2013  •  Product  •  1000-5000 employees  •  Raised funding
PySpark
Data engineering
Big Data
Hadoop
Spark
Apache Spark
Python
ETL
Amazon Web Services (AWS)
Remote only
5 - 8 yrs
₹30L - ₹45L / yr

About Slintel (a 6sense company) :

Slintel, a 6sense company and the leader in capturing technographics-powered buying intent, helps companies uncover the 3% of active buyers in their target market. Slintel evaluates over 100 billion data points and analyzes factors such as buyer journeys, technology adoption patterns, and other digital footprints to deliver market and sales intelligence.

Slintel's customers have access to the buying patterns and contact information of more than 17 million companies and 250 million decision makers across the world.

Slintel is a fast growing B2B SaaS company in the sales and marketing tech space. We are funded by top tier VCs, and going after a billion dollar opportunity. At Slintel, we are building a sales development automation platform that can significantly improve outcomes for sales teams, while reducing the number of hours spent on research and outreach.

We are a big data company and perform deep analysis of technology buying patterns and buyer pain points to understand where buyers are in their journey. Over 100 billion data points are analyzed every week to derive recommendations on where companies should focus their marketing and sales efforts. Third-party intent signals are then combined with first-party data from CRMs to derive meaningful recommendations on whom to target on any given day.

6sense is headquartered in San Francisco, CA and has 8 office locations across 4 countries.

6sense, an account engagement platform, secured $200 million in a Series E funding round, bringing its total valuation to $5.2 billion 10 months after its $125 million Series D round. The investment was co-led by Blue Owl and MSD Partners, among other new and existing investors.

Linkedin (Slintel) : https://www.linkedin.com/company/slintel/

Industry : Software Development

Company size : 51-200 employees (189 on LinkedIn)

Headquarters : Mountain View, California

Founded : 2016

Specialties : Technographics, lead intelligence, Sales Intelligence, Company Data, and Lead Data.

Website (Slintel) : https://www.slintel.com/slintel

Linkedin (6sense) : https://www.linkedin.com/company/6sense/

Industry : Software Development

Company size : 501-1,000 employees (937 on LinkedIn)

Headquarters : San Francisco, California

Founded : 2013

Specialties : Predictive intelligence, Predictive marketing, B2B marketing, and Predictive sales

Website (6sense) : https://6sense.com/

Acquisition News : 

https://inc42.com/buzz/us-based-based-6sense-acquires-b2b-buyer-intelligence-startup-slintel/ 

Funding Details & News :

Slintel funding : https://www.crunchbase.com/organization/slintel

6sense funding : https://www.crunchbase.com/organization/6sense

https://www.nasdaq.com/articles/ai-software-firm-6sense-valued-at-%245.2-bln-after-softbank-joins-funding-round

https://www.bloomberg.com/news/articles/2022-01-20/6sense-reaches-5-2-billion-value-with-softbank-joining-round

https://xipometer.com/en/company/6sense

Slintel & 6sense Customers :

https://www.featuredcustomers.com/vendor/slintel/customers

https://www.featuredcustomers.com/vendor/6sense/customers

About the job

Responsibilities

  • Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse
  • Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs
  • Assemble large, complex data sets from third-party vendors to meet business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology
  • Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems

Requirements

  • 3+ years of experience in a Data Engineer role
  • Proficiency in Linux
  • Must have SQL knowledge and experience working with relational databases, including query authoring (SQL) and familiarity with databases such as MySQL, MongoDB, Cassandra, and Athena
  • Must have experience with Python/Scala
  • Must have experience with Big Data technologies like Apache Spark
  • Must have experience with Apache Airflow
  • Experience with data pipeline and ETL tools like AWS Glue
  • Experience working with AWS cloud services: EC2, S3, RDS, Redshift, and other data solutions, e.g., Databricks, Snowflake
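To illustrate the transform stage of the ETL work described above, here is a minimal, dependency-free sketch; the record shape and field names are invented for the example.

```python
# Toy transform step of an extract-transform-load pipeline:
# drop incomplete records and standardize field formats.

def transform(records):
    """Keep complete records; normalize company name casing and revenue type."""
    cleaned = []
    for rec in records:
        if rec.get("company") and rec.get("revenue") is not None:
            cleaned.append({
                "company": rec["company"].strip().title(),
                "revenue": float(rec["revenue"]),
            })
    return cleaned

raw = [
    {"company": "  acme corp ", "revenue": "1200"},
    {"company": None, "revenue": "900"},     # dropped: missing company
    {"company": "globex", "revenue": None},  # dropped: missing revenue
]
print(transform(raw))
```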

 

Desired Skills and Experience

Python, SQL, Scala, Spark, ETL

 

Job posted by
Romesh Rawat

Speech Engineer

at Saarthi.ai

Founded 2017  •  Products & Services  •  20-100 employees  •  Bootstrapped
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Python
C
C++
PyTorch
Bengaluru (Bangalore)
2 - 9 yrs
₹10L - ₹30L / yr

About the Role:  

As a Speech Engineer, you will work on the development of on-device multilingual speech recognition systems.

  • Apart from ASR, you will work on solving speech-focused research problems like speech enhancement, voice analysis and synthesis, etc.
  • You will be responsible for building the complete pipeline for speech recognition, from data preparation to deployment on edge devices.
  • Reading, implementing, and improving baselines reported in leading research papers will be another key area of your daily life at Saarthi.

Requirements:  

  • 2-3 years of hands-on experience in speech recognition-based projects
  • Proven experience as a Speech Engineer or similar role
  • Should have experience with deployment on edge devices
  • Candidates should have hands-on experience with open-source tools such as Kaldi or PyTorch-Kaldi, and any of the end-to-end ASR tools such as ESPnet, EESEN, or deepspeech.pytorch
  • Prior proven experience in training and deploying deep learning models at scale
  • Strong programming experience in Python, C/C++, etc.
  • Working experience with PyTorch and TensorFlow
  • Experience contributing to research communities, including publications at conferences and/or journals
  • Strong communication skills
  • Strong analytical and problem-solving skills
Job posted by
Prabhudev S

Azure Data Engineer

at Marktine

Founded 2014  •  Products & Services  •  20-100 employees  •  Bootstrapped
Big Data
Spark
PySpark
Data engineering
Data Warehouse (DWH)
Microsoft Azure
Python
SQL
Scala
Azure databricks
Remote, Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹20L / yr

Azure – Data Engineer

  • At least 2 years of hands-on experience working with an Agile data engineering team on big data pipelines using Azure in a commercial environment
  • Experience dealing with senior stakeholders/leadership
  • Understanding of Azure data security and encryption best practices (ADFS/ACLs)

Databricks – experience writing in and using Databricks, using Python to transform and manipulate data.

Data Factory – experience using Data Factory in an enterprise solution to build data pipelines; experience calling REST APIs.

Synapse/data warehouse – experience using Synapse/a data warehouse to present data securely and to build and manage data models.

Microsoft SQL Server – we'd expect the candidate to have come from a SQL/data background and progressed into Azure.

Power BI – experience with this is preferred.

Additionally

  • Experience using Git as a source control system
  • Understanding of DevOps concepts and their application
  • Understanding of Azure cloud costs/management and running platforms efficiently
Job posted by
Vishal Sharma

Data Science Engineer

at E commerce & Retail

Agency job
via Myna Solutions
Machine Learning (ML)
Data Science
Python
Tableau
SQL
XGBoost
AWS SageMaker
Chennai
5 - 10 yrs
₹8L - ₹18L / yr
Job Title : Data Science Engineer
Work Location : Chennai
Experience Level : 5+ yrs
Package : Up to 18 LPA
Notice Period : Immediate joiners
It's a full-time opportunity with our client.

Mandatory Skills: Machine Learning, Python, Tableau & SQL

Job Requirements:

--2+ years of industry experience in predictive modeling, data science, and analysis.

--Experience with ML models including, but not limited to, Regression, Random Forests, and XGBoost.

--Experience in an ML Engineer or Data Scientist role building and deploying ML models, or hands-on experience developing deep learning models.

--Experience writing code in Python and SQL, with documentation for reproducibility.

--Strong proficiency in Tableau.

--Experience handling big datasets, diving into data to discover hidden patterns, using data visualization tools, and writing SQL.

--Experience writing and speaking about technical concepts to business, technical, and lay audiences, and giving data-driven presentations.

--AWS SageMaker experience is a plus, but not required.
Job posted by
Venkat B

Data Engineer

at Planet Spark

Data engineering
Data Engineer
Data Warehouse (DWH)
Python
SQL
Apache Hadoop
Datawarehousing
Data architecture
Machine Learning (ML)
Gurugram
2 - 5 yrs
₹7L - ₹18L / yr
Responsibilities :
  1. Create and maintain optimal data pipeline architecture
  2. Assemble large, complex data sets that meet business requirements
  3. Identify, design, and implement internal process improvements, including redesigning infrastructure for greater scalability, optimizing data delivery, and automating manual processes
  4. Work with the Data, Analytics & Tech team to extract, arrange, and analyze data
  5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies
  6. Build analytical tools that utilize the data pipeline, providing actionable insight into key business performance metrics, including operational efficiency and customer acquisition
  7. Work closely with all business units and engineering teams to develop a strategy for long-term data platform architecture
  8. Work with stakeholders, including the Executive, Product, Data, and Design teams, to support their data infrastructure needs and assist with data-related technical issues
Skill Requirements
  1. SQL
  2. Ruby or Python (Ruby preferred)
  3. Apache-Hadoop based analytics
  4. Data warehousing
  5. Data architecture
  6. Schema design
  7. ML
Experience Requirement
  1. Prior experience of 2 to 5 years as a Data Engineer
  2. Ability to manage and communicate data warehouse plans to internal teams
  3. Experience designing, building, and maintaining data processing systems
  4. Ability to perform root cause analysis on external and internal processes and data to identify opportunities for improvement and answer questions
  5. Excellent analytic skills associated with working on unstructured datasets
  6. Ability to build processes that support data transformation, workload management, data structures, dependency, and metadata
Job posted by
Maneesh Dhooper

Data/Integration Architect

at Consulting Leader

Agency job
via Buaut Tech
Data integration
Talend
Hadoop
Integration
Java
Python
Pune, Mumbai
8 - 10 yrs
₹8L - ₹16L / yr

 

Job Description for :

Role: Data/Integration Architect

Experience – 8-10 Years

Notice Period: Under 30 days

Key Responsibilities: Designing and developing frameworks for batch and real-time jobs on Talend; leading the migration of these jobs from MuleSoft to Talend; maintaining best practices for the team; conducting code reviews and demos.

Core Skillsets:

Talend Data Fabric - application, API integration, and data integration. Knowledge of Talend Management Cloud, and of deployment and scheduling of jobs using TMC or Autosys.

Programming Languages - Python/Java
Databases: SQL Server, other databases, Hadoop

Should have worked in Agile

Sound communication skills

Should be open to learning new technologies on the job, based on business needs

Additional Skills:

Awareness of other data/integration platforms like MuleSoft and Camel

Awareness of Hadoop, Snowflake, and S3

Job posted by
KAUSHANK nalin

Machine Learning Engineer

at SmartJoules

Founded 2015  •  Product  •  100-500 employees  •  Profitable
Machine Learning (ML)
Python
Big Data
Apache Spark
Deep Learning
Remote, NCR (Delhi | Gurgaon | Noida)
3 - 5 yrs
₹8L - ₹12L / yr

Responsibilities:

  • Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world.
  • Verifying data quality, and/or ensuring it via data cleaning.
  • Adapting and working fast to produce output that improves stakeholders' decision-making using ML.
  • Designing and developing Machine Learning systems and schemes.
  • Performing statistical analysis and fine-tuning models using test results.
  • Training and retraining ML systems and models as and when necessary.
  • Deploying ML models in production and managing the cost of cloud infrastructure.
  • Developing Machine Learning apps according to client and data scientist requirements.
  • Analyzing the problem-solving capabilities and use cases of ML algorithms, and ranking them by how successfully they meet the objective.


Technical Knowledge:


  • Has worked on real-time problems, solved them using ML and deep learning models deployed in real time, and has some impressive projects to showcase.
  • Proficiency in Python and experience with the Jupyter framework, Google Colab, and cloud-hosted notebooks such as AWS SageMaker, Databricks, etc.
  • Proficiency in working with scikit-learn, TensorFlow, OpenCV, PySpark, pandas, NumPy, and related libraries.
  • Expert in visualising and manipulating complex datasets.
  • Proficiency with visualisation libraries such as Seaborn, Plotly, Matplotlib, etc.
  • Proficiency in the linear algebra, statistics, and probability required for Machine Learning.
  • Proficiency in ML algorithms, for example gradient boosting, stacked machine learning, classification algorithms, and deep learning algorithms. Needs experience in hyperparameter tuning of various models and in comparing the results of algorithm performance.
  • Big Data technologies such as the Hadoop stack and Spark.
  • Basic use of clouds (VMs, e.g., EC2).
  • Brownie points for Kubernetes and task queues.
  • Strong written and verbal communication.
  • Experience working in an Agile environment.
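The hyperparameter-tuning ("hypertuning") requirement can be sketched without any ML library as a plain grid search over a ridge penalty; the data and candidate values below are invented for illustration.

```python
# Grid-search a ridge penalty for 1-D least squares, scored on held-out data.

def fit_ridge_slope(xs, ys, lam):
    """Closed-form ridge solution for y ≈ w*x (no intercept term)."""
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs) + lam
    return num / den

def validation_error(w, xs, ys):
    """Sum of squared errors of the fitted slope on a validation set."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys))

train_x, train_y = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]
val_x, val_y = [5, 6], [10.1, 11.9]

# Pick the penalty that minimizes held-out error — the essence of tuning.
best_lam = min([0.0, 0.1, 1.0, 10.0],
               key=lambda lam: validation_error(
                   fit_ridge_slope(train_x, train_y, lam), val_x, val_y))
print(best_lam)
```

The same select-by-validation-score loop is what tools like scikit-learn's grid search automate over larger models and parameter spaces.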
Job posted by
Saksham Dutta

Data Science (R / Python)

at Saama Technologies

Founded 1997  •  Products & Services  •  100-1000 employees  •  Profitable
Data Analytics
Data Science
Product Management
Machine Learning (ML)
Python
SAS
icon
Pune
icon
7 - 11 yrs
icon
₹6L - ₹22L / yr
Description

Does solving complex business problems and real-world challenges interest you? Do you enjoy seeing the impact your contributions make on a daily basis? Are you passionate about using data analytics to provide game-changing solutions to Global 2000 clients? Do you thrive in a dynamic work environment that constantly pushes you to be the best you can be and more? Are you ready to work with smart colleagues who drive for excellence in everything they do? If you possess a solutions mindset, strong analytical skills, and a commitment to be part of a tremendous journey, come join our growing, global team. See what Saama can do for your career and for your journey.

Position: Data Scientist (2255)
Location: Hinjewadi Phase 1, Pune
Type: Permanent, full time

Responsibilities:
  • Work on small and large data sets of structured, semi-structured, and unstructured data to discover hidden knowledge about the client's business and develop methods to leverage that knowledge for their business.
  • Identify and solve business challenges, working closely with cross-functional teams such as Delivery, Business Consulting, Engineering, and Product Management.
  • Develop prescriptive and predictive statistical, behavioral, or other models via machine learning and/or traditional statistical modeling techniques, and understand which type of model applies best in a given business situation.
  • Drive the collection of new data and the refinement of existing data sources.
  • Analyze and interpret the results of product experiments.
  • Provide input for engineering and product teams as they develop and support our internal data platform to support ongoing statistical analyses.

Requirements: Candidates should demonstrate the following expertise:
  • Must have direct, hands-on experience (7 years) building complex systems using a statistical programming language (R / Python / SAS)
  • Must have fundamental knowledge of inferential statistics
  • Should have worked on predictive modelling; experience should include the following:
    o File I/O, data harmonization, data exploration
    o Multi-dimensional array processing
    o Simulation & optimization techniques
    o Machine learning techniques (supervised, unsupervised)
    o Artificial intelligence and deep learning
    o Natural language processing
    o Model ensembling techniques
    o Documenting reproducible research
    o Building interactive applications to demonstrate data science use cases
  • Prior experience in the healthcare domain is a plus
  • Experience using Big Data is a plus; exposure to Spark is desirable
  • Should have excellent analytical and problem-solving ability, and should be able to grasp new concepts quickly
  • Should be well familiar with Agile project management methodology
  • Should have experience managing multiple simultaneous projects
  • Should have played a team lead role
  • Should have excellent written and verbal communication skills
  • Should be a team player with an open mind

Impact on the business: Plays an important role in making Saama's solutions game changers for our strategic partners by using data science to solve core, complex business challenges.

Key relationships:
  • Sales & pre-sales
  • Product management
  • Engineering
  • Client organization: account management & delivery

Saama competencies:
  • INTEGRITY: we do the right things
  • INNOVATION: we change the game
  • TRANSPARENCY: we communicate openly
  • COLLABORATION: we work as one team
  • PROBLEM-SOLVING: we solve core, complex business challenges
  • ENJOY & CELEBRATE: we have fun

Competencies:
  • Self-starter who gets results with minimal support and direction in a fast-paced environment.
  • Takes initiative; challenges the status quo to drive change.
  • Learns quickly; takes smart risks to experiment and learn.
  • Works well with others; builds trust and maintains credibility.
  • Planful: identifies and confirms key requirements in dynamic environments; anticipates tasks and contingencies.
  • Communicates effectively; maintains productive communication with clients and all key stakeholders, both verbal and written.
  • Stays the course despite challenges and setbacks; works well under pressure.
  • Strong analytical skills; able to apply inductive and deductive thinking to generate solutions for complex problems.
Job posted by
Sandeep Chaudhary

Bigdata Lead

at Saama Technologies

Founded 1997  •  Products & Services  •  100-1000 employees  •  Profitable
Hadoop
Spark
Apache Hive
Apache Flume
Java
Python
Scala
MySQL
Game Design
Technical Writing
Pune
2 - 5 yrs
₹1L - ₹18L / yr
Description

Deep experience with and understanding of Apache Hadoop and surrounding technologies required; experience with Spark, Impala, Hive, Flume, Parquet, and MapReduce.
  • Strong understanding of development languages, including Java, Python, Scala, and shell scripting
  • Expertise in Apache Spark 2.x framework principles and usage; should be proficient in developing Spark batch and streaming jobs in Python, Scala, or Java
  • Proven experience in performance tuning of Spark applications, from both an application code and a configuration perspective
  • Proficient in Kafka and its integration with Spark
  • Proficient in Spark SQL and data warehousing techniques using Hive
  • Very proficient in Unix shell scripting and in operating on Linux
  • Knowledge of cloud-based infrastructure
  • Good experience in tuning Spark applications and performance improvements
  • Strong understanding of data profiling concepts and the ability to operationalize analyses into design and development activities
  • Experience with best practices of software development: version control systems, automated builds, etc.
  • Experienced in and able to lead all phases of the Software Development Life Cycle on any project (feasibility planning, analysis, development, integration, test, and implementation)
  • Capable of working within a team or as an individual
  • Experience creating technical documentation
Job posted by
Sandeep Chaudhary