ML Ops Engineer

at Top Management Consulting Company

Gurugram, Bengaluru (Bangalore), Chennai
2 - 9 yrs
₹9L - ₹27L / yr
Full time
Skills
DevOps
Microsoft Windows Azure
GitLab
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Docker
Kubernetes
Jenkins
GitHub
Git
Python
MySQL
PostgreSQL
SQL server
Oracle
Terraform
Argo
Airflow
Kubeflow
Machine Learning (ML)
Greetings!

We are looking for a technically driven "MLOps Engineer" for one of our premium clients.

COMPANY DESCRIPTION:
This company is a global management consulting firm and a trusted advisor to the world's leading businesses, governments, and institutions. We work with leading organizations across the private, public and social sectors. Our scale, scope, and knowledge allow us to address problems that no one else can.


Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise in setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., for experiment tracking, model governance, packaging, deployment, or a feature store)
• Practical knowledge of delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or higher preferred) and familiarity with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
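
As an illustrative sketch of the automated-testing requirement above: pytest discovers functions named `test_*` and runs the bare asserts inside them. The function and values here are hypothetical examples, not part of the role.

```python
# Minimal pytest-style example: pytest collects test_* functions and
# reports any failing assert.

def normalize_scores(scores):
    """Scale a list of non-negative numbers so they sum to 1.0."""
    total = sum(scores)
    if total == 0:
        raise ValueError("scores must not sum to zero")
    return [s / total for s in scores]

def test_normalize_scores_sums_to_one():
    result = normalize_scores([1, 1, 2])
    assert result == [0.25, 0.25, 0.5]
    assert abs(sum(result) - 1.0) < 1e-9
```

Running `pytest` in the containing directory would collect and run this test.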

Similar jobs

Data Engineer - AWS

at A global business process management company

Agency job
via Jobdost
Data engineering
Data modeling
data pipeline
Data integration
Data Warehouse (DWH)
Data engineer
AWS RDS
Glue
AWS CloudFormation
Amazon Web Services (AWS)
DevOps
AWS Lambda
Python
Django
Data Pipeline
Step functions
RDS
Gurugram, Pune, Mumbai, Bengaluru (Bangalore), Chennai, Nashik
4 - 12 yrs
₹12L - ₹15L / yr

Designation – Deputy Manager - TS


Job Description

  1. Total of 8-9 years of development experience in Data Engineering (B1/BII role).
  2. Minimum of 4-5 years in AWS data integrations, with very good data modelling skills.
  3. Very proficient in end-to-end AWS data solution design, including strong data ingestion and integration skills (both data at rest and data in motion) as well as complete DevOps knowledge.
  4. Experience delivering at least 4 Data Warehouse or Data Lake solutions on AWS.
  5. Very strong experience with Glue, Lambda, Data Pipeline, Step Functions, RDS, CloudFormation, etc.
  6. Strong Python skills.
  7. Expert in cloud design principles, performance tuning and cost modelling; AWS certifications are an added advantage.
  8. A team player with excellent communication skills, able to manage their work independently with minimal or no supervision.
  9. Life Science & Healthcare domain background is a plus.

Qualifications

BE/BTech/ME/MTech

 

Job posted by
Saida Jabbar

Data Engineer

at BDIPlus

Founded 2014  •  Product  •  100-500 employees  •  Profitable
Spark
Hadoop
Big Data
Data engineering
PySpark
SQL
NOSQL Databases
Amazon Web Services (AWS)
Bengaluru (Bangalore)
1 - 3 yrs
₹3L - ₹6L / yr
Experience in a Data Engineer role, with a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. Candidates should also have experience with the following software/tools:
● Big data tools: Hadoop, Hive, Spark, Kafka, etc.
● Querying multiple SQL/NoSQL databases, including Oracle, MySQL and MongoDB
● Experience with Redis, RabbitMQ and Elasticsearch is desirable
● Strong experience with object-oriented/functional/scripting languages: Python (preferred), Core Java, JavaScript, Scala, Shell scripting, etc.
● Strong skills in debugging complex code; experience with ML/AI algorithms is a plus
● Experience with a version control tool such as Git is mandatory
● AWS cloud services: EC2, EMR, RDS, Redshift, S3
● Stream-processing systems: Storm, Spark Streaming, etc.
Job posted by
Puja Kumari

Data Warehousing Engineer - Big Data/ETL

at Marktine

Founded 2014  •  Products & Services  •  20-100 employees  •  Bootstrapped
Big Data
ETL
PySpark
SSIS
Microsoft Windows Azure
Data Warehouse (DWH)
Python
Amazon Web Services (AWS)
Informatica
Remote, Bengaluru (Bangalore)
3 - 10 yrs
₹5L - ₹15L / yr

Must Have Skills:

- Solid knowledge of DWH, ETL and Big Data concepts

- Excellent SQL skills (with knowledge of SQL analytics functions)

- Working experience with an ETL tool, e.g., SSIS or Informatica

- Working experience with Azure or AWS big data tools

- Experience implementing data jobs (batch / real-time streaming)

- Excellent written and verbal communication skills in English; self-motivated with a strong sense of ownership and ready to learn new tools and technologies
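
As a sketch of the SQL analytics-functions requirement above, a window function can compute a per-group running total. The table and column names below are made up for illustration, and the example assumes a bundled SQLite of version 3.25 or later (which added window function support).

```python
import sqlite3

# Hypothetical sales table; a window function computes a per-region running total.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, day INTEGER, amount INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("north", 1, 10), ("north", 2, 20), ("south", 1, 5)],
)
rows = conn.execute(
    """
    SELECT region, day,
           SUM(amount) OVER (PARTITION BY region ORDER BY day) AS running_total
    FROM sales
    ORDER BY region, day
    """
).fetchall()
print(rows)  # [('north', 1, 10), ('north', 2, 30), ('south', 1, 5)]
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` pattern works in SQL Server, Postgres and most other RDBMSs.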

Preferred Skills:

- Experience with PySpark / Spark SQL

- AWS Data Tools (AWS Glue, AWS Athena)

- Azure Data Tools (Azure Databricks, Azure Data Factory)

Other Skills:

- Knowledge of Azure Blob, Azure File Storage, AWS S3, Elasticsearch / RediSearch

- Knowledge of the domain/function (across pricing, promotions and assortment)

- Implementation experience with schema and data validator frameworks (Python / Java / SQL)

- Knowledge of DQS and MDM

Key Responsibilities:

- Independently work on ETL / DWH / Big data Projects

- Gather and process raw data at scale.

- Design and develop data applications using selected tools and frameworks as required and requested.

- Read, extract, transform, stage and load data to selected tools and frameworks as required and requested.

- Perform tasks such as writing scripts, web scraping, calling APIs, writing SQL queries, etc.

- Work closely with the engineering team to integrate your work into our production systems.

- Process unstructured data into a form suitable for analysis.

- Analyse processed data.

- Support business decisions with ad hoc analysis as needed.

- Monitor data performance and modify infrastructure as needed.

Responsibility: A smart resource with excellent communication skills.
Job posted by
Vishal Sharma

Data Engineer ( Only Immediate)

at StatusNeo

Founded 2020  •  Products & Services  •  100-1000 employees  •  Profitable
Data engineering
Data Engineer
Python
Big Data
Spark
Scala
Remote only
2 - 15 yrs
₹2L - ₹70L / yr
● Proficiency in engineering practices and writing high-quality code, with expertise in at least one of Java, Scala or Python
● Experience with Big Data technologies (Hadoop/Spark/Hive/Presto/HBase) & streaming platforms (Kafka/NiFi/Storm)
● Experience with distributed search (Solr/Elasticsearch), in-memory data grids (Redis/Ignite), cloud-native apps and Kubernetes is a plus
● Experience building REST services and APIs following best practices of service abstraction and microservices; experience with orchestration frameworks is a plus
● Experience with Agile methodology and CI/CD: tool integration, automation, configuration management
● Being a committer on one of the open-source Big Data technologies (Spark, Hive, Kafka, Yarn, Hadoop/HDFS) is an added advantage
Job posted by
Alex P

Machine Learning Engineer

at IDfy

Founded 2011  •  Products & Services  •  100-1000 employees  •  Raised funding
Machine Learning (ML)
Python
TensorFlow
PyTorch
Scikit-Learn
Elixir
Mumbai, Pune
1 - 3 yrs
₹6L - ₹14L / yr
About the team
● The machine learning team is a self-contained team of 9 people responsible for building models and services that support key workflows for IDfy.
● Our models are gating criteria for these workflows and as such are expected to perform accurately and quickly. We use a mix of conventional and hand-crafted deep learning models.
● The team comes from diverse backgrounds and experiences. We have ex-bankers, startup founders, IIT-ians, and more.
● We work directly with business and product teams to craft solutions for our customers. We know that we are, and function as, a platform company and not a services company.

What you will do
● Work on all aspects of a production machine learning system: acquiring data, training and building models, deploying models, building API services to expose these models, maintaining them in production, and more.
● Work on performance tuning of models
● From time to time work on support and debugging of these production systems
● Work on researching the latest technology in the areas of our interest and applying it to build newer products and enhancement of the existing platform.
● Building workflows for training and production systems
● Contribute to documentation

About you

● You are an early-career machine learning engineer (or data scientist). Our ideal candidate is
someone with 1-3 years of experience in data science.

Must Haves

● You have a good understanding of Python and Scikit-learn, TensorFlow, or PyTorch. Our systems are built with these tools/languages and we expect a strong base in them.
● You are proficient at exploratory analysis and know which model to use in most scenarios
● You should have worked on framing and solving problems with the application of machine learning or deep learning models.
● You have some experience in building and delivering complete or part AI solutions
● You appreciate that the role of the Machine Learning engineer is not only modeling, but also building product solutions and you strive towards this.
● Enthusiasm and drive to learn and assimilate state-of-the-art research. A lot of what we are building will require innovative approaches using newly researched models and applications.

Good to Have

● Knowledge of and experience in computer vision. While a large part of our work revolves around computer
vision, we believe this is something you can learn on the job.
● We build our own services, hence we would want you to have some knowledge of writing APIs.
● Our stack also includes languages like Ruby, Go, and Elixir. We would love it if you know any of these or take an interest in functional programming.
● Knowledge of and experience in ML Ops and tooling would be a welcome addition. We use Docker and Kubernetes for deploying our services.
Job posted by
Rati from
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Deep Learning
Python
TensorFlow
PyTorch
Amazon Web Services (AWS)
Gurugram
4 - 8 yrs
₹15L - ₹25L / yr

About the Company:

This opportunity is for an AI Drone Technology startup funded by the Indian Army. It is working to develop cutting-edge products to help the Indian Army gain an edge in New Age Enemy Warfare.

They are working on using drones to neutralize terrorists hidden in deep forests. Get a chance to contribute to securing our borders against the enemy.

Responsibilities:

  • Extensive knowledge in machine learning and deep learning techniques
  • Solid background in image processing/computer vision
  • Experience in building datasets for computer vision tasks
  • Experience working with and creating data structures/architectures
  • Proficiency in at least one major machine learning framework such as Tensorflow, Pytorch
  • Experience visualizing data to stakeholders
  • Ability to analyze and debug complex algorithms
  • Highly skilled in Python scripting language
  • Creativity and curiosity for solving highly complex problems
  • Excellent communication and collaboration skills

 

Educational Qualification:

MS in Engineering, Applied Mathematics, Data Science, Computer Science or an equivalent field with 3 years of industry experience; or a PhD degree or equivalent industry experience.

Job posted by
Ankit Bansal

Artificial Intelligence Intern

at Bytelearn

Founded 2019  •  Product  •  20-100 employees  •  Raised funding
Artificial Intelligence (AI)
Natural Language Processing (NLP)
Algorithms
Data Structures
Machine Learning (ML)
Data Science
Deep Learning
Remote only
0 - 1 yrs
₹4L - ₹4.8L / yr
1. Develop novel computer vision/NLP algorithms
2. Build large datasets that will be used to train the models
3. Empirically evaluate related research works
4. Train and evaluate deep learning architectures on multiple large scale datasets
5. Collaborate with the rest of the research team to produce high-quality research
Job posted by
Prachi Rathi

Predictive Modelling And Optimization Consultant (SCM)

at BRIDGEi2i Analytics Solutions

Founded 2011  •  Products & Services  •  100-1000 employees  •  Profitable
R Programming
Data Analytics
Predictive modelling
Supply Chain Management (SCM)
SQL
MySQL
Python
Statistical Modeling
Supply chain optimization
Bengaluru (Bangalore)
4 - 10 yrs
₹9L - ₹15L / yr

The person holding this position is responsible for leading the solution development and implementing advanced analytical approaches across a variety of industries in the supply chain domain.

In this position, you act as an interface between the delivery team and the supply chain team, effectively understanding the client's business and supply chain.

Candidates will be expected to lead projects across several areas, such as:

  • Demand forecasting
  • Inventory management
  • Simulation & Mathematical optimization models.
  • Procurement analytics
  • Distribution/Logistics planning
  • Network planning and optimization
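
A minimal sketch of the demand-forecasting area above: a simple moving average forecasts the next period as the mean of recent observations. The window size and demand figures are illustrative only; real engagements would use richer statistical models in R/Python.

```python
def moving_average_forecast(demand, window=3):
    """Forecast the next period's demand as the mean of the last `window` observations."""
    if len(demand) < window:
        raise ValueError("need at least `window` observations")
    return sum(demand[-window:]) / window

history = [100, 120, 110, 130, 125]  # illustrative monthly demand
forecast = moving_average_forecast(history)  # mean of the last 3 observations
```

More sophisticated approaches (exponential smoothing, ARIMA, ensemble ML) follow the same shape: fit on history, predict the next period.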

 

Qualification and Experience

  • 4+ years of analytics experience in supply chain, preferably in industries such as hi-tech, consumer technology, CPG, automobile, retail or e-commerce supply chain.
  • Master's in Statistics/Economics, or MBA, or M.Sc./M.Tech with Operations Research/Industrial Engineering/Supply Chain.
  • Hands-on experience in delivering projects using statistical modelling.

Skills / Knowledge

  • Hands-on experience with statistical modelling software such as R/Python and SQL.
  • Experience in advanced analytics / statistical techniques (regression, decision trees, ensemble machine learning algorithms, etc.) will be considered an added advantage.
  • Highly proficient with Excel, PowerPoint and Word.
  • APICS-CSCP or PMP certification will be an added advantage.
  • Strong knowledge of supply chain management.
  • Working knowledge of linear/nonlinear optimization.
  • Ability to structure problems through a data-driven decision-making process.
  • Excellent project management skills, including time and risk management and project structuring.
  • Ability to identify and draw on leading-edge analytical tools and techniques to develop creative approaches and new insights into business issues through data analysis.
  • Ability to liaise effectively with multiple stakeholders and functional disciplines.
  • Experience with optimization tools like CPLEX, ILOG or GAMS will be an added advantage.
Job posted by
Venniza Glades

Data Engineer

at Rely

Founded 2018  •  Product  •  20-100 employees  •  Raised funding
Python
Hadoop
Spark
Amazon Web Services (AWS)
Big Data
Amazon EMR
RabbitMQ
Bengaluru (Bangalore)
2 - 10 yrs
₹8L - ₹35L / yr

Intro

Our data and risk team is the core pillar of our business, harnessing alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops and maintains a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia to be effortlessly in control of their spending and make better decisions.


What will you do
The data engineer is focused on making data correct and accessible, and building scalable systems to access/process it. Another major responsibility is helping AI/ML Engineers write better code.

• Optimize and automate ingestion processes for a variety of data sources, such as clickstream, transactional and many others.

  • Create and maintain optimal data pipeline architecture and ETL processes
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Develop data pipeline and infrastructure to support real-time decisions
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS "big data" technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.


What will you need
• 2+ years of hands-on experience building and implementing large-scale production pipelines and Data Warehouses
• Experience dealing with large-scale data

  • Proficiency in writing and debugging complex SQL
  • Experience working with AWS big data tools
  • Ability to lead the project and implement best data practices and technology

Data Pipelining

  • Strong command of building & optimizing data pipelines, architectures and data sets
  • Strong command of relational SQL & NoSQL databases, including Postgres
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
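
The workflow tools named above (Azkaban, Luigi, Airflow) all reduce to one core idea: run tasks in dependency order over a DAG. A toy sketch in plain Python using the standard-library `graphlib` (the task names are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: extract feeds transform and validate; load needs both.
# Keys are tasks, values are the sets of tasks they depend on.
dag = {
    "transform": {"extract"},
    "validate": {"extract"},
    "load": {"transform", "validate"},
}

# static_order() yields tasks so every task runs after its dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)  # 'extract' comes first, 'load' last
```

Real schedulers add retries, backfills and parallelism on top of exactly this ordering.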

Big Data: Strong experience in big data tools & applications

  • Tools: Hadoop, Spark, HDFS, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • Stream-processing systems: Storm, Spark Streaming, Flink, etc.
  • Message queuing: RabbitMQ, etc.

Software Development & Debugging

  • Strong experience with object-oriented/functional/scripting languages: Python, Java, C++, Scala, etc.
  • Strong grasp of data structures & algorithms

What would be a bonus

  • Prior experience working in a fast-growth Startup
  • Prior experience in the payments, fraud, lending, advertising companies dealing with large scale data
Job posted by
Hizam Ismail

SQL server DB

at TekClan

Founded 2018  •  Products & Services  •  20-100 employees  •  Profitable
SQL Server Reporting Services (SSRS)
SQL server
MS SQLServer
SSIS
Chennai
2 - 7 yrs
₹4L - ₹9L / yr
As a database developer, you will deliver SQL Server database solutions to support a growing suite of applications. You must be able to work in a fast-paced environment and learn fast with little guidance. You will be responsible for design, development, implementation and support of database code. You will develop ETL (SSIS), SSRS, T-SQL and PowerShell processes upon new and legacy systems. You will develop new solutions and provide automation to reduce manual tasks. Qualified candidates should possess the ability to perform at a high level under aggressive timelines and complex solutions.

- 3+ years of SQL development in a large enterprise environment
- Should have developed complex database code with T-SQL, SSIS, SSRS and SQL Server best practices to support UI, middleware and batch applications
- Familiar with Agile development methodology
- Experience with SVN is a plus
- Experience with Business Intelligence is a plus
- Experience with Tableau is a huge plus
- Experience with PowerShell is a plus
- Exposure to data modelling is a plus
Job posted by
ANANDHA BIRUNDHA T