Introducing MLOps — Why we need it, and how to apply it in your company (1/3)

Sho
Dec 23, 2023



This is an archive of my tech talk `Introducing MLOps — Why we need it, and how to apply it in your company` at Code Chrysalis in September 2021.

Note that the technologies and concepts may have changed or improved since then, so please take this article with a grain of salt!

This tech talk archive summarizes what MLOps is, its usual components in a cloud-provider-generic sense, and a sample high-level workflow for applying it to a budding ML product from scratch.

Table of Contents

  1. ML Productionisation and MLOps (this article)
  2. Why do we need MLOps?
  3. How do you apply MLOps? + a simple flow

What this tech talk archive is/isn’t

  • This archive is introductory (and isn’t a deep-dive)
  • This archive isn’t going to sell you specific MLOps tools
  • This archive focuses on the high-level concepts of applying MLOps
  • This archive isn’t just for technical people (although it gets quite technical towards the end)

Self-introduction

I’m Sho Akiyama, an ML Engineering Manager in Tokyo who is also the creator of the programming-centric anime series Remote Startup Senpai. At the time of this tech talk, I was working at Retail AI where my relatively small team built a whole MLOps infrastructure from scratch for recommenders on a smart cart product.

You can find me on LinkedIn or Instagram!

ML Productionisation and MLOps

How are ML models productionised?

The direct answer is that it depends on how you define “productionised”. Even when your models are accessible to users through a simple service, you can call them productionised. It could be as simple as this:
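For illustration, here is a minimal sketch of such a service, assuming a scikit-learn model pickled to `model.pkl` and Flask as the web framework (both the file name and the framework are my assumptions for the example, not something prescribed in the talk):

```python
# A minimal "productionised" model: one script, one endpoint.
# Assumes a scikit-learn model pickled to model.pkl (hypothetical file name)
# and Flask as the web framework -- neither is specified in the original talk.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the trained model once at startup.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)


@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [[5.1, 3.5, 1.4, 0.2]]}
    features = request.get_json()["features"]
    prediction = model.predict(features).tolist()
    return jsonify({"prediction": prediction})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```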

Though this can be good enough for internal demos or investor presentations, it doesn’t serve much beyond a handful of users. As your application and product scale up, your team may need to start planning how to serve the models on cloud instances, support hundreds to thousands of users, and easily retrain and redeploy models.

As this can be a huge undertaking, one strategy to make it manageable is to separate the system into distinct phases:

Three Phases of MLOps
  1. The ML Phase — which involves research, data transformation, and model building
  2. The DEV Phase — the “classic engineering” so to speak, building the backend and frontend around the models
  3. The PROD Phase — the “show time” phase, where the models are served to users through deployments, which also includes monitoring and logging (a rough code sketch of this separation follows below)
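To make that separation a little more concrete, here is a hedged sketch of how the three phases might map onto code boundaries in a single Python script. Every name here (the iris dataset, `RandomForestClassifier`, `model.pkl`, the function names) is purely illustrative and not taken from the talk:

```python
# Hedged sketch: mapping the three phases onto explicit code boundaries.
# All file names, models, and function names are illustrative, not part of
# the original talk or any specific team's setup.
import logging
import pickle

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


def ml_phase() -> str:
    """ML Phase: transform data, train a model, persist the artifact."""
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    logging.info("holdout accuracy: %.3f", model.score(X_test, y_test))
    with open("model.pkl", "wb") as f:
        pickle.dump(model, f)
    return "model.pkl"


def dev_phase(artifact_path: str):
    """DEV Phase: wrap the artifact in application code (e.g. the Flask
    service sketched earlier would load this file)."""
    with open(artifact_path, "rb") as f:
        return pickle.load(f)


def prod_phase(model) -> None:
    """PROD Phase: serve predictions and emit logs/metrics for monitoring."""
    sample = [[5.1, 3.5, 1.4, 0.2]]
    logging.info("prediction for %s: %s", sample, model.predict(sample).tolist())


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    prod_phase(dev_phase(ml_phase()))
```

In a real system each phase would live in its own repository, pipeline, or service rather than one script; the point is only that the boundaries between the phases can be drawn explicitly.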

What, then, is MLOps?

Simply put, it is a set of best practices that aims to improve the reliability and efficiency of productionising ML pipelines.

It is the continuous improvement of an iterative process that spans from the early steps of research and model implementation (ML Phase), through frontend/backend/mobile/infra development (DEV Phase), to the final steps of serving the model to users and monitoring it (PROD Phase), and then back to the first steps for the next iteration.

Next Episode

Now that we’ve defined what MLOps is and how models are productionised, it’s time to dive into Part II: Why do we need MLOps?
