Effortless model deployment with MLflow — Customizing inference

Save your machine learning models in an open-source format with MLflow to unlock an effortless deployment experience later. This post covers customizing the inference.

Facundo Santiago
10 min readMar 16, 2022

Welcome back to the series Effortless model deployment with MLflow! If you just joined the party, check out the other posts in the series.

In my previous post, Effortless model deployment with MLflow, we reviewed how persisting your model in an open-source specification format like MLmodel gives you great flexibility when deploying models to production.

As a recap, taking the example we saw for a model created with FastAI, we can log the model in MLflow like this:

mlflow.fastai.log_model(model, "classifier"…

