Effortless model deployment with MLflow — Customizing inference
Save your machine learning models in an open-source format with MLflow to unlock an effortless deployment experience later. This installment covers customizing inference.
10 min read · Mar 16, 2022
Welcome back to the series Effortless model deployment with MLflow! If you just joined the party, check out the other posts in the series:
- MLflow: Introduction to the MLModel specification.
- Customizing inference with MLflow (this post)
- Packaging models with multiple pieces: deploying a recommender system.
- Packaging models with multiple assets: deploying a HuggingFace NLP model for classification.
- Packaging stratified models (many models): deploying a partitioned model for demand forecasting.
In my previous post, Effortless model deployment with MLflow, we reviewed how persisting your model in an open-source specification format like MLModel gives you great flexibility when deploying models to production.
As a recap, taking the example we saw for a model created with fastai, we can log the model in MLflow like this:
mlflow.fastai.log_model(model, "classifier")
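Once logged this way, the model can be loaded back through the generic pyfunc interface, independently of the fastai flavor it was trained with, which is where the deployment flexibility comes from. Here is a minimal sketch; the run ID and the input DataFrame are hypothetical placeholders, and the exact input schema depends on your model:
import mlflow.pyfunc
import pandas as pd
# Replace <run_id> with the ID of the run that logged the model;
# "classifier" is the artifact path used in log_model above.
model = mlflow.pyfunc.load_model("runs:/<run_id>/classifier")
# Every pyfunc model exposes the same predict() interface, regardless of
# the underlying framework. input_df is a placeholder for your real data.
input_df = pd.DataFrame()  # hypothetical input
predictions = model.predict(input_df)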