Model interpretability — Making your model confess: Feature Importance

Facundo Santiago
7 min read · Feb 1, 2020

Continuing the series of posts about model interpretability, it is time to talk about a different method for explaining model predictions: Feature Importance, or more precisely, Permutation Feature Importance. It belongs to the family of model-agnostic methods, which, as explained before, don't rely on any particularity of the model we want to interpret. The idea is simple: randomly shuffle the values of a single feature and measure how much the model's performance degrades; the larger the drop, the more the model depends on that feature.
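
To make the mechanics concrete, here is a minimal sketch of permutation importance. The breast-cancer dataset, random forest, and accuracy metric are illustrative choices for this example, not the setup used in the series; any fitted model and scoring function would work the same way.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Illustrative data and model; permutation importance treats the model as a black box.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Score on held-out data before any permutation.
baseline = accuracy_score(y_test, model.predict(X_test))

rng = np.random.default_rng(0)
importances = []
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    rng.shuffle(X_perm[:, j])  # break the association between feature j and the target
    score = accuracy_score(y_test, model.predict(X_perm))
    importances.append(baseline - score)  # drop in score = importance of feature j

For everyday use, scikit-learn ships this algorithm as sklearn.inspection.permutation_importance, which repeats the shuffle several times (n_repeats) and reports the mean and standard deviation of the score drops rather than a single noisy estimate.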

For an introduction to the subject and other methods for interpretability check…
