Model interpretability — Making your model confess: LIME
For an introduction to the topic of Model Interpretability, see:
In a previous post, I wrote about why checking model fairness is such a critical task. I also wrote a post showing the different methods available to do so, including Shapley Values and Feature Importance. Now it’s time for a new method: LIME.
LIME: Motivation
I showed that Shapley Values are conceptually great but require a lot of computing time. With more than a handful of features, computing the exact solution becomes intractable. This is why most packages, such as SHAP, implement an approximation method, since it is often the only feasible option. LIME, which stands for Local Interpretable Model-agnostic Explanations, tries to overcome this limitation by assuming that every complex model is linear on a local scale. In other words, if you “zoom in” enough on a single observation, a linear model may approximate the complex one. While this assumption is not formally justified in the original paper, it is generally sound in practice.
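To make the “zoom in” idea concrete, here is a minimal sketch of a local linear surrogate fit around a single observation. The dataset, the random forest black box, the Gaussian perturbation scheme, and the kernel width are all illustrative assumptions of mine, not the lime package itself:

```python
# A minimal sketch of LIME's core idea: fit a weighted linear surrogate
# around one observation of a black-box model. Dataset, model, and kernel
# width below are illustrative choices, not the official lime package.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

instance = X[0]                      # the single prediction we want to explain
rng = np.random.default_rng(0)

# 1. Perturb the instance: sample points in its neighbourhood.
perturbations = instance + rng.normal(scale=X.std(axis=0), size=(500, X.shape[1]))

# 2. Query the black-box model on the perturbed samples.
predictions = black_box.predict_proba(perturbations)[:, 1]

# 3. Weight each sample by its proximity to the original instance (RBF kernel).
distances = np.linalg.norm((perturbations - instance) / X.std(axis=0), axis=1)
kernel_width = 0.75 * np.sqrt(X.shape[1])
weights = np.exp(-(distances ** 2) / (kernel_width ** 2))

# 4. Fit an interpretable (linear) model on the weighted neighbourhood.
surrogate = Ridge(alpha=1.0).fit(perturbations, predictions, sample_weight=weights)

# The surrogate's coefficients are the local explanation for this prediction.
feature_names = load_breast_cancer().feature_names
for name, coef in sorted(zip(feature_names, surrogate.coef_),
                         key=lambda t: abs(t[1]), reverse=True)[:5]:
    print(f"{name}: {coef:+.3f}")
```

In practice you would rely on the lime package (for example its LimeTabularExplainer), which automates the perturbation, weighting, and surrogate fitting shown above.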
How LIME works
LIME seeks to explain an individual prediction from a model by choosing…