Model interpretability

Model interpretability — Making your model confess: Shapley values

Introduction:

OOTB: Get it Out-Of-The-Box

Most commonly used methods for explainability:

1) Shapley values (a toy sketch of the underlying idea follows below)
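As a quick, purely illustrative refresher on what a Shapley value is (this toy sketch, including the made-up two-feature payoff function, is not from the original article): each feature's Shapley value is its contribution to the prediction, averaged over every possible coalition of the remaining features.

from itertools import combinations
from math import factorial

def exact_shapley(players, value):
    # Brute-force Shapley values by enumerating all coalitions (toy sizes only).
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (value(set(S) | {i}) - value(set(S)))
        phi[i] = total
    return phi

# Made-up payoff for coalitions of two "features" a and b.
payoff = {frozenset(): 0, frozenset({"a"}): 10, frozenset({"b"}): 20, frozenset({"a", "b"}): 50}
print(exact_shapley(["a", "b"], lambda S: payoff[frozenset(S)]))  # {'a': 20.0, 'b': 30.0}

In practice nobody enumerates coalitions like this; packages such as shap approximate the values or exploit model structure (as TreeExplainer does below), but the averaging idea is the same.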

Interpretation:
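The snippets in this section use a fitted classifier named model and a test DataFrame X_test that are not defined in this excerpt. A minimal setup sketch follows; the dataset, the model choice and the variable names are illustrative assumptions, not the article's actual pipeline. Note also that the per-class indexing shap_values[1] used below matches older shap releases, in which TreeExplainer.shap_values returns one array per class for classifiers; recent releases return a single array or Explanation object, so the indexing may need adjusting.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative data and model only: any tree-based binary classifier would do.
data = load_breast_cancer(as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.2, random_state=42)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)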

import shap

# Explain a fitted tree-based model (e.g. a random forest or gradient boosting)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global importance: mean absolute SHAP value per feature
shap.summary_plot(shap_values, X_test, plot_type="bar")

# Local explanation: force plot for one test instance, for Class 1
shap.initjs()  # loads the JS visualisation library needed for force plots in notebooks
instance_to_explain = 0
shap.force_plot(explainer.expected_value[1], shap_values[1][instance_to_explain], X_test.iloc[instance_to_explain])
Shapley values for an instance where the verb is used in the past tense rather than the present.
# explainer.expected_value[1] is the expected probability of an instance
# being classified as Class 1 (explainer.expected_value[0] gives the expected
# probability of Class 0). Every SHAP value pushes the prediction towards or
# away from this base expected probability, which is calculated over the
# dataset, not derived from the model itself.
explainer.expected_value[1]
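One way to make "contributes towards or against this base expected probability" concrete is the additivity property: for any single instance, the base value plus the sum of that instance's SHAP values reconstructs the model's output for Class 1. A small illustrative check (for a scikit-learn random forest this output is the predicted probability; for other models or explainer settings it may be log-odds):

import numpy as np

# Sketch: base value + sum of SHAP values for one instance should match the
# model's Class-1 output for that instance (up to floating-point error).
i = 0
reconstructed = explainer.expected_value[1] + np.sum(shap_values[1][i])
print(reconstructed, model.predict_proba(X_test.iloc[[i]])[0, 1])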
# One dependence plot per feature: SHAP value (Class 1) against the feature's value
features = X_test.columns.tolist()
for feat in features:
    shap.dependence_plot(feat, shap_values[1], X_test, dot_size=100)
Shapley values for the feature type_subv against its range of values
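By default dependence_plot also colours the points by the feature it estimates to interact most strongly with the one being plotted; the colouring feature can be set explicitly through interaction_index, or switched off. A small sketch, assuming type_subv is a column of X_test as the caption above suggests:

# Same dependence plot, but with the automatic interaction colouring disabled;
# passing a feature name instead of None would colour the points by that feature.
shap.dependence_plot("type_subv", shap_values[1], X_test, interaction_index=None, dot_size=100)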

Advantages

Drawbacks

Available packages:

Upcoming
