Explain How Your Model Works Using Explainable AI

Semanur Kapusızoğlu · Published in Analytics Vidhya · Jan 7, 2021 · 6 min read

Can you explain how your model works?

Artificial intelligence techniques are used to solve real-world problems. We get the data and clean and prepare it for the steps that follow.

We basically take pieces of this world into the world of machines: we represent them with numbers and feed them to a bunch of models. We try to improve those models, and eventually “the winner model” gets the test data. Then a vital question comes to mind:

“How do we take this result back to the real world?”

Explainable AI (with a cooler name: XAI)

A formal definition: According to Wikipedia, Explainable AI refers to methods and techniques in the application of artificial intelligence technology such that the results of the solution can be understood by humans. [1]

In the early phases of AI adoption, it was okay not to understand why a model predicted the way it did, as long as it gave correct outputs. Explaining how models work was not the first priority. Now the focus is shifting toward building human-interpretable models.

Three important aspects of model interpretation are:
1. Transparency
2. The ability to question
3. The ease of understanding [2]

Model interpretability can be examined at two levels:

  • Global Interpretation: Examines the model from a broader perspective. For example, say we are working on a house price dataset and we implemented a neural network. A global interpretation might tell us, “Your model uses square footage as an important feature to derive predictions.”
  • Local Interpretation: As the name suggests, this approach focuses on a single observation/data point. Continuing the example, suppose the prediction for a really small house turned out to be high. A local interpretation looks at the other features and might say, “Your model predicted this because the house is located very close to the city center.” (A toy sketch contrasting the two views follows below.)
Source: Sri Ambati, Get Hands on MLI
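
To make the distinction concrete, here is a minimal sketch on synthetic data. The feature names (sqft, dist_km, age_years) and the price rule are hypothetical, and a plain linear model stands in for the neural network of the example because both views can be read directly from its coefficients.

```python
# Toy global-vs-local sketch on synthetic data (hypothetical feature names).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(30, 300, n),   # sqft: living area
    rng.uniform(0, 25, n),     # dist_km: distance to the city center
    rng.uniform(0, 60, n),     # age_years: building age
])
# Hypothetical price rule: bigger, newer, more central houses cost more.
y = 200_000 + 800 * X[:, 0] - 15_000 * X[:, 1] - 1_000 * X[:, 2] + rng.normal(0, 20_000, n)

features = ["sqft", "dist_km", "age_years"]
model = LinearRegression().fit(X, y)

# Global view: which features the model relies on overall (its coefficients).
print("global:", dict(zip(features, model.coef_.round())))

# Local view: per-feature contributions (coefficient * value) for one small but
# central house; the tiny distance penalty is roughly why its predicted price
# stays relatively high despite the small size.
house = np.array([45.0, 0.5, 10.0])
print("local:", dict(zip(features, (model.coef_ * house).round())))
print("prediction:", model.predict(house.reshape(1, -1)).round())
```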

The Trade-off Between Accuracy and Interpretability

In industry, you will often hear that business stakeholders tend to prefer more interpretable models such as linear models (linear/logistic regression) and decision trees, which are intuitive, easy to validate, and easy to explain to a non-expert in data science. [2]

In contrast, given the complex structure of real-life data, interest during model building and selection usually shifts toward more advanced models, since they are more likely to deliver better predictions.

Models like these (ensembles, neural networks, etc.) are called black-box models. As the model gets more advanced, it becomes harder to explain how it works. Inputs magically go into a box and voila! We get amazing results.

But, HOW?

When we present this model to stakeholders, will they completely trust it and immediately start using it? No. They will ask questions, and we should be ready to answer them.

Why should I trust your model?

Why did the model take a certain decision?

What drives model predictions?

We should aim to improve model accuracy without getting lost in the explanation; there should be a balance between the two.

Source: DPhi Advanced ML Bootcamp — Explainable AI [2]

Here, I would like to share a sentence from Dipanjan Sarkar’s Medium post about explainable AI:

Any machine learning model at its heart has a response function which tries to map and explain relationships and patterns between the independent (input) variables and the dependent (target or response) variable(s). [3]

So, models take inputs and process them to produce outputs. What if our data is biased? Then our model will be biased too, and therefore untrustworthy. It is important to understand and be able to explain our models so that we can trust their predictions, and maybe even detect and fix issues before presenting them to others.

There are various techniques for improving the interpretability of our models, some of which we already know and use. Traditional techniques include exploratory data analysis, visualizations, and model evaluation metrics. With their help we can get an idea of the model’s strategy, but they have limitations. To learn more about the traditional approaches and their limitations, check out this great article by Dipanjan Sarkar. [4]

Other model interpretation techniques and libraries have been developed to overcome these limitations. Some of them are:

  • LIME (Local Interpretable Model-Agnostic Explanations)
  • SHAP (Shapley Additive Explanations)
  • ELI5 (Explain Like I’m 5)
  • SKATER

These libraries use feature importance, partial dependence plots, and individual conditional expectation plots to explain less complex models such as linear regression, logistic regression, decision trees, etc.

Feature importance shows how important a feature is for the model. In other words: how does the error change when we remove (or scramble) the feature? If the error increases a lot, the feature is important for the model to predict the target variable.
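
As a hedged illustration of this idea, the sketch below uses scikit-learn’s permutation importance, which shuffles one feature at a time (rather than retraining the model without it) and measures how much the score drops. The California housing data and the random forest are illustrative choices, not models from this post.

```python
# Minimal permutation-importance sketch (illustrative data and model choices).
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# California housing data (downloaded by scikit-learn on first use).
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle each feature on the test set and record the average drop in score.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, drop in sorted(zip(X.columns, result.importances_mean),
                         key=lambda pair: pair[1], reverse=True):
    print(f"{name:>12}: {drop:.3f}")
```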

Source: Machine Learning Mastery, XGBoost Feature Importance Bar Chart

Partial dependence plots visualize the effect of changing a certain feature while everything else is held constant (with a cooler phrase: ceteris paribus). With their help we can spot a possible threshold value beyond which the model’s predictions move in the other direction. When we visualize partial dependence plots, we are examining the model globally.
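
A minimal sketch of how such a plot can be drawn with scikit-learn (version 1.0 or newer is assumed for PartialDependenceDisplay.from_estimator); the dataset and the gradient boosting model are illustrative choices.

```python
# Minimal partial dependence plot sketch (illustrative data and model choices).
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Average predicted house value as median income and house age vary,
# with the other features left at their observed values.
PartialDependenceDisplay.from_estimator(model, X, features=["MedInc", "HouseAge"])
plt.show()
```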

Source: Dipanjan (DJ) Sarkar, Model Interpretation Strategies

Individual conditional expectation (ICE) plots show the effect of changing a certain feature, just like partial dependence plots, but this time the point of view is local: we see the effect of the change for each individual instance in our data. A partial dependence plot is the average of the lines of an ICE plot. [5]
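
The same scikit-learn display can also draw ICE curves; a minimal sketch (again assuming scikit-learn 1.0 or newer) is below. Passing kind="both" overlays one thin curve per instance with their average, which is exactly the partial dependence line.

```python
# Minimal ICE plot sketch (illustrative data and model choices).
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# One thin line per sampled house, one thick line for their average (the PDP).
PartialDependenceDisplay.from_estimator(
    model, X.sample(200, random_state=0), features=["MedInc"], kind="both"
)
plt.show()
```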

Source: Christoph Molnar, Interpretable Machine Learning: A Guide for Making Black Box Models Explainable

When it comes to explaining more advanced models, model-agnostic techniques (techniques that do not depend on the specific model) are used.

A global surrogate model takes the original inputs together with your black-box model’s predictions. When this new dataset is used to train and test an appropriate surrogate (a more interpretable model such as a linear model or a decision tree), the surrogate basically tries to mimic your black-box model’s predictions. By interpreting and visualizing this “easier” model, we get a better understanding of why our actual model predicts the way it does.
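
A hedged sketch of the idea: train any black-box model, collect its predictions, fit a shallow decision tree on those predictions instead of the true targets, and check how faithfully the tree mimics the black box. The specific dataset and models below are illustrative assumptions.

```python
# Minimal global surrogate sketch (illustrative data and model choices).
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = fetch_california_housing(return_X_y=True, as_frame=True)

black_box = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
black_box_preds = black_box.predict(X)

# The surrogate is trained on the black box's predictions, not on the true prices.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, black_box_preds)

# Fidelity: how well the simple tree reproduces the black-box predictions.
print("surrogate R^2 vs. black box:", r2_score(black_box_preds, surrogate.predict(X)))
print(export_text(surrogate, feature_names=list(X.columns)))
```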

Other interpretability tools are the LIME, SHAP, ELI5, and SKATER libraries. We will talk about them in the next post, through a guided implementation. Until then, I am sharing some great resources I used while writing this post, along with some extra links. Stay tuned for the next post, see you there!

Happy learning!

REFERENCES

[1] Wikipedia, Explainable AI, https://en.wikipedia.org/wiki/Explainable_artificial_intelligence

[2] DPhi Tech, Explainable AI Course, https://dphi.tech/lms/learn/explainable-ai/563

[3] Dipanjan (DJ) Sarkar, The Importance of Human Interpretable Machine Learning, https://towardsdatascience.com/human-interpretable-machine-learning-part-1-the-need-and-importance-of-model-interpretation-2ed758f5f476

[4] Dipanjan (DJ) Sarkar, Model Interpretation Strategies, https://towardsdatascience.com/explainable-artificial-intelligence-part-2-model-interpretation-strategies-75d4afa6b739

[5] Christoph Molnar, Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, 2019, https://christophm.github.io/interpretable-ml-book/

Originally published at https://www.analyticsvidhya.com on January 7, 2021.


Semanur Kapusızoğlu

Hi! I’m an industrial engineer passionate about data science and machine learning. I’m here because the best way to learn something is to teach it. Hope you enjoy!