Author: Hubert Baniecki, Przemyslaw Biecek

Publisher: Proceedings of the AAAI Conference on Artificial Intelligence

Publication Year: 2022

Summary: This article examines SHapley Additive exPlanations (SHAP) values, a popular method for measuring variable importance in black-box machine learning models. The paper demonstrates that, despite its popularity, SHAP can be manipulated: a genetic algorithm shifts the data by adding white noise so that the importance of a chosen variable is inflated or deflated, while the model itself stays fixed. The authors demonstrate this technique on two datasets, one for heart disease detection and one for apartment price estimation, showing that they can deflate the SHAP value of the sex variable in the heart disease dataset and the SHAP value of square footage in the apartment pricing dataset. They conclude that this method should be used to check the stability of explanations (e.g., SHAP values) in the context of changes in data distributions.
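
The manipulation loop can be sketched roughly as follows. This is a minimal illustration of the idea described in the summary, not the authors' implementation: the model, the synthetic dataset, all hyperparameters, and the helper names (fitness, mutate, TARGET) are assumptions, and shap.TreeExplainer stands in for whichever explainer the paper uses. A candidate "individual" is a perturbed copy of the dataset, its fitness is the mean absolute SHAP value of the target feature, and mutation adds white noise.

```python
# Hypothetical sketch of a genetic algorithm that deflates the SHAP
# importance of one feature by perturbing the explained data with noise.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in model and data; the paper uses heart disease / apartment datasets.
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)

TARGET = 0  # index of the feature whose importance we try to deflate

def fitness(data):
    """Mean |SHAP| of the target feature on the data (lower = more deflated)."""
    return np.abs(explainer.shap_values(data)[:, TARGET]).mean()

def mutate(data, scale=0.1):
    """Shift the data by adding white noise, as described in the summary."""
    return data + rng.normal(0.0, scale, size=data.shape)

# Genetic loop: select the fittest perturbed datasets, breed them by
# row-wise crossover, and mutate the offspring.
population = [mutate(X) for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness)
    parents = population[:5]                       # selection
    children = []
    for _ in range(15):
        a, b = rng.choice(5, size=2, replace=False)
        mask = rng.random(X.shape[0]) < 0.5        # row-wise crossover
        child = np.where(mask[:, None], parents[a], parents[b])
        children.append(mutate(child))             # mutation
    population = parents + children

best = min(population, key=fitness)
print("original importance:   ", fitness(X))
print("manipulated importance:", fitness(best))
```

Flipping the objective (maximizing instead of minimizing fitness) would inflate rather than deflate the target variable's importance, matching the two attack directions the paper describes.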