Shap-CAM: Visual Explanations for Convolutional Neural Networks Based on Shapley Value
Quan Zheng, Ziwei Wang, Jie Zhou, Jiwen Lu
Abstract
"Explaining deep convolutional neural networks has been recently drawing increasing attention since it helps to understand the networks’ internal operations and why they make certain decisions. Saliency maps, which emphasize salient regions largely connected to the network’s decision-making, are one of the most common ways for visualizing and analyzing deep networks in the computer vision community. However, saliency maps generated by existing methods cannot represent authentic information in images due to the unproven proposals about the weights of activation maps which lack solid theoretical foundation and fail to consider the relations between each pixels. In this paper, we develop a novel post-hoc visual explanation method called Shap-CAM based on class activation mapping. Unlike previous gradient-based approaches, Shap-CAM gets rid of the dependence on gradients by obtaining the importance of each pixels through Shapley value. We demonstrate that Shap-CAM achieves better visual performance and fairness for interpreting the decision making process. Our approach outperforms previous methods on both recognition and localization tasks."