Layer-Wise Relevance Propagation with Conservation Property for ResNet

Seitaro Otsuki, Tsumugi Iida, Félix Doublet, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi, Komei Sugiura

Abstract


"The transparent formulation of explanation methods is essential for elucidating the predictions of neural networks, which are typically black-box models. Layer-wise Relevance Propagation (LRP) is a well-established method that transparently traces the flow of a model’s prediction backward through its architecture by backpropagating relevance scores. However, the conventional LRP does not fully consider the existence of skip connections, and thus its application to the widely used ResNet architecture has not been thoroughly explored. In this study, we extend LRP to ResNet models by introducing Relevance Splitting at points where the output from a skip connection converges with that from a residual block. Our formulation guarantees the conservation property throughout the process, thereby preserving the integrity of the generated explanations. To evaluate the effectiveness of our approach, we conduct experiments on ImageNet and the Caltech-UCSD Birds-200-2011 dataset. Our method achieves superior performance to that of baseline methods on standard evaluation metrics such as the Insertion-Deletion score while maintaining its conservation property. We will release our code for further research at https://5ei74r0.github.io/ lrp-for-resnet.page/"
