This Looks More Like That: Enhancing Self-Explaining Models by Prototypical Relevance Propagation

Date
2022
Volume
136
Journal
Pattern Recognition: The Journal of the Pattern Recognition Society
Publisher
Amsterdam : Elsevier
Abstract

Current machine learning models have shown high efficiency in solving a wide variety of real-world problems. However, their black-box character poses a major challenge for the comprehensibility and traceability of the underlying decision-making strategies. As a remedy, numerous post-hoc and self-explanation methods have been developed to interpret the models' behavior. These methods additionally enable the identification of artifacts that, inherent in the training data, may be erroneously learned by the model as class-relevant features. In this work, we provide a detailed case study of a representative of state-of-the-art self-explaining networks, ProtoPNet, in the presence of a spectrum of artifacts. Accordingly, we identify the main drawbacks of ProtoPNet, especially its coarse and spatially imprecise explanations. We address these limitations by introducing Prototypical Relevance Propagation (PRP), a novel method for generating more precise model-aware explanations. Furthermore, to obtain a clean, artifact-free dataset, we propose to use multi-view clustering strategies for segregating the artifact images using the PRP explanations, thereby suppressing potential artifact learning in the models.
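To illustrate the kind of backward relevance redistribution that layer-wise propagation methods such as PRP build on, the following is a minimal numpy sketch of a single LRP z+ backward step through one linear layer. The function name, shapes, and the stabilizer constant are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lrp_zplus(a, w, rel_out):
    """Redistribute output relevance to inputs via the LRP z+ rule (sketch).

    a:       non-negative input activations, shape (d,)
    w:       layer weights, shape (d, k)
    rel_out: relevance assigned to the k outputs, shape (k,)
    """
    wp = np.maximum(w, 0.0)        # z+ rule: only positive weights contribute
    z = a @ wp + 1e-9              # pre-activations, stabilized against div-by-zero
    s = rel_out / z                # relevance per unit of pre-activation
    return a * (s @ wp.T)          # each input receives its proportional share
```

Applied layer by layer from the prototype similarity scores back to the input pixels, such rules yield pixel-level relevance maps; the key property is (approximate) conservation, i.e. the total relevance entering a layer equals the total leaving it.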

Citation
Gautam, S., Höhne, M. M.-C., Hansen, S., Jenssen, R., & Kampffmeyer, M. (2022). This looks more like that: Enhancing self-explaining models by prototypical relevance propagation. Pattern Recognition, 136. Amsterdam: Elsevier. https://doi.org/10.1016/j.patcog.2022.109172
Collections
License
CC BY 4.0 International