Towards Robust Interpretability with Self-Explaining Neural Networks
NeurIPS 2018

TL;DR
Instead of explaining models after the fact, why not build interpretability in from the start? We introduce Self-Explaining Neural Networks that generate faithful, stable explanations as part of their architecture—no post-hoc approximations needed.
Abstract
Most recent work on interpretability of complex machine learning models has focused on estimating a posteriori explanations for previously trained models around specific predictions. Self-explaining models where interpretability plays a key role already during learning have received much less attention. We propose three desiderata for explanations in general—explicitness, faithfulness, and stability—and show that existing methods do not satisfy them. In response, we design self-explaining models in stages, progressively generalizing linear classifiers to complex yet architecturally explicit models. Faithfulness and stability are enforced via regularization specifically tailored to such models.
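To make the construction concrete, here is a minimal sketch of the generalized-linear form described above, written in PyTorch and assuming the simplest case where the concepts are the raw input features, h(x) = x. The prediction is f(x) = theta(x)^T h(x), and a gradient penalty pushes grad_x f(x) toward theta(x) so that the relevance scores stay faithful to the model and locally stable. This is an illustrative sketch, not the authors' reference implementation; the names SelfExplainingNet and stability_penalty, the network sizes, and the regularization weight are assumptions made for the example.

```python
# Sketch of a SENN-style model (illustrative, assumes PyTorch).
# Concepts are taken to be the raw features, h(x) = x, for simplicity.
import torch
import torch.nn as nn


class SelfExplainingNet(nn.Module):
    """f(x) = theta(x)^T h(x): a linear model whose coefficients theta(x)
    are themselves produced by a neural network."""

    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        # theta(x): one relevance score per concept (here, per input feature)
        self.theta = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, in_dim)
        )

    def forward(self, x):
        relevances = self.theta(x)             # theta(x), shape (batch, d)
        logits = (relevances * x).sum(dim=-1)  # f(x) = theta(x)^T h(x), h(x) = x
        return logits, relevances


def stability_penalty(model, x):
    """Gradient-based regularizer: push grad_x f(x) toward theta(x), so the
    model behaves locally like a linear model in the concepts and the
    explanation theta(x) is faithful to the model's behavior."""
    x = x.clone().requires_grad_(True)
    logits, relevances = model(x)
    grad_f = torch.autograd.grad(logits.sum(), x, create_graph=True)[0]
    return ((grad_f - relevances) ** 2).sum(dim=-1).mean()


# Usage sketch: combine the task loss with the stability regularizer.
if __name__ == "__main__":
    model = SelfExplainingNet(in_dim=10)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(32, 10)
    y = torch.randint(0, 2, (32,)).float()
    logits, _ = model(x)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, y)
    loss = loss + 1e-2 * stability_penalty(model, x)  # weight is a hyperparameter
    loss.backward()
    opt.step()
```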
Citation
@inproceedings{alvarez2018towards,
  title     = {Towards Robust Interpretability with Self-Explaining Neural Networks},
  author    = {Alvarez-Melis, David and Jaakkola, Tommi S},
  booktitle = {Advances in Neural Information Processing Systems},
  volume    = {31},
  year      = {2018}
}