On Relating Why and Why not explanations - Archive ouverte HAL
Preprint, 2021

On Relating Why and Why not explanations


Abstract

Explanations of Machine Learning (ML) models often address a 'Why?' question. Such explanations can be related to selecting feature-value pairs which are sufficient for the prediction. Recent work has investigated explanations that address a 'Why Not?' question, i.e. finding a change of feature values that guarantees a change of prediction. Given their goals, these two forms of explaining predictions of ML models appear to be mostly unrelated. However, this paper demonstrates otherwise, and establishes a rigorous formal relationship between 'Why?' and 'Why Not?' explanations. Concretely, the paper proves that, for any given instance, 'Why?' explanations are minimal hitting sets of 'Why Not?' explanations and vice versa. Furthermore, the paper devises novel algorithms for extracting and enumerating both forms of explanations.
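The hitting-set duality stated in the abstract can be checked by brute force on a small example. The sketch below uses a toy Boolean classifier of my own choosing (it is not from the paper), enumerates all minimal 'Why?' explanations (subsets of features that, when fixed to their instance values, force the prediction) and all minimal 'Why Not?' explanations (subsets of features whose change can flip the prediction), and verifies that each family consists of the minimal hitting sets of the other. Function and variable names are illustrative assumptions.

```python
from itertools import chain, combinations, product

def predict(x):
    # Toy Boolean classifier (an assumption for illustration, not the paper's model).
    return int(x[0] and (x[1] or x[2]))

FEATS = (0, 1, 2)
INSTANCE = (1, 1, 1)  # predicted 1

def subsets(items):
    # All subsets of a feature tuple, smallest first.
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def is_sufficient(fixed):
    # 'Why?': fixing the features in `fixed` to their instance values
    # forces the prediction, no matter how the free features vary.
    free = [f for f in FEATS if f not in fixed]
    for vals in product([0, 1], repeat=len(free)):
        x = list(INSTANCE)
        for f, v in zip(free, vals):
            x[f] = v
        if predict(x) != predict(INSTANCE):
            return False
    return True

def is_contrastive(changed):
    # 'Why Not?': some assignment to the features in `changed`
    # flips the prediction.
    for vals in product([0, 1], repeat=len(changed)):
        x = list(INSTANCE)
        for f, v in zip(changed, vals):
            x[f] = v
        if predict(x) != predict(INSTANCE):
            return True
    return False

def minimal(sets):
    # Keep only subset-minimal members of a family of sets.
    return [s for s in sets if not any(t < s for t in sets)]

def minimal_hitting_sets(family):
    # Minimal sets intersecting every member of `family`.
    hits = [set(h) for h in subsets(FEATS) if all(set(h) & s for s in family)]
    return minimal(hits)

axps = minimal([set(s) for s in subsets(FEATS) if is_sufficient(s)])
cxps = minimal([set(s) for s in subsets(FEATS) if is_contrastive(s)])

# The duality from the abstract, on this toy model:
assert sorted(map(sorted, axps)) == sorted(map(sorted, minimal_hitting_sets(cxps)))
assert sorted(map(sorted, cxps)) == sorted(map(sorted, minimal_hitting_sets(axps)))
```

For this toy model the 'Why?' explanations come out as {x0, x1} and {x0, x2}, and the 'Why Not?' explanations as {x0} and {x1, x2}; each family is exactly the set of minimal hitting sets of the other. The paper's algorithms avoid such exhaustive enumeration, which only scales to a handful of features.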

Dates and versions

hal-03159201, version 1 (04-03-2021)

Licence

Attribution - CC BY 4.0


Cite

Alexey Ignatiev, Nina Narodytska, Nicholas Asher, João Marques Silva. On Relating Why and Why not explanations. 2021. ⟨hal-03159201⟩