Fair and Adequate Explanations - Archive ouverte HAL
Conference Papers, Year: 2021

Fair and Adequate Explanations

Nicholas Asher, Soumya Paul, Chris Russell

Abstract

Recent efforts have uncovered various methods for providing explanations that can help interpret the behavior of machine learning programs. Exact explanations with a rigorous logical foundation provide valid and complete explanations, but they have an epistemological problem: they may be too complex for humans to understand and too expensive to compute, even with automated reasoning methods. Interpretability requires good explanations that humans can grasp and compute. We take an important step toward specifying what good explanations are by analyzing the epistemically accessible and pragmatic aspects of explanations. We characterize sufficiently good, or fair and adequate, explanations in terms of counterfactuals and what we call the conundra of the explainee, the agent that requested the explanation. We provide a correspondence between logical and mathematical formulations for counterfactuals to examine the partiality of counterfactual explanations, which can hide biases, and we define fair and adequate explanations in such a setting. We then provide formal results about the algorithmic complexity of fair and adequate explanations.
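To make the notion of a counterfactual explanation concrete, here is a minimal, hypothetical sketch (not the paper's formal construction): given a toy decision rule, we search for a smallest set of feature changes that flips the model's decision. The rule, feature names, and candidate values below are invented for illustration only.

```python
from itertools import combinations

# Toy loan-approval rule (hypothetical): approve if income >= 50 and debt <= 20.
def model(x):
    return x["income"] >= 50 and x["debt"] <= 20

# Hypothetical candidate values each feature could be changed to.
CANDIDATES = {"income": [50, 60], "debt": [10, 20]}

def _assignments(feats):
    # Enumerate every combination of candidate values for the chosen features.
    if not feats:
        yield {}
        return
    first, rest = feats[0], feats[1:]
    for v in CANDIDATES[first]:
        for tail in _assignments(rest):
            yield {first: v, **tail}

def counterfactual(x):
    """Return a minimal set of feature changes making model(x) accept, or None."""
    for size in range(1, len(CANDIDATES) + 1):  # smaller changes first
        for feats in combinations(CANDIDATES, size):
            for change in _assignments(feats):
                if model({**x, **change}):
                    return change
    return None

applicant = {"income": 40, "debt": 30}
print(counterfactual(applicant))  # a minimal change flipping the decision
```

Because the search returns only one minimal change among possibly many, such an explanation is inherently partial; the paper's concern with fairness is precisely that this partiality can hide biases.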
Main file: main.pdf (356 KB)
Origin: files produced by the author(s)

Dates and versions

hal-03454256 , version 1 (29-11-2021)

Cite

Nicholas Asher, Soumya Paul, Chris Russell. Fair and Adequate Explanations. 5th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE 2021), Aug 2021, Vienna (virtual), Austria. ⟨10.1007/978-3-030-84060-0_6⟩. ⟨hal-03454256⟩