This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The Accuracy-Explainability Trade-off, the Right to Explanation, and the Implications for Organisations
Citations: 0
Authors: 2
Year: 2026
Abstract
Algorithms increasingly shape access to employment, credit, healthcare, and justice, yet the basis on which they do so is often opaque. A growing literature argues that affected individuals have a right to explanation, grounded in their interest in informed self-advocacy: the ability to understand and respond to decisions that bear on their life prospects. We examine whether this interest can sustain such a right. Explanations that serve self-advocacy must be reliable (truth-tracking) and verifiable (open to independent check). We show that in open-ended decision environments, where evaluative criteria must be discovered rather than stipulated in advance, reliability and verifiability conflict with accuracy. This trade-off arises in human decision-making and has a structural analogue in AI systems. Because requiring thick explanation would systematically reduce the quality of decisions, the self-advocacy interest cannot by itself ground a general right. Where thick explanation is nonetheless owed, it is justified by different grounds: legality, fairness, and trust. We draw out implications for how organisations should structure their obligations.