This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Evidence-based explanation to promote fairness in AI systems
Citations: 4
Authors: 2
Year: 2020
Abstract
As Artificial Intelligence (AI) technology becomes intertwined with ever more systems, people increasingly use AI to make decisions in their everyday activities. Whether in simple contexts, such as Netflix recommendations, or in more complex ones, such as judicial scenarios, AI plays a part in people's decisions. When people make decisions, they usually need to explain them to others in some manner; this is particularly critical in contexts where human expertise is central to decision-making. To explain decisions made with AI support, people need to understand how AI contributed to those decisions. Where fairness is concerned, the role AI plays in a decision-making process becomes even more sensitive, since it affects both the fairness of the outcome and the responsibility of the people making the ultimate decision. We have been exploring an evidence-based explanation design approach to 'tell the story of a decision'. In this position paper, we discuss our approach for AI systems using fairness-sensitive cases from the literature.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,988 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,368 citations
"Why Should I Trust You?"
2016 · 14,740 citations
Generative adversarial networks
2020 · 13,342 citations