This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Scenarios of Social Explainable AI in Practice
0
Citations
8
Authors
2026
Year
Abstract
A key goal of explainable AI (XAI) is to ensure the trustworthiness of AI systems when they interact with humans in real-world settings. These interactions involve individuals with diverse backgrounds, varying levels of knowledge, and different abilities to comprehend explanations. This chapter presents scenarios that illustrate the challenges and requirements that arise for XAI methods in such real-life contexts. Specifically, we highlight the importance of adapting explanations to both the context and the explainee(s), which may involve using appropriate and multiple modalities. Based on these scenarios, we identify three key requirements for effective XAI systems: multimodality (the ability to use different explanation formats), incrementality (the ability to refine explanations over time), and patternedness (the ability to present explanations in a structured and recognizable manner).
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,615 citations
Generative Adversarial Nets
2023 · 19,894 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,306 citations
"Why Should I Trust You?"
2016 · 14,446 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,171 citations