This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Explainable Artificial Intelligence (XAI) from a user perspective: A synthesis of prior literature and problematizing avenues for future research
Citations: 206
Authors: 3
Year: 2022
Abstract
The rapid growth and use of artificial intelligence (AI)-based systems have raised concerns regarding explainability. Recent studies have discussed the emerging demand for explainable AI (XAI); however, a systematic review of explainable artificial intelligence from an end user's perspective can provide a comprehensive understanding of the current situation and help close the research gap. The purpose of this study was to perform a systematic literature review of explainable AI from the end user's perspective and to synthesize the findings. To be precise, the objectives were to 1) identify the dimensions of end users' explanation needs; 2) investigate the effect of explanations on end users' perceptions; and 3) identify the research gaps and propose future research agendas for XAI, particularly from end users' perspectives, based on current knowledge. The final search query for the Systematic Literature Review (SLR) was conducted in July 2022. Initially, we extracted 1707 journal and conference articles from the Scopus and Web of Science databases. Inclusion and exclusion criteria were then applied, and 58 articles were selected for the SLR. The findings show four dimensions that shape the AI explanation: format (explanation representation format), completeness (the explanation should contain all required information, including supplementary information), accuracy (information regarding the accuracy of the explanation), and currency (the explanation should contain recent information). Moreover, along with the automatic presentation of the explanation, users can request additional information if needed. We have also described five dimensions of XAI effects: trust, transparency, understandability, usability, and fairness. We investigated current knowledge from the selected articles to problematize future research agendas as research questions along with possible research paths.
Consequently, a comprehensive framework of XAI and its possible effects on user behavior has been developed.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,567 citations
Generative Adversarial Nets
2023 · 19,892 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,299 citations
"Why Should I Trust You?"
2016 · 14,391 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,164 citations