This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
A Survey on Explainable AI Narratives based on Large Language Models
Citations: 0
Authors: 7
Year: 2025
Abstract
Explainable Artificial Intelligence (XAI) seeks to elucidate the inner logic of machine learning models, yet its outputs often remain difficult for non-technical users to understand. The emerging paradigm of XAI Narratives leverages Large Language Models (LLMs) to translate technical explanations into coherent, human-readable accounts. This survey provides the first systematic review of this approach, focusing on systems in which LLMs act as post-hoc narrative translators rather than autonomous explainers. We formalize this task as the Narrative Generation Problem, examine its integration with classical XAI methods such as feature attribution and counterfactual explanations across multiple data modalities, and introduce a taxonomy for narrative evaluation spanning three core dimensions. Finally, we analyze prompting strategies and outline open challenges and future directions for advancing reliable, interpretable, and context-aware XAI Narratives.
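The abstract describes LLMs acting as post-hoc narrative translators: a classical XAI method (e.g., feature attribution) produces numeric explanations, which are then handed to an LLM to be rendered as a human-readable narrative. A minimal sketch of the first half of that pipeline might look like the following; the feature names, attribution values, and the `build_narrative_prompt` helper are all hypothetical illustrations, not code from the surveyed systems.

```python
# Hypothetical sketch of the "narrative translator" pattern: attribution
# scores from a post-hoc XAI method (e.g., SHAP-style values) are assembled
# into a prompt asking an LLM to narrate the decision for a lay audience.
# All names and values below are invented for illustration.

def build_narrative_prompt(prediction: str, attributions: dict[str, float]) -> str:
    """Turn (feature, attribution) pairs into an LLM prompt string."""
    # Rank features by absolute influence, strongest first.
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"- {name}: {value:+.2f}" for name, value in ranked]
    return (
        f"The model predicted '{prediction}'. "
        "Explain this decision to a non-technical user, using these "
        "feature attributions (positive values push toward the prediction):\n"
        + "\n".join(lines)
    )

prompt = build_narrative_prompt(
    "loan denied",
    {"income": -0.42, "debt_ratio": 0.61, "age": 0.05},
)
print(prompt)
```

In a full system, the resulting prompt would be sent to an LLM, whose free-text response is the "XAI Narrative"; the survey's evaluation taxonomy then concerns how faithful, clear, and context-aware that response is.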
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,796 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,334 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,607 citations
Generative adversarial networks
2020 · 13,214 citations