This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable Generative AI: A Two-Stage Review of Existing Techniques and Future Research Directions
Citations: 1 · Authors: 2 · Year: 2026
Abstract
Generative Artificial Intelligence (GenAI) models produce increasingly sophisticated outputs, yet their underlying mechanisms remain opaque. To clarify how explainability is conceptualized and implemented in GenAI research, this two-stage review systematically examined 261 articles retrieved from six major databases. After removing duplicates and applying predefined inclusion criteria, 63 articles were retained for full analysis. In the first stage, an umbrella review synthesized insights from 18 review papers to identify prevailing frameworks, strategies, and conceptual challenges surrounding explainability in GenAI. In the second stage, an empirical review analyzed 45 primary studies to assess how explainability is operationalized, evaluated, and applied in practice. Across both stages, findings reveal fragmented approaches, a lack of standardized evaluation frameworks, and persistent challenges, including limited generalizability, interpretability–performance trade-offs, and high computational costs. The review concludes by outlining future research directions aimed at developing user-centric, regulation-aware explainability methods tailored to the unique architectures and application contexts of GenAI. By consolidating theoretical and empirical evidence, this study establishes a comprehensive foundation for advancing transparent, interpretable, and trustworthy GenAI systems.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 21,007 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,374 citations
"Why Should I Trust You?"
2016 · 14,763 citations
Generative adversarial networks
2020 · 13,359 citations