This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Transparency and Explainability in Responsible AI: Foundations, Challenges, and the Path Forward
Citations: 0
Authors: 2
Year: 2026
Abstract
AI systems now make or heavily influence decisions about who gets a loan, who is flagged as a flight risk, and which patients receive certain treatments. Given these stakes, one question keeps coming up in both policy and engineering circles: do we actually understand how these systems reach their conclusions? This paper focuses on two related ideas that sit at the heart of responsible AI: transparency, meaning how open a system is about its inner workings, and explainability, meaning how well it can articulate its reasoning to the people affected by it. I survey the main technical approaches, examine why they fall short in practice, and argue that solving this problem requires more than better algorithms; it requires rethinking how AI systems are governed.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,627 citations
Generative Adversarial Nets
2023 · 19,894 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,308 citations
"Why Should I Trust You?"
2016 · 14,455 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,177 citations