This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Unveiling explainability in artificial intelligence: a step towards transparent AI
Citations: 1
Authors: 3
Year: 2025
Abstract
Explainability in artificial intelligence (AI) is an essential factor for building transparent, trustworthy, and ethical systems, particularly in high-stakes domains such as healthcare, finance, justice, and autonomous systems. This study examines the foundations of AI explainability, its critical role in fostering trust, and the current methodologies used to interpret AI models, such as post-hoc techniques, intrinsically interpretable models, and hybrid approaches. Despite these advancements, challenges persist, including trade-offs between accuracy and interpretability, scalability, ethical risks, and transparency gaps. The paper explores emerging trends like causality-based explanations, neuro-symbolic AI, and personalized frameworks, while emphasizing the integration of ethics and the need for automation in explainability. Future directions stress the importance of collaboration among researchers, practitioners, and policymakers to establish industry standards and regulations, ensuring that AI systems align with societal values and expectations.
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,627 cit.
Generative Adversarial Nets
2014 · 19,894 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,308 cit.
"Why Should I Trust You?"
2016 · 14,455 cit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,177 cit.