This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Research Progress of Interpretable Artificial Intelligence
Citations: 0
Authors: 1
Year: 2026
Abstract
Artificial intelligence has made remarkable progress across many fields, encouraging countries to attach great importance to its research and development. However, the rapid development of artificial intelligence has also brought about a series of problems and threats, and overreliance on and blind trust in such models can lead to serious risks. Interpretable artificial intelligence has therefore become a key element in building trusted and transparent intelligent systems, and its research and development require immediate attention. This survey summarizes the research progress on explainable artificial intelligence at home and abroad from multiple dimensions and levels. Based on current research results in the industry, it subdivides the key technologies of explainable artificial intelligence into four categories: interpretation models, interpretation methods, safety testing, and experimental verification, with the aim of clarifying the technical focus and development direction of each field. Furthermore, the survey explores specific applications of explainable artificial intelligence across key industry sectors, including but not limited to education, healthcare, finance, autonomous driving, and justice, demonstrating its significant role in enhancing decision-making transparency. Finally, the survey provides an in-depth analysis of the major technical challenges of interpretable artificial intelligence and presents future development trends, in addition to a special investigation and in-depth analysis of the interpretability of large models, which has attracted considerable attention recently.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,604 citations
Generative Adversarial Nets
2023 · 19,893 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,303 citations
"Why Should I Trust You?"
2016 · 14,432 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,167 citations