This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
The Review of Studies on Explainable Artificial Intelligence in Educational Research
Citations: 14
Authors: 1
Year: 2024
Abstract
Explainable Artificial Intelligence (XAI) refers to systems that make AI models more transparent, helping users understand how outputs are generated. XAI algorithms are considered valuable in educational research, supporting outcomes like student success, trust, and motivation. Their potential to enhance transparency and reliability in online education systems is particularly emphasized. This study systematically analyzed educational research using XAI systems from 2019 to 2024, following the PICOS framework, and reviewed 35 studies. Methods like SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), used in these studies, explain model decisions, enabling users to better understand AI models. This transparency is believed to increase trust in AI-based tools, facilitating their adoption by teachers and students.
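The core mechanism behind LIME, as mentioned in the abstract, is to perturb an input, query the black-box model on the perturbations, and fit a locally weighted linear surrogate whose coefficients serve as feature importances. Below is a minimal, self-contained sketch of that idea on synthetic data; the classifier, the kernel width, and the "student success" framing are invented for illustration (studies in practice typically use the `lime` and `shap` libraries on real educational datasets):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical black-box model (e.g., predicting student success from 4 features).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def lime_explain(model, x, n_samples=1000, kernel_width=0.75):
    """LIME-style local explanation: fit a weighted linear surrogate around x."""
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Query the black-box model on the perturbed samples.
    preds = model.predict_proba(Z)[:, 1]
    # 3. Weight samples by proximity to x (exponential kernel).
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 4. Fit a weighted linear surrogate; its coefficients are the
    #    local feature importances for this single prediction.
    surrogate = LinearRegression().fit(Z, preds, sample_weight=weights)
    return surrogate.coef_

coefs = lime_explain(model, X[0])
print(coefs)  # one local importance value per feature
```

The sign of each coefficient indicates whether that feature pushes this particular prediction up or down locally, which is the kind of per-decision transparency the reviewed studies credit with increasing teachers' and students' trust in AI-based tools.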
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,576 citations
Generative Adversarial Nets
2023 · 19,892 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,300 citations
"Why Should I Trust You?"
2016 · 14,396 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,164 citations