This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Quantitative Assessment of Explainability in Machine Learning Models: A Study on the OULA Dataset
Citations: 2
Authors: 2
Year: 2025
Abstract
Many studies on AI in education compare model performance and fairness, but few focus on explainability. To address this gap, we evaluate two machine learning models, an Artificial Neural Network (ANN) and a Decision Tree (DT), with respect to both performance and explainability in predicting student performance on the OULA dataset. The DT, while inherently explainable, struggles with complex data relationships and misclassification; the ANN, although more accurate and stable, lacks transparency. When analyzed with the LIME method, the ANN outperforms the DT in accuracy and stability, but improving the interpretability of ANN models remains a key challenge for future research.
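The LIME approach referenced in the abstract explains an individual prediction of a black-box model by fitting a locally weighted linear surrogate around that instance. The following is a minimal sketch of that idea in plain scikit-learn, with synthetic stand-in features (the actual OULA pipeline, feature set, and the authors' LIME configuration are not reproduced here):

```python
# Minimal LIME-style local explanation sketch. The data and model below are
# hypothetical stand-ins, not the paper's actual OULA features or pipeline.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic "student" data: 4 columns standing in for engagement features.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # pass/fail label

# Black-box model to be explained (a DT here; an ANN would be treated the same).
black_box = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)

def lime_like_explanation(model, x, n_samples=1000, kernel_width=1.0):
    """Explain model's prediction at x via a locally weighted linear surrogate."""
    # 1. Perturb the instance to sample its neighborhood.
    perturbed = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Query the black box on the perturbed samples.
    preds = model.predict_proba(perturbed)[:, 1]
    # 3. Weight samples by proximity to the original instance.
    dists = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 4. Fit a weighted linear surrogate; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_

coefs = lime_like_explanation(black_box, X[0])
print(coefs)  # local feature importances for this one prediction
```

The surrogate's coefficients indicate which features locally drive the prediction, which is the sense in which LIME makes an otherwise opaque ANN comparable to an inherently interpretable DT.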
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,988 citations
Generative Adversarial Nets
2014 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,368 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,740 citations
Generative adversarial networks
2020 · 13,342 citations