This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Measuring sustainable use of artificial intelligence in higher education: A novel explainable AI model
Citations: 0
Authors: 3
Year: 2025
Abstract
The rapid adoption of artificial intelligence (AI) in higher education offers opportunities for improved learning and operational efficiency, but raises questions about how its use can be sustained and its impacts assessed. Unchecked AI deployment can create ethical, equity, and quality challenges. We propose a new measurement model, derived from explainable AI (XAI), to examine the sustainable use of AI in higher education. We developed a multifaceted measure that evaluates AI use in terms of institutional support, user attitudes, ethical practices, educational outcomes, and environmental issues. A combined sustainable AI utilization index (SAUI) was constructed through Analytic Hierarchy Process (AHP) weighting and statistical validation. We then built a machine learning model (XGBoost) to predict the SAUI, using SHapley Additive exPlanations (SHAP) for interpretable predictions. Factor analysis, structural equation modelling, clustering, and predictive modelling were applied. The results show that faculty training and positive attitudes toward AI contribute more strongly to the sustainable use of AI than institutional facilitating conditions do. Ethical and risk considerations were of moderate importance, whereas demographics had no predictive power. Explainability is especially important to stakeholders who want actionable insights in education. Future research should extend this framework to other domains and incorporate longitudinal data to assess sustainability over time, building on the finding that the presence of AI in academia can contribute positively to sustainability goals.
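The abstract's core mechanics can be illustrated with a small sketch of AHP weighting: pairwise comparisons between the five measured dimensions yield a principal-eigenvector weight vector, which is then applied to dimension scores to form a composite index. The comparison matrix and scores below are purely illustrative assumptions, not the paper's data, and the consistency check follows Saaty's standard index.

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray):
    """Principal-eigenvector weights for an AHP pairwise comparison matrix,
    plus Saaty's consistency ratio."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)                 # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                # normalize to sum to 1
    n = pairwise.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)        # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # random index (Saaty)
    return w, ci / ri                           # weights, consistency ratio

# Hypothetical comparisons among the five dimensions named in the abstract:
# institutional support, user attitudes, ethical practices,
# educational outcomes, environmental issues. Values are invented.
A = np.array([
    [1,   1/2, 2,   1,   3],
    [2,   1,   3,   2,   4],
    [1/2, 1/3, 1,   1/2, 2],
    [1,   1/2, 2,   1,   3],
    [1/3, 1/4, 1/2, 1/3, 1],
])
w, cr = ahp_weights(A)

# Illustrative per-dimension scores for one institution, on a 0-1 scale.
scores = np.array([0.7, 0.8, 0.6, 0.75, 0.5])
saui = float(w @ scores)                        # weighted composite index
```

A consistency ratio below roughly 0.1 is conventionally taken to mean the pairwise judgments are acceptably consistent; in the paper's pipeline the resulting index would then serve as the target for the XGBoost model, with SHAP attributing each prediction back to the five dimensions.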
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,962 cit.
Generative Adversarial Nets
2023 · 19,896 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,358 cit.
"Why Should I Trust You?"
2016 · 14,704 cit.
Generative adversarial networks
2020 · 13,328 cit.