This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Exploring physics students’ attitudes toward ChatGPT using the ABC model
Citations: 0
Authors: 4
Year: 2026
Abstract
The integration of artificial intelligence tools such as ChatGPT into educational settings has sparked a paradigm shift in higher education, necessitating a deeper understanding of students’ attitudes toward these technologies. The ABC model, which delineates attitudes into affective, behavioral, and cognitive components, provides a robust framework for such investigations. Prior studies have applied this model broadly across multiple disciplines. However, little is known about its applicability in physics education, where a strong emphasis on analytical reasoning and quantitative problem-solving might influence attitudes uniquely. Addressing this gap, we conducted a cross-sectional survey study using an online questionnaire administered to N = 1,189 physics students enrolled at German universities. We developed an instrument, adapted from prior research, to assess students’ attitudes toward ChatGPT in the context of physics learning. The validity of the instrument’s hypothesized three-factor structure was then evaluated via confirmatory factor analysis. The results paint a clear picture: The three-factor solution demonstrated satisfactory global fit (CFI = 0.95, RMSEA = 0.05, SRMR = 0.04) and significantly outperformed alternative two- and one-factor models based on likelihood ratio tests and information criteria. The results thus affirm the empirical validity of this instrument in capturing physics students’ attitudes toward ChatGPT according to the ABC model, contributing to a nuanced understanding of learner perspectives on ChatGPT in discipline-specific educational contexts. Additionally, an overview is provided of physics students’ attitudes toward learning with ChatGPT by analyzing their responses on the item level. Implications for educational practice and future research are discussed.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,436 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,311 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,753 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,523 citations