This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
An aid or not? Examining ChatGPT-generated visualizations in physics problem solving
Citations: 0
Authors: 1
Year: 2026
Abstract
The use of artificial intelligence (AI) in education is expanding rapidly, yet its effectiveness for supporting conceptual understanding in physics remains uncertain. This study investigates the quality and pedagogical adequacy of ChatGPT-generated diagrams across five mechanics problems of varying complexity. Three university physics lecturers analysed the diagrams in relation to the problem statements. The analysis revealed recurring discrepancies, including missing or misdirected vectors, absent coordinate systems, misrepresented forces, and the inclusion of extraneous or misleading elements. Errors were more frequent and severe in complex problems, indicating that the model’s capacity to produce accurate visual representations diminishes with task complexity. These findings highlight the limitations of relying on ChatGPT for physics visualization and underscore the importance of instructor mediation to ensure conceptual accuracy. Involving students in critiquing AI-generated visuals could also promote engagement and deeper conceptual understanding.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,436 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,311 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,753 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,523 citations