This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
How First-Year Students Actually Use ChatGPT in Permitted Assessments: Empirical Typologies, Verification Gaps, and the Policy-Practice Divide
Citations: 0
Authors: 3
Year: 2026
Abstract
Building upon the Structured AI Guided Education (SAGE) framework, this mixed-methods study examines how first-year ICT students navigate generative AI tools during a supervised in-class assessment under institutional AI Collaborate permissions. Analysing behavioural data (n=167) and reflective responses (n=163) collected through an embedded 12-item reflection instrument, the study identifies a competency-confidence inversion wherein students demonstrate sophisticated AI interaction strategies whilst experiencing regulatory anxiety. Crucially, the data reveals a "Goldilocks Zone" of interaction (4-8 prompts) where engagement is optimised, distinguishing effective use from passive consumption. Four distinct student typologies emerged: Strategic Optimisers (32%), Dialogic Learners (28%), Cautious Adopters (23%), and Experimental Users (17%). Students predominantly seek partnership in developing AI literacy frameworks rather than prescriptive policies, with 77.8% struggling with verification competencies despite 73% demonstrating systematic verification behaviours. The findings reveal that AI functions as a linguistic equaliser for international students (46.7% citing English confidence) and transforms rather than eliminates intellectual labour through time reallocation. These empirical patterns validate embedded SAGE verification protocols in cultivating systematic cross-referencing behaviours whilst revealing that verification competency, confidence and ethical awareness require explicit pedagogical intervention beyond assessment-embedded scaffolding alone, positioning students as co-creators rather than compliance subjects in defining legitimate AI-enhanced academic practice.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,479 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,364 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,814 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,543 citations