This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
University Students’ Perspectives on the Use of ChatGPT in Take-Home Assignments
Citations: 0 · Authors: 2 · Year: 2026
Abstract
<p>Increased and easier access to technology has changed the way university students complete their assignments. This study investigated university students’ perspectives on the use of ChatGPT in take-home assignments. Specifically, the study examined (a) students’ perceptions of using ChatGPT, (b) the influence of these perceptions on behavioral intention (BI) and actual use, and (c) social influence (SI) on the use of ChatGPT. A sequential mixed-methods design was employed, using a validated questionnaire and focus group discussions (FGDs). Quantitative and qualitative data were collected from 482 undergraduate students at a government-owned university in Tanzania. Quantitative data were analyzed descriptively and with structural equation modeling (SEM) techniques, while qualitative data were analyzed thematically. The quantitative results indicated that students perceived ChatGPT as moderately beneficial in improving performance. Effort expectancy (EE) and SI were significant predictors of BI to use ChatGPT, whereas performance expectancy (PE) was not. The qualitative results indicated that peers encouraged each other to use ChatGPT in take-home assignments, resulting in increased collaboration and sharing of knowledge about the tool. Students felt that lecturers opposed ChatGPT use because the university lacked an explicit policy on artificial intelligence (AI) chatbots, an issue that raises concerns about fairness and academic integrity. The results highlight the need for universities to conduct ethical literacy programs on the appropriate use of AI tools and to develop explicit guidelines directing students on how to use AI in academic assignments.</p>