This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The Death of Take-Home Assessment in the Era of GenAI, Here Is the Evidence
Citations: 0 · Authors: 1 · Year: 2026
Abstract
This study examined whether a mature, empirically validated generative artificial intelligence (GenAI) intervention framework can produce reliable process evidence when deployed in unsupervised take-home assessments. Twenty-five group submissions from two cybersecurity management cohorts were audited using a five-check protocol that tested primary evidence presence, traceability, internal data consistency, modification provenance, and reflection specificity. The assessments were designed using the Structured AI-Guided Education (SAGE) framework and incorporated base prompts, structured decision tables, mandatory AI interaction logs, and reflective commentary. Only 3 of 25 submissions (12%) produced evidence chains that were substantially auditable. Full traceability between documented AI outputs and human evaluation claims was not achieved in any of the 25 submissions. The remaining submissions exhibited logical checksum failures, compliance-pattern text in evaluation cells, procedural rather than functional reflection, and structural indicators consistent with audit trail simulation. These patterns were consistent across both cohorts. The paper identifies a compliance gradient in which conscientious students who follow the process in good faith incur a disproportionate documentation burden, while students who simulate compliance can produce comparable outputs with less effort. On the basis of this evidence, the paper argues that take-home assessments can no longer be relied upon as standalone assurance instruments in the GenAI era. SAGE remains a validated pedagogy for fostering AI orchestration competency through scaffolded tutorial practice. However, the burden of assurance must shift to secure, supervised tasks where process fidelity cannot be simulated.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,485 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,371 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,827 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,549 cit.