This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
AI versus human-generated multiple-choice questions for medical education: a cohort study in a high-stakes examination
Citations: 50
Authors: 7
Year: 2025
Abstract
ChatGPT-4o demonstrates the potential for efficiently generating MCQs but lacks the depth needed for complex assessments. Human review remains essential to ensure quality. Combining AI efficiency with expert oversight could optimise question creation for high-stakes exams, offering a scalable model for medical education that balances time efficiency and content quality.
Related works
The Strengths and Difficulties Questionnaire: A Research Note
1997 · 14,604 citations
Making sense of Cronbach's alpha
2011 · 13,847 citations
QUADAS-2: A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies
2011 · 13,648 citations
A method for estimating the probability of adverse drug reactions
1981 · 11,485 citations
Evidence-Based Medicine
1992 · 4,153 citations