This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Artificial intelligence–assisted training for rib fracture interpretation: a prospective study in undergraduate medical students
Citations: 0 · Authors: 8 · Year: 2026
Abstract
PURPOSE: Chest X-rays (CXRs) are essential in trauma care but have limited sensitivity for rib fracture detection, leading to frequently missed diagnoses. Artificial intelligence (AI) has shown potential to improve diagnostic accuracy, yet its role in radiology education remains underexplored. This study evaluated the impact of AI-assisted training on early diagnostic performance and confidence in rib fracture detection on trauma CXRs.

METHODS: In this prospective observational study, 26 undergraduate medical students (UGY) completed three sequential sessions: baseline unassisted interpretation of 50 CXRs (Session 1, S1), AI-assisted interpretation of the same cases (Session 2, S2), and interpretation of 50 new CXRs without AI assistance (Session 3, S3). Diagnostic performance and confidence levels were compared across sessions.

RESULTS: AI assistance (S2) significantly improved all performance metrics, with increases of 26.7% in accuracy, 41.0% in sensitivity, 25.1% in specificity, 35.6% in F1 score, and 31.4% in precision (all p < 0.01). Performance in S3 declined relative to S2 but remained higher than baseline for accuracy (+13.3%, p = 0.010) and precision (+13.7%, p = 0.010). Confidence levels showed sustained improvement across all sessions (p < 0.001). Agreement analysis of AI-misclassified cases suggested possible automation bias in S2 and carryover effects in S3.

CONCLUSIONS: AI-assisted training significantly enhances early diagnostic performance and confidence in rib fracture detection on chest radiographs, a key competency in trauma and emergency care, with partial skill retention after AI withdrawal. Integrating AI into early trauma imaging education may strengthen radiology training but requires strategies to mitigate automation bias and foster independent judgment.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,652 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,567 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,083 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,856 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations