This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Examining Reliance Patterns on AI Advice in Medical Imaging: A Mixed-Methods Randomized Crossover Experiment
Citations: 0
Authors: 13
Year: 2026
Abstract
Background: Artificial intelligence (AI) holds significant potential to support diagnostic decision-making; however, evidence regarding its clinical utility remains mixed. Often, the collaboration between clinicians and AI systems does not surpass the individual performance of unaided humans or standalone AI. Yet the mechanisms that limit human-AI synergy are currently poorly understood. This study examined the impact of AI advice on diagnostic decision-making among experts and novices, focusing on reliance patterns.

Methods: We used a mixed-methods crossover experimental design with a think-aloud and an eye-tracking study arm. Participants were 50 task experts (radiologists) and 75 novices (non-radiologist physicians and medical trainees) from 10 countries. They reviewed 50 head CT scans, and each case was examined in three time-separated sessions in randomized order. In each session, participants were exposed to different experimental conditions: (a) control, no AI prediction; (b) basic advice, AI prediction without annotations; and (c) XAI advice, AI prediction with scan annotations. For each case, participants had to determine whether the patient had an intracranial hemorrhage (ICH). The main outcomes were diagnostic performance, confidence in the diagnosis, case reading time, and AI advice usefulness ratings.

Findings: Both overreliance on incorrect advice and underreliance on correct advice occurred. Underreliance was associated with high uncertainty and, in absolute terms, had a more detrimental impact on diagnostic performance than overreliance. Correct XAI advice reduced underreliance and improved performance (OR=1·84, p<0·0001) and confidence (b=0·15, p<0·0001), particularly when reviewing more difficult cases with ICH. Surprisingly, correct XAI did not reduce reading time (b=1·81, p=0·0713). XAI was perceived as more useful than basic AI advice (b=0·12, p=0·0029), especially among novices.
Interpretation: The occurrence of both under- and overreliance highlights the need for efficient counterstrategies beyond classic XAI methods to foster appropriate reliance and synergy.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,436 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,311 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,753 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,523 citations