This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Receiving information on machine learning-based clinical decision support systems in psychiatric services increases staff trust in these systems: A randomized survey experiment
1 citation · 4 authors · 2024
Abstract
**Background:** Clinical decision support systems based on machine learning (ML) models are emerging within psychiatry. To ensure their successful implementation, healthcare staff need to trust these systems. Here, we investigated whether providing staff with basic information about ML-based clinical decision support systems enhances their trust in them.

**Methods:** We conducted a randomized survey experiment among staff in the Psychiatric Services of the Central Denmark Region. Participants were allocated to one of three arms, receiving different types of information: an intervention arm (receiving information on clinical decision-making supported by an ML model); an active control arm (receiving information on a standard clinical decision process without ML support); and a blank control arm (no information). Subsequently, participants responded to various questions regarding their trust/distrust in ML-based clinical decision support systems. The effect of the intervention was assessed by pairwise comparisons between all randomization arms on sum scores of trust and distrust.

**Findings:** Among 2,838 invitees, 780 completed the survey experiment. The intervention enhanced trust and diminished distrust in ML-based clinical decision support systems compared with the active control arm (trust: mean difference = 5% [95% confidence interval (CI): 2%; 9%], p < 0.001; distrust: mean difference = -4% [-7%; -1%], p = 0.042) and the blank control arm (trust: mean difference = 5% [2%; 11%], p = 0.003; distrust: mean difference = -3% [-6%; -1%], p = 0.021).

**Interpretation:** Providing information on ML-based clinical decision support systems in hospital psychiatry may increase healthcare staff trust in such systems.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,626 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,532 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,046 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,843 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations