This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Risk prediction algorithms and clinical judgment: Impact of advice distance, social proof, and feature-importance explanations
Citations: 7
Authors: 4
Year: 2023
Abstract
Cancer risk algorithms are developed in ever-increasing numbers to support clinical decisions. However, their uptake in UK primary care remains low and there is little evidence of how they inform judgment. This study aimed to replicate and extend findings of a recent study, which found that family physicians integrated an unnamed risk algorithm in their risk estimates about hypothetical patients suspected to have colorectal cancer; consequently, their referral decisions improved. This study employed a similar methodology of presenting patient vignettes online but used a different cancer (upper gastrointestinal) and a larger physician sample (N = 215). Furthermore, it tested the impact of two interventions on algorithm uptake: a social proof nudge describing how previous study participants had found the algorithm useful, and a feature-importance explanation (graph depicting the relative contribution of symptoms and risk factors to the patients’ risk score). We provide further support that cancer risk algorithms have the potential to improve risk assessment and referral decisions, and evidence that the introduction of a simple and scalable social proof nudge can enhance algorithm uptake. Finally, we provide further support to the earlier finding that algorithms in tandem with clinical vignettes could be integrated into medical training programmes for risk assessment.
Highlights
• Cancer risk algorithms can improve clinical risk assessments and decisions.
• Nudge potential users by highlighting benefits to other users.
• A feature-importance explanation did not impact algorithm uptake.