This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
SHIELD: An AI Framework for Skeletal Health Intelligence and Early Lesion Detection to Improve Orthopedic Referrals
Citations: 0
Authors: 5
Year: 2025
Abstract
Delays in the referral of patients with suspected metastatic bone disease (MBD) from radiology reports represent a critical challenge that can negatively impact patient outcomes. The conventional manual review process is often a significant bottleneck, leading to prolonged diagnostic timelines. We developed and validated SHIELD, an automated AI framework designed to accelerate and improve the accuracy of MBD referrals. We fine-tuned a RadBERT-RoBERTa model on a decade of radiology reports (N = 245 patients) from two academic medical centers to classify reports into three tiers: "No Referral," "Referral," and "Referral/High Risk." To ensure clinical utility and transparency, SHIELD incorporates a Large Language Model to generate natural-language explanations for its classifications. SHIELD demonstrated exceptional performance on a hold-out test set. It achieved 100% accuracy and an Area Under the Curve (AUC) of 1.00 in the primary binary task of distinguishing referral from non-referral cases. In the more granular three-class task, the model achieved an overall accuracy of 89.52%, with near-perfect performance in identifying "No Referral" reports (F1-score: 99.20%). Critically, the model operated in a clinically "fail-safe" manner, never misclassifying a high-risk case as requiring no referral. A retrospective timeline analysis revealed that SHIELD can reduce the referral period from a conventional average of 109.6 days to a computational time of 1–3 minutes. The proposed work combines high accuracy with a sophisticated explainability component based on a large language model. The SHIELD framework is therefore a robust, explainable, and autonomous solution for triaging radiology reports. By drastically reducing administrative and diagnostic delays, it has the potential to significantly accelerate the clinical workflow, ensure timely specialist consultation, and ultimately improve the standard of care for patients with suspected MBD.
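The fail-safe triage behavior described in the abstract can be sketched as a thresholded decision rule over the three-class output. The tier names come from the paper; the softmax mapping, the `high_risk_floor` threshold, and the escalation logic below are illustrative assumptions, not the authors' implementation.

```python
import math

TIERS = ["No Referral", "Referral", "Referral/High Risk"]

def softmax(logits):
    # Convert raw classifier logits to a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def triage(logits, high_risk_floor=0.10):
    """Map three-class logits to a referral tier with a fail-safe rule.

    Assumption (hypothetical, not from the paper): if the model assigns
    even modest probability to "Referral/High Risk", the report is
    escalated to at least "Referral", so a potentially high-risk case
    is never routed to "No Referral".
    """
    probs = softmax(logits)
    tier = TIERS[probs.index(max(probs))]
    if tier == "No Referral" and probs[2] >= high_risk_floor:
        tier = "Referral"  # fail-safe escalation
    return tier, probs
```

For example, logits of `[2.0, 0.1, 1.5]` would argmax to "No Referral", but because the high-risk probability is non-negligible the rule escalates the report to "Referral"; only reports with a confidently low high-risk probability are left unreferred.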
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,493 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,377 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,555 citations