This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Should Artificial Intelligence-Based Patient Preference Predictors Be Used for Incapacitated Patients? A Scoping Review of Reasons to Facilitate Medico-Legal Considerations
Citations: 5
Authors: 8
Year: 2025
Abstract
BACKGROUND: Research indicates that surrogate decision-makers often struggle to accurately interpret and reflect the preferences of incapacitated patients they represent. This discrepancy raises important concerns about the reliability of such practice. Artificial intelligence (AI)-based Patient Preference Predictors (PPPs) are emerging tools proposed to guide healthcare decisions for patients who lack decision-making capacity. OBJECTIVES: This scoping review aims to provide a thorough analysis of the arguments, both for and against their use, presented in the academic literature. METHODS: A search was conducted in PubMed, Web of Science, and Scopus to identify relevant publications. After screening titles and abstracts based on predefined inclusion and exclusion criteria, 16 publications were selected for full-text analysis. RESULTS: Arguments in favor are fewer than those against. Proponents of AI-PPPs highlight their potential to improve the accuracy of predictions regarding patients' preferences, reduce the emotional burden on surrogates and family members, and optimize healthcare resource allocation. Conversely, critics point to risks including reinforcing existing biases in medical data, undermining patient autonomy, raising critical concerns about privacy, data security, and explainability, and contributing to the depersonalization of decision-making processes. CONCLUSIONS: Further empirical studies are needed to assess the acceptability and feasibility of these tools among key stakeholders, such as patients, surrogates, and clinicians. Moreover, robust interdisciplinary research is needed to explore the legal and medico-legal implications associated with their implementation, ensuring that these tools align with ethical principles and support patient-centered and equitable healthcare practices.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,652 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,567 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,083 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,856 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations