This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Explainable Artificial Intelligence for Medical Science
0
Citations
1
Authors
2024
Year
Abstract
Explainable Artificial Intelligence (XAI) is at the forefront of Artificial Intelligence (AI) research. As AI development has become increasingly complex with modern-day computational capabilities, the transparency of AI models has decreased. This motivates the necessity of XAI: under the General Data Protection Regulation's (GDPR) "right to an explanation", withholding an explanation for a decision reached by algorithmic judgement is unlawful. The latter is crucial in critical fields such as healthcare, finance and law. For this thesis, the healthcare field, and more specifically Electronic Health Records (EHRs), is the main focus for the development and application of XAI methods.

This thesis offers prospective approaches to enhance the explainability of EHRs. It presents three different perspectives, encompassing the Model, the Data, and the User, aimed at elevating explainability. The model perspective draws upon improvements to the local explainability of black-box AI methods. The data perspective improves the quality of the data provided to AI methods, so that the XAI methods applied to the AI models account for the key property of missingness. Finally, the user perspective provides an accessible form of explainability by giving less experienced users an interface for both AI and XAI methods.

Thereby, this thesis provides new, innovative approaches to improve the explanations given for EHRs. This is verified through empirical and theoretical analysis of a collection of introduced and existing methods. We propose a selection of XAI methods that collectively build upon current leading literature in the field.
Here we propose the methods Polynomial Adaptive Local Explanations (PALE) for patient-specific explanations, Counterfactual-Integrated Gradients (CF-IG) and Quantified Uncertainty Counterfactual Explanations (QUCE), both of which utilise counterfactual thinking, Batch-Integrated Gradients (Batch-IG) to address the temporal nature of EHR data, and Surrogate Set Imputation (SSI) to address missing-value imputation. Finally, we propose a tool called ExMed that utilises XAI methods and allows easy access to both AI and XAI methods.
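The CF-IG and Batch-IG methods named above extend the standard Integrated Gradients (IG) attribution technique of Sundararajan et al. As background, the following is a minimal sketch of plain IG on a hypothetical logistic risk model; the model, weights, and baseline here are illustrative assumptions, not the thesis's own methods or data. IG attributes a prediction to input features by integrating the gradient along the straight-line path from a baseline to the input:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(x, w):
    # Hypothetical "risk score": sigmoid over a linear combination of features.
    return sigmoid(w @ x)

def integrated_gradients(x, baseline, w, steps=200):
    """Riemann-sum (midpoint rule) approximation of Integrated Gradients
    along the straight-line path from `baseline` to `x`."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.zeros_like(x)
    for a in alphas:
        point = baseline + a * (x - baseline)
        s = model(point, w)
        # Analytic gradient of sigmoid(w @ x) with respect to x.
        grads += s * (1.0 - s) * w
    grads /= steps
    return (x - baseline) * grads

w = np.array([0.8, -1.2, 0.5])       # illustrative feature weights
x = np.array([1.0, 0.3, 2.0])        # illustrative patient features
baseline = np.zeros(3)               # all-zero reference input
attr = integrated_gradients(x, baseline, w)
# Completeness axiom: attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), model(x, w) - model(baseline, w))
```

The completeness property checked at the end is what makes IG attractive for per-patient explanations: each feature's attribution is its share of the change in predicted risk relative to the baseline.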
Similar Works
"Why Should I Trust You?"
2016 · 14,522 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,813 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,376 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,832 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,470 citations