This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
EXPLAINING ADDITIVE WEIBULL MODEL PARAMETER ESTIMATION WITH XAI: A SHAP AND LIME ANALYSIS
Citations: 0
Authors: 3
Year: 2026
Abstract
Conventional machine learning models face limitations in time-to-event analyses because of censoring. This study introduces a Deep Additive Weibull (DAW) model that applies deep learning techniques to the survival analysis of right-censored COVID-19 patient data. We also explore several methods for "opening the black box" of the DAW model, including local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP), to enhance model trustworthiness. The DAW model leverages neural networks for survival analysis, specifically estimating survival probabilities for each patient with an autoencoder-based network. The DAW model achieved a concordance index of 0.9699 on the training set and 0.92339 on the test set. Our findings show that the DAW model effectively captures nonlinearities and complex interactions. We also assessed the impact of specific features on the model's predictions, providing valuable insights. Both SHAP and LIME plots highlight similar features as important, such as pneumonia, diabetes, age, and inmsupr, indicating consistent model behavior across different explanation methods. Moreover, we demonstrated that explainable machine learning (ML) can elucidate how models make predictions, which is crucial for increasing trust in and adoption of innovative ML techniques in healthcare.
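The abstract evaluates the DAW model with the concordance index, the standard discrimination metric for right-censored survival data. The paper's own implementation is not shown here; the following is a minimal, self-contained sketch of Harrell's C-index (a common formulation), where the function name, toy data, and scoring convention (higher score = higher risk) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def concordance_index(event_times, risk_scores, event_observed):
    """Harrell's concordance index for right-censored data.

    A pair (i, j) is comparable when subject i has the earlier time and
    an observed event; the pair counts as concordant when the model
    assigns subject i the higher risk score, and as a half-concordance
    on ties.
    """
    concordant = 0.0
    comparable = 0
    n = len(event_times)
    for i in range(n):
        if not event_observed[i]:
            continue  # censored subjects cannot anchor a comparable pair
        for j in range(n):
            if event_times[i] < event_times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy data (hypothetical): earlier events should receive higher risk scores.
times = np.array([2.0, 4.0, 6.0, 8.0])
events = np.array([1, 1, 0, 1])      # 0 marks a right-censored observation
scores = np.array([0.9, 0.7, 0.4, 0.2])
print(concordance_index(times, scores, events))  # perfectly concordant -> 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the reported test value of 0.92339 indicates strong discrimination.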
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,995 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,374 citations
"Why Should I Trust You?"
2016 · 14,750 citations
Generative adversarial networks
2020 · 13,352 citations