This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
SHAP-Optimized Explainable Computational Intelligence Models for Interpretable Chemotherapy Adverse Event Prediction
Citations: 0
Authors: 1
Year: 2026
Abstract
The adverse effects of chemotherapy vary widely among patients, which can substantially reduce treatment continuity and worsen clinical outcomes. Existing risk assessment approaches rely mainly on generalized statistical techniques or opaque models, which tend to be susceptible to incomplete clinical evidence and fail to deliver the explanations required to earn clinical trust. To address these limitations, this paper proposes an explainable artificial intelligence model for forecasting severe chemotherapy adverse events using XGBoost, random forest, and decision tree classifiers, combined with SHAP-based interpretability. Clinical, demographic, and treatment-related data were systematically preprocessed, including the handling of missing values, before model training. The experiments indicate that XGBoost is superior in predictive power, robustness, and generalization, achieving lower cross-validation error and better confusion matrix performance with fewer false negatives and false positives. The random forest and decision tree models showed higher misclassification rates, particularly false negatives, which limits their clinical validity. SHAP analysis provided both patient-specific and global explanations of the risk factors driving predictions. Overall, the proposed explainable AI architecture outperforms existing approaches in precision, resilience, and interpretability, offering reliable and tailored clinical decision support in cancer treatment.
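The abstract describes a standard tree-ensemble plus SHAP workflow. The following is a minimal sketch of that pipeline, assuming a binary "severe adverse event" label and tabular features; the synthetic dataset, column names, and hyperparameters are illustrative placeholders, not the paper's actual data or settings.

```python
# Sketch of the described pipeline: XGBoost on clinical/demographic/treatment
# features with missing values, cross-validation, and SHAP explanations.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from sklearn.model_selection import cross_val_score, train_test_split

# Hypothetical features with missing values; XGBoost handles NaNs natively,
# so no explicit imputation step is needed in this sketch.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.integers(30, 85, 500).astype(float),
    "baseline_creatinine": rng.normal(1.0, 0.3, 500),
    "dose_mg_per_m2": rng.normal(75, 15, 500),
    "prior_cycles": rng.integers(0, 8, 500).astype(float),
})
X.loc[rng.choice(500, 50, replace=False), "baseline_creatinine"] = np.nan
# Synthetic label: risk rises with age and dose (purely for demonstration).
y = (X["age"] / 85 + X["dose_mg_per_m2"] / 150
     + rng.normal(0, 0.2, 500) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4,
                          learning_rate=0.1, eval_metric="logloss")
print("CV accuracy:", cross_val_score(model, X_train, y_train, cv=5).mean())
model.fit(X_train, y_train)

# SHAP TreeExplainer yields per-patient (local) attributions; averaging
# their magnitudes gives a global ranking of risk factors.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
global_importance = np.abs(shap_values).mean(axis=0)
print(dict(zip(X.columns, global_importance.round(3))))
```

The same `shap_values` array supports both explanation levels the abstract mentions: each row explains one patient's prediction, while the column-wise mean of absolute values gives the global feature ranking.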
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,682 cit.
Generative Adversarial Nets
2014 · 19,895 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,318 cit.
"Why Should I Trust You?"
2016 · 14,528 cit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,191 cit.