This is an overview page with metadata for this scientific work. The full article is available from the publisher.
XAI-Driven Explainability for Cardiovascular Diseases Prediction
Citations: 0
Authors: 2
Year: 2026
Abstract
The adoption of artificial intelligence (AI) in cardiovascular disease (CVD) prediction has significantly improved risk stratification, offering new avenues for early diagnosis and preventive care. With the growing availability of electronic health records and structured clinical datasets, machine learning (ML) and deep learning (DL) models have demonstrated strong predictive capabilities. Despite this performance, their adoption in healthcare is often constrained by the lack of transparency and interpretability in many ML and DL models. This lack of explainability undermines clinical trust and raises ethical concerns. In high-stakes domains such as CVD prediction, clinicians require not only accurate outputs but also clear explanations of how those predictions are derived. This paper presents a comparative evaluation of explainable artificial intelligence (XAI) techniques applied to both conventional ML models (Logistic Regression, Support Vector Machine, Decision Tree, and Random Forest) and DL architectures (AutoInt, FT-Transformer, and Category Embedding). Using the Framingham Heart Study dataset, this study integrates SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) to assess model interpretability and feature relevance. Results show that conventional models offer superior explainability with comparable predictive accuracy, while DL models, although slightly less interpretable, demonstrate potential with advanced XAI techniques. The findings advocate hybrid approaches that balance accuracy and interpretability, supporting ethical and practical AI deployment in healthcare.
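To illustrate the attribution idea behind SHAP mentioned in the abstract, the sketch below computes exact Shapley values by brute force for a hypothetical three-feature risk score. The feature names, baseline values, patient values, and the linear `risk` function are all illustrative assumptions, not the paper's fitted models; the SHAP library approximates these same values efficiently for real ML and DL models.

```python
# Exact Shapley values for a toy cardiovascular risk model (stdlib only).
# Each feature's attribution averages its marginal contribution over all
# orderings, per the Shapley formula; absent features are held at baseline.
from itertools import combinations
from math import factorial

FEATURES = ["age", "sysBP", "totChol"]                    # hypothetical inputs
BASELINE = {"age": 50.0, "sysBP": 130.0, "totChol": 240.0}
PATIENT  = {"age": 63.0, "sysBP": 160.0, "totChol": 300.0}

def risk(x):
    """Stand-in linear risk score (not a fitted model)."""
    return 0.02 * x["age"] + 0.01 * x["sysBP"] + 0.004 * x["totChol"]

def value(subset):
    """Model output with features outside `subset` fixed at baseline."""
    x = {f: (PATIENT[f] if f in subset else BASELINE[f]) for f in FEATURES}
    return risk(x)

def shapley(feature):
    """Brute-force Shapley value of one feature."""
    n = len(FEATURES)
    others = [f for f in FEATURES if f != feature]
    phi = 0.0
    for k in range(n):
        for s in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            phi += weight * (value(set(s) | {feature}) - value(set(s)))
    return phi

phis = {f: shapley(f) for f in FEATURES}
# Efficiency property: attributions sum to prediction minus baseline output.
assert abs(sum(phis.values()) - (value(set(FEATURES)) - value(set()))) < 1e-9
```

For a linear model the Shapley value of each feature reduces to its coefficient times the deviation from baseline, which makes the brute-force result easy to check by hand; nonlinear models (and SHAP's sampling-based explainers) follow the same formula without this closed form.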
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,634 citations
Generative Adversarial Nets
2023 · 19,894 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,311 citations
"Why Should I Trust You?"
2016 · 14,478 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,178 citations