This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Token-Level Attribution for Transparent Biomedical AI
Citations: 1
Authors: 3
Year: 2026
Abstract
Background: Explainable AI (XAI) is critical for fostering trust, ensuring safety, and supporting regulatory compliance in healthcare AI systems. Despite their impressive capabilities, Large Language Models (LLMs) operate as "black boxes" and pose prohibitive computational demands and regulatory challenges. Small Language Models (SLMs) with open-source architectures present a pragmatic alternative, offering efficiency, potential interpretability, and alignment with data privacy frameworks. This study evaluates whether token-level attribution (TLA) methods can provide technical traceability in SLMs for clinical decision support. Methods: The Captum 0.7 attribution library was applied to a Qwen-2.5-1.5B model on 20 breast cancer cases from a publicly available dataset. Hardware requirements were profiled on a consumer-grade GPU. Using perturbation-based integrated gradients, we analyzed how clinical input features statistically influenced token-generation probabilities. Results: Attribution heatmaps successfully identified clinically relevant input features, with high-attribution tokens corresponding to expected clinical factors. The model's storage footprint was small enough for local deployment without cloud infrastructure. This validates that SLMs can provide the algorithmic traceability required by regulatory frameworks. Conclusions: This proof of concept demonstrates the technical feasibility of combining SLMs with perturbation-based XAI methods to achieve auditable clinical AI within practical hardware constraints. While TLA provides statistical associations, bridging toward causal clinical reasoning requires further research.
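The Methods above map onto Captum 0.7's LLM attribution API. Below is a minimal, illustrative sketch of token-level attribution with integrated gradients on a small causal LM; the checkpoint id "Qwen/Qwen2.5-1.5B-Instruct", the clinical prompt, and the target string are assumptions for illustration, not the study's actual data or pipeline.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from captum.attr import LayerIntegratedGradients, LLMGradientAttribution, TextTokenInput

# Assumed checkpoint id; the paper reports using a Qwen-2.5-1.5B model.
model_name = "Qwen/Qwen2.5-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)
model.eval()

# Integrated gradients over the input embedding layer, wrapped by Captum's
# LLM attribution helper so scores attach to input tokens of generated text.
lig = LayerIntegratedGradients(model, model.get_input_embeddings())
llm_attr = LLMGradientAttribution(lig, tokenizer)

# Hypothetical clinical prompt; the study used 20 breast cancer cases.
prompt = "Tumor size 2.1 cm, ER positive, HER2 negative. Recommended therapy:"
inp = TextTokenInput(prompt, tokenizer)

# Attribute each input token's influence on a (hypothetical) target continuation.
res = llm_attr.attribute(inp, target="endocrine therapy")
print(res.seq_attr)             # one attribution score per input token
res.plot_token_attr(show=True)  # heatmap of token-level attributions

Here, plot_token_attr renders the kind of attribution heatmap the Results describe, while seq_attr aggregates per-token scores across the generated sequence.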
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 21,007 cit.
Generative Adversarial Nets
2023 · 19,896 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,374 cit.
"Why Should I Trust You?"
2016 · 14,763 cit.
Generative adversarial networks
2020 · 13,359 cit.