This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Faithfulness vs. Safety: Evaluating LLM Behavior Under Counterfactual Medical Evidence
Citations: 0
Authors: 7
Year: 2026
Abstract
In high-stakes domains like medicine, it may be generally desirable for models to faithfully adhere to the provided context. But what happens when that context conflicts with model priors or safety protocols? In this paper, we investigate how LLMs behave and reason when presented with counterfactual (or even adversarial) medical evidence. We first construct MedCounterFact, a counterfactual medical QA dataset that requires models to answer clinical comparison questions, i.e., to judge the efficacy of particular treatments given evidence from randomized controlled trials provided as context. In MedCounterFact, real-world medical interventions in the questions and evidence are systematically replaced with four types of counterfactual stimuli, ranging from unknown words to toxic substances. Our evaluation of multiple frontier LLMs on MedCounterFact reveals that, in the presence of counterfactual evidence, existing models overwhelmingly accept such "evidence" at face value, even when it is dangerous or implausible, and provide confident, uncaveated answers. While it may be prudent to draw a boundary between faithfulness and safety, our findings suggest that models arguably overemphasize the former.
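The abstract describes building counterfactual items by substituting real interventions in both the question and its supporting evidence. A minimal sketch of that substitution step, under stated assumptions: the abstract names only "unknown words" and "toxic substances" as stimulus endpoints, so the other two category names, all example strings, and the `swap_intervention` helper below are hypothetical placeholders, not the paper's released code.

```python
import re

# Hypothetical stimulus categories: only the "unknown word" and "toxic"
# endpoints are named in the abstract; the middle two are assumptions.
COUNTERFACTUAL_STIMULI = {
    "unknown_word": "zorvexanib",               # invented token with no prior
    "inert": "distilled water",                 # known but medically inert
    "implausible": "table sugar",               # plausible-sounding, no efficacy
    "toxic": "sodium cyanide",                  # clearly dangerous substance
}

def swap_intervention(question: str, evidence: str, real_drug: str,
                      stimulus_type: str) -> tuple[str, str]:
    """Replace a real intervention with a counterfactual stimulus in both
    the clinical comparison question and the RCT-style evidence passage."""
    fake_drug = COUNTERFACTUAL_STIMULI[stimulus_type]
    pattern = re.compile(re.escape(real_drug), flags=re.IGNORECASE)
    return pattern.sub(fake_drug, question), pattern.sub(fake_drug, evidence)

# Usage: turn one real comparison item into a counterfactual QA item.
q = "Is metformin more effective than placebo for glycemic control?"
ev = "In a randomized controlled trial, metformin reduced HbA1c by 1.1%..."
cf_q, cf_ev = swap_intervention(q, ev, "metformin", "toxic")
```

Substituting consistently in both question and evidence is what makes the item a faithfulness probe: the context remains internally coherent while contradicting the model's priors.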
Similar Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,694 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,984 citations
CBAM: Convolutional Block Attention Module
2018 · 21,802 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,499 citations
Xception: Deep Learning with Depthwise Separable Convolutions
2017 · 18,702 citations