OpenAlex · Updated hourly · Last updated: 19.04.2026, 11:42

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

T-FIX: Text-Based Explanations with Features Interpretable to eXperts

2025 · 0 citations · arXiv (Cornell University) · Open Access

Citations: 0 · Authors: 15 · Year: 2025

Abstract

As LLMs are deployed in knowledge-intensive settings (e.g., surgery, astronomy, therapy), users are often domain experts who expect not just answers, but explanations that mirror professional reasoning. However, most automatic evaluations of explanations prioritize plausibility or faithfulness rather than testing whether an LLM thinks like an expert. Existing approaches to evaluating professional reasoning rely heavily on per-example expert annotation, making such evaluations costly and difficult to scale. To address this gap, we introduce the T-FIX benchmark, spanning seven scientific tasks across three domains, to operationalize expert alignment as a desired attribute of LLM-generated explanations. Our framework enables automatic evaluation of expert alignment, generalizing to unseen explanations and eliminating the need for ongoing expert involvement.

Similar works