OpenAlex · Updated hourly · Last updated: 13 May 2026, 14:17

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Epistemic compression in large language model explanations of the gut–liver axis

2026 · 0 citations · Frontiers in Cellular and Infection Microbiology · Open Access
Open full text at the publisher

0 citations · 5 authors · published 2026

Abstract

Background: The gut–liver axis integrates intestinal barrier function, microbial ecology, metabolism, immune regulation, and hepatic feedback, yet it remains causally non-closed and strongly context dependent. As large language models (LLMs) increasingly mediate biomedical explanation, their ability to preserve evidentiary structure within such epistemically open frameworks requires systematic evaluation. Methods: We conducted a cross-platform, mixed-methods infodemiology analysis of five widely accessible LLMs. Twenty clinically grounded questions, spanning five hierarchical domains from basic mechanisms to intervention and evaluation, generated 100 single-turn responses. Linguistic accessibility was assessed with seven established readability indices, while epistemic integrity was evaluated using the Journal of the American Medical Association (JAMA) Benchmark Criteria, the Global Quality Score, and a modified DISCERN framework. Results: Linguistic complexity increased as prompts progressed toward intervention and evaluation, without corresponding gains in transparency, reliability, or educational quality. Informational integrity clustered primarily by platform rather than by domain. Readability indices showed strong internal concordance, whereas integrity metrics aligned only moderately with one another and correlated weakly with readability. Item-level analysis revealed consistently high narrative clarity but systematic under-signaling of source attribution and uncertainty, yielding over-coherent explanations that compressed conditional associations into mechanism-like claims. Conclusions: LLM explanations of the gut–liver axis are susceptible to epistemic compression driven by narrative fluency rather than factual error. Readability does not reliably indicate epistemic robustness in decision-adjacent contexts. These findings support shifting evaluation and governance from platform comparison toward concept-conditioned requirement engineering that enforces provenance, calibrated uncertainty, and an explicit separation of correlation, mechanism, and actionability as generative outputs approach clinical relevance.
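The abstract does not name the seven readability indices or the tooling used to compute them. As an illustration of how one common index works, here is a minimal sketch of the Flesch Reading Ease score (206.835 − 1.015 × words/sentence − 84.6 × syllables/word), using a naive vowel-group heuristic for syllable counting; production readability tools use dictionary-based syllabification and more careful sentence segmentation.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels; every word
    # is credited with at least one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    # Split on terminal punctuation for sentences, letters-only runs for words.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Higher scores indicate easier text; long sentences and polysyllabic
# words drive the score down.
print(round(flesch_reading_ease("The cat sat."), 2))  # → 119.19
```

This illustrates only the linguistic-accessibility side of the study's design; the integrity instruments (JAMA Benchmark Criteria, Global Quality Score, modified DISCERN) are rubric-based ratings rather than computable formulas.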

Related works

Authors

Institutions

Topics

Gut microbiota and health · Topic Modeling · Artificial Intelligence in Healthcare and Education