This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Trustworthy Medical Imaging with Large Language Models: A Study of Hallucinations Across Modalities
Citations: 0
Authors: 3
Year: 2025
Abstract
Large Language Models (LLMs) are increasingly applied to medical imaging tasks, including image interpretation and synthetic image generation. However, these models often produce hallucinations: confident but incorrect outputs that can mislead clinical decisions. This study examines hallucinations in two directions: image-to-text, where LLMs generate reports from X-ray, CT, or MRI scans, and text-to-image, where models create medical images from clinical prompts. We analyze errors such as factual inconsistencies and anatomical inaccuracies, evaluating outputs using expert-informed criteria across imaging modalities. Our findings reveal common patterns of hallucination in both interpretive and generative tasks, with implications for clinical reliability. We also discuss factors contributing to these failures, including model architecture and training data. By systematically studying both image understanding and generation, this work provides insights into improving the safety and trustworthiness of LLM-driven medical imaging systems.
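The abstract does not detail how the expert-informed criteria are applied; as a rough illustration only, the sketch below shows one way rule-based checks on a generated radiology report could be encoded. The criteria, term lists, and scoring scheme here are assumptions for illustration, not the authors' method.

```python
# Hypothetical sketch: scoring an LLM-generated radiology report against an
# expert-informed checklist. All criteria and weights are illustrative
# assumptions, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    required_terms: list[str]   # terms an accurate report should contain
    forbidden_terms: list[str]  # terms that would signal a hallucinated finding

def score_report(report: str, criteria: list[Criterion]) -> float:
    """Return the fraction of criteria the report satisfies (1.0 = no violations)."""
    text = report.lower()
    passed = 0
    for c in criteria:
        has_required = all(t in text for t in c.required_terms)
        has_forbidden = any(t in text for t in c.forbidden_terms)
        if has_required and not has_forbidden:
            passed += 1
    return passed / len(criteria)

# Example usage with two made-up criteria:
criteria = [
    Criterion("laterality stated", ["left"], []),
    Criterion("no unsupported device finding", [], ["pacemaker"]),
]
print(score_report("Left lower lobe opacity, no acute findings.", criteria))  # 1.0
```

In practice, such keyword checks would only approximate expert review; the paper's expert-informed evaluation presumably involves clinician judgment rather than string matching.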
Related Work
Rethinking the Inception Architecture for Computer Vision
2016 · 30,707 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 25,015 citations
CBAM: Convolutional Block Attention Module
2018 · 21,829 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,506 citations
Xception: Deep Learning with Depthwise Separable Convolutions
2017 · 18,717 citations