OpenAlex · Updated hourly · Last updated: 08.04.2026, 09:47

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Learning to Diagnose Privately: DP-Powered LLMs for Radiology Report Classification

2026 · 0 citations · IEEE Access · Open Access

Citations: 0 · Authors: 8 · Year: 2026

Abstract

Large Language Models (LLMs) are being widely adopted across domains such as education, healthcare, and finance. In healthcare, LLMs are used for disease diagnosis, abnormality classification, and treatment suggestions. Multi-abnormality classification of radiology reports is essential in healthcare, medical decision-making, and drug discovery. LLMs are increasingly utilized for such tasks due to their remarkable Natural Language Processing (NLP) capabilities, which streamline medical report processing and reduce administrative burdens. To enhance predictive accuracy, LLMs are often fine-tuned on private, locally available datasets, such as medical reports. However, this practice raises significant privacy concerns: LLMs are prone to memorizing training data, making them susceptible to data extraction attacks even through query-based access. Additionally, sharing fine-tuned models and their weights poses adversarial risks, as they may inadvertently reveal sensitive information about the training data. Despite the growing application of LLMs to medical text classification, privacy-preserving fine-tuning for multi-abnormality classification remains underexplored. To bridge this gap, we propose a differentially private (DP) fine-tuning approach that preserves privacy while enabling multi-abnormality classification from free-text radiology reports through Low-Rank Adaptation (LoRA). Our framework leverages DP optimization techniques to fine-tune LLMs on local patient data while mitigating data leakage risks. To our knowledge, this is the first study to incorporate DP fine-tuning of LLMs for multi-abnormality classification using text-based radiology reports. We use labels generated by a larger LLM to fine-tune a smaller LLM, accelerating inference while maintaining privacy constraints.
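The abstract references Low-Rank Adaptation (LoRA) but, as a metadata page, carries no implementation detail. As a minimal illustrative sketch only (not the authors' code), LoRA freezes a pretrained weight matrix W and learns a low-rank correction A·B; the function name and the common alpha/r scaling below are assumptions for illustration.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Forward pass through a frozen weight plus a LoRA adapter:
    y = x @ (W + (alpha / r) * A @ B).

    W: (d_in, d_out) frozen pretrained weight
    A: (d_in, r), B: (r, d_out) trainable low-rank factors;
    only A and B receive gradient updates during fine-tuning.
    """
    r = A.shape[1]  # adapter rank
    return x @ W + (alpha / r) * (x @ A) @ B
```

With B initialized to zeros (the usual LoRA initialization), the adapted layer starts out identical to the frozen pretrained layer.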
We conduct extensive experiments on the MIMIC-CXR and CT-RATE datasets to evaluate the DP fine-tuning method across varying privacy regimes, analyzing the privacy-utility trade-off and demonstrating the efficacy of our approach. For instance, on the MIMIC-CXR dataset, our proposed DP-LoRA framework achieves weighted F1-scores of up to 0.89 under a moderate privacy budget (ϵ = 10), approaching the performance of non-private LoRA (0.90) and full fine-tuning (0.96). These results demonstrate that strong privacy protection can be achieved with only moderate performance degradation.
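The "DP optimization techniques" the abstract mentions typically mean DP-SGD: each example's gradient is clipped to a fixed L2 norm, the clipped gradients are summed, and calibrated Gaussian noise is added before averaging. The sketch below is a hedged numpy illustration of one such update step under that assumption, not the paper's implementation; in a DP-LoRA setup it would be applied only to the adapter parameters.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0,
                lr=0.1, rng=None):
    """One DP-SGD update from a batch of per-example gradients.

    Each gradient is clipped to L2 norm `clip_norm`, the clipped
    gradients are summed, Gaussian noise with std
    `noise_multiplier * clip_norm` is added, and the noisy sum is
    averaged over the batch. Returns the parameter update (-lr * avg).
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return -lr * (total + noise) / len(per_example_grads)
```

The noise multiplier is what ties the step to a privacy budget ϵ: larger multipliers give smaller ϵ (stronger privacy) at the cost of noisier, lower-utility updates, which is the privacy-utility trade-off the experiments quantify.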
