OpenAlex · Updated hourly · Last updated: 10.05.2026, 12:56

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Quality and Readability of Large Language Models' Responses to Oral Lichen Planus Patients' FAQs

2026 · 0 citations · Oral Diseases

0 citations · 17 authors · year 2026

Abstract

OBJECTIVE: To evaluate the quality and readability of large language models (LLMs) when responding to frequently asked questions (FAQs) about oral lichen planus (OLP).

METHODS: We evaluated the responses of three LLMs (ChatGPT-4o, Gemini 2.0 Flash Experimental, and Copilot) to 13 patient-centered FAQs about OLP. Questions were identified using query tools, and answers were assessed by 14 oral medicine experts using the Quality Assessment of Medical Artificial Intelligence (QAMAI) tool. Readability was analyzed with the Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKG) tools.

RESULTS: All LLMs provided generally accurate and relevant responses, with median QAMAI scores indicating "good" to "very good" quality. ChatGPT achieved slightly higher completeness, particularly for questions on OLP definition and treatment. Reference provision was inconsistent across all chatbots. Readability analysis revealed that most responses required college-level literacy, with ChatGPT producing the most complex texts, Gemini occasionally achieving more accessible outputs, and Copilot falling in between.

CONCLUSIONS: LLMs may have potential as adjunctive tools for patient education in OLP, although they remain limited by incomplete information, inconsistent references, and suboptimal readability. Future research should incorporate longitudinal LLM evaluation and training to develop models that deliver accurate, accessible information tailored to users' literacy levels.
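The abstract does not reproduce the readability formulas themselves; for context, a minimal sketch of the standard Flesch Reading Ease and Flesch-Kincaid Grade Level formulas (the widely published definitions, not code from the article) looks like this:

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    # Standard FRE formula: higher scores indicate easier text.
    # Scores of roughly 30-50 correspond to college-level reading difficulty.
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    # Standard FKGL formula: the result approximates a U.S. school grade level.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Illustrative counts (hypothetical, not from the study): a 100-word passage
# with 5 sentences and 170 syllables.
fre = flesch_reading_ease(100, 5, 170)    # about 42.7 -> "college" band
fkgl = flesch_kincaid_grade(100, 5, 170)  # about 12.3 -> grade ~12
```

Both scores depend only on average sentence length and average syllables per word, which is why long-sentence, polysyllabic chatbot answers tend to score in the college-level range reported above.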
