This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Assessing the readability of responses produced by ChatGPT and Gemini when answering questions about the gastrointestinal system
Citations: 0
Authors: 2
Year: 2026
Abstract
Introduction: The use of artificial intelligence has become a pivotal element in the timely identification of gastrointestinal diseases, markedly improving lesion detection and diagnostic accuracy. A comparison of the AI models ChatGPT and Gemini reveals distinct strengths and applications across fields. Although AI can significantly advance gastrointestinal system pharmacology research, broader implications and challenges must be considered. The objective of this study was to compare the responses of AI models to questions about gastrointestinal system pharmacology and to assess their readability.
Methodology: This study was conducted using 30 multiple-choice questions in the field of pharmacology. The questions were answered and evaluated using two LLMs: GPT-4.0, developed by OpenAI, and Gemini 2.0, developed by Google. The readability and comprehensibility of the English responses were compared using the Automated Readability Index (ARI), Flesch-Kincaid, Gunning Fog index, Coleman-Liau index, SMOG score, and FORCAST score.
Results: The average score for responses provided by OpenAI was 26.78±0.41, while the average score for responses provided by Gemini was 28.90±0.91. The number of correct answers provided by Gemini was significantly higher than that of OpenAI (p=0.045). A readability comparison was performed for the 30 questions. The average OpenAI score for ARI was 13.04±1.77, while the average Gemini score was 14.76±2.04, a significant difference (p<0.001).
Conclusion: The present study demonstrated discrepancies between ChatGPT and Google Gemini in gastrointestinal system pharmacology, as well as differences in the readability of their responses.
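The readability indices compared in the abstract are deterministic formulas over surface features of a text. As a minimal sketch, the Automated Readability Index (ARI) mentioned above can be computed as 4.71·(characters/words) + 0.5·(words/sentences) − 21.43; the tokenisation heuristics below (regex-based sentence and word splitting) are simplifying assumptions, not the method used in the study.

```python
import re

def automated_readability_index(text: str) -> float:
    """ARI = 4.71*(chars/words) + 0.5*(words/sentences) - 21.43.

    Higher values indicate text requiring a higher (US) grade level.
    Sentence/word splitting here is a simple heuristic.
    """
    # Split sentences on terminal punctuation; drop empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Words are runs of letters, digits, or apostrophes.
    words = re.findall(r"[A-Za-z0-9']+", text)
    chars = sum(len(w) for w in words)  # letters/digits only, no spaces
    return 4.71 * (chars / len(words)) + 0.5 * (len(words) / len(sentences)) - 21.43

simple = "The cat sat on the mat."
complex_ = "Gastrointestinal pharmacology encompasses pharmacokinetic considerations."
print(automated_readability_index(simple))    # low (simple words, short sentence)
print(automated_readability_index(complex_))  # much higher
```

A longer-worded, longer-sentenced response yields a higher ARI, which is why the reported Gemini responses (ARI 14.76) read as harder than the OpenAI ones (ARI 13.04).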
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,485 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,371 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,827 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,549 citations