OpenAlex · Updated hourly · Last updated: 13.04.2026, 16:19

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Warning: Artificial intelligence chatbots can generate inaccurate medical and scientific information and references

2024 · 7 citations · Exploration of Digital Health Technologies · Open Access

Citations: 7
Authors: 3
Year: 2024

Abstract

The use of generative artificial intelligence (AI) chatbots, such as ChatGPT and YouChat, has increased enormously since their release in late 2022. Concerns have been raised over the potential of chatbots to facilitate cheating in education settings, including essay writing and exams. In addition, multiple publishers have updated their editorial policies to prohibit chatbot authorship on publications. This article highlights another potentially concerning issue: the strong propensity of chatbots, in response to queries requesting medical and scientific information and its underlying references, to generate plausible-looking but inaccurate responses, including nonexistent citations. As an example, a number of queries were generated and, using two popular chatbots, demonstrated that both produced inaccurate outputs. The authors thus urge extreme caution, because unwitting application of inconsistent and potentially inaccurate medical information could have adverse outcomes.


Topics

Artificial Intelligence in Healthcare and Education
Congenital Heart Disease Studies
Machine Learning in Healthcare