This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Comparing answers of artificial intelligence systems and clinical toxicologists to questions about poisoning: Can their answers be distinguished?
Citations: 2
Authors: 3
Year: 2024
Abstract
OBJECTIVE: To present questions about poisoning to 4 artificial intelligence (AI) systems and 4 clinical toxicologists and determine whether readers can identify the source of the answers. To evaluate and compare text quality and level of knowledge found in the AI and toxicologists' responses.
METHODS: Ten questions about toxicology were presented to the following AI systems: Copilot, Bard, Luzia, and ChatGPT. Four clinical toxicologists were asked to answer the same questions. Twenty-four recruited experts in toxicology were sent a pair of answers (1 from an AI system and 1 from a toxicologist) for each of the 10 questions. For each answer, the experts had to identify the source, evaluate text quality, and assess the level of knowledge reflected. Quantitative variables were described as mean (SD) and qualitative ones as absolute frequency and proportion. A value of P < .05 was considered significant in all comparisons.
RESULTS: Of the 240 evaluated AI answers, the expert evaluators thought that 21 (8.8%) and 38 (15.8%), respectively, were certainly or probably written by a toxicologist. The experts were unable to guess the source of 13 (5.4%) AI answers. Luzia and ChatGPT were better able to mislead the experts than Bard (P = .036 and P = .041, respectively). Text quality was judged excellent in 38.8% of the AI answers. ChatGPT text quality was rated highest (61.3% excellent) vs Bard (34.4%), Luzia (31.7%), and Copilot (26.3%) (P < .001, all comparisons). The average score for the level of knowledge perceived in the AI answers was 7.23 (1.57) out of 10. The highest average score was achieved by ChatGPT at 8.03 (1.26) vs Luzia (7.02 [1.63]), Bard (6.91 [1.64]), and Copilot (6.91 [1.46]) (P < .001, all comparisons).
CONCLUSIONS: Luzia and ChatGPT answers to the toxicology questions were often thought to resemble those of clinical toxicologists. ChatGPT answers were judged to be very well written and to reflect a very high level of knowledge.
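The reported percentages in the RESULTS section follow directly from the stated denominator of 240 AI answers (4 AI systems × 10 questions × some multiple of evaluators). A minimal sketch checking that the counts and percentages are mutually consistent (the counts 21, 38, and 13 are taken from the abstract; the helper name is illustrative, not from the article):

```python
# Verify that the counts reported in the abstract match the stated percentages.
TOTAL_AI_ANSWERS = 240  # total AI answers evaluated, per the abstract

def as_percent(count: int, total: int = TOTAL_AI_ANSWERS) -> float:
    """Return the proportion as a percentage rounded to one decimal place."""
    return round(100 * count / total, 1)

print(as_percent(21))  # answers judged "certainly written by a toxicologist" -> 8.8
print(as_percent(38))  # answers judged "probably written by a toxicologist" -> 15.8
print(as_percent(13))  # answers whose source the experts could not guess -> 5.4
```

Each rounded value matches the percentage given in the abstract, confirming the counts and denominator are internally consistent.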
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,652 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,567 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,083 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,856 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations