This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Large Language Models Seem Miraculous, but Science Abhors Miracles
Citations: 9
Authors: 1
Year: 2024
Abstract
Generative artificial intelligence models exhibit amazing abilities but make serious errors. We have a very limited understanding of why they work well at all or of the circumstances under which they give incorrect responses. This suggests the need for additional research and great caution in deploying such models for critical applications. Since the availability of ChatGPT in late 2022, based on OpenAI's GPT 3.5 large language model, those of us who have explored its capabilities have been amazed by its facility with language and its abilities to generate coherent — and even insightful — synopses; answer questions about everything from general knowledge to domain-specific topics; offer advice on how to accomplish tasks, including for medical diagnosis, therapy, and prognosis; deduce consequences of assumptions; and even write effective computer programs. Nevertheless, I would urge great caution in adopting such methods in health care, mainly because of our lack of understanding of how they accomplish the miraculous-seeming things they are able to do.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,436 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,311 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,753 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,523 citations