This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Causes of Content Distortion: Analysis and Classification of Hallucinations in GPT Large Language Models
Citations: 0
Authors: 2
Year: 2025
Abstract
The article examines hallucinations that arise in two versions of the GPT large language model, GPT-3.5-turbo and GPT-4. The main objective is to investigate potential sources of hallucinations, to classify them, and to develop strategies for addressing them. The study identifies issues that can lead to generated content that does not correspond to factual data and misleads users. The findings have practical significance for developers and users of language models, as the proposed approaches can enhance the quality and reliability of generated content.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations