OpenAlex · Updated hourly · Last updated: 2026-04-08, 18:02

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Causes of Content Distortion: Analysis and Classification of Hallucinations in GPT Large Language Models

2025 · 0 citations · Scientific and Technical Information Processing
Open full text at the publisher

Citations: 0
Authors: 2
Year: 2025

Abstract

The article examines hallucinations that arise in two versions of the GPT large language model, GPT-3.5-turbo and GPT-4. The main objective is to investigate potential sources of hallucinations, to classify them, and to develop strategies for addressing them. The study identifies issues that can lead to generated content that does not correspond to factual data and misleads users. The findings are of practical significance for developers and users of language models, as the proposed approaches can enhance the quality and reliability of generated content.
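
The abstract does not describe the authors' experimental setup, but the kind of comparison it alludes to can be illustrated with a minimal sketch: sending the same factual question to both model versions and flagging answers that contradict a known reference. Everything below is an assumption for illustration, not the paper's method; it presumes the OpenAI Python client (openai >= 1.0) with an API key in the environment, and the substring check is deliberately naive.

```python
# Minimal illustrative sketch -- NOT the method used in the paper, which
# the abstract does not detail. Assumes the OpenAI Python client
# (pip install openai) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# A factual question with a known reference answer, used to spot-check
# whether a model's output contradicts established facts.
QUESTION = "In which year was the Turing Award first given?"
REFERENCE = "1966"

def ask(model: str, question: str) -> str:
    """Send one question to the given model and return its text answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
        temperature=0,  # reduce sampling variation for comparability
    )
    return response.choices[0].message.content

for model in ("gpt-3.5-turbo", "gpt-4"):
    answer = ask(model, QUESTION)
    # Naive check: an answer that omits the reference string is flagged
    # as a potential hallucination for manual review.
    status = "ok" if REFERENCE in answer else "potential hallucination"
    print(f"{model}: {status}\n  {answer}")
```

A real evaluation would use many such question/reference pairs and a more robust matching or human-review step; the sketch only shows the basic shape of comparing the two model versions against factual data.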

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Misinformation and Its Impacts