OpenAlex · Updated hourly · Last updated: 27.04.2026, 18:41

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Generative AI are More Truth-Biased than Humans: A Replication and Extension of Core Truth-Default Theory Principles

2023 · 6 citations · Open Access
Open full text at the publisher

Citations: 6 · Authors: 2 · Year: 2023

Abstract

Human communication requires cooperative partners for it to be effective and efficient. A result of this requirement is the truth-bias, defined as the perception that others are honest independent of actual message veracity. Does the truth-bias extend to technology like generative Artificial Intelligence (AI)? Drawing on truth-default theory (TDT), we had humans and three chatbots running different large language models — ChatGPT (GPT-3.5), Bard (LaMDA), ChatSonic (GPT-4) — make nearly 1,000 veracity judgments across three prompts. Consistent with TDT, human detection accuracies were near chance (50%-53%) with notable truth-biases (59%-64%). Critically, AI had a substantially greater truth-bias than humans (67%-99%), even after providing AI with a genuine lie-truth base-rate. GPT-4 was also truth-default, not suspecting any deception across samples when veracity assessments were unprompted. These data replicate the idea that people judge most information to be true, and such evidence also extends to artificial intelligence.


Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · Misinformation and Its Impacts