This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Generative AI are More Truth-Biased than Humans: A Replication and Extension of Core Truth-Default Theory Principles
6 citations · 2 authors · 2023
Abstract
Human communication requires cooperative partners to be effective and efficient. One consequence of this requirement is the truth-bias: the perception that others are honest, independent of actual message veracity. Does the truth-bias extend to technology such as generative Artificial Intelligence (AI)? Drawing on truth-default theory (TDT), we had humans and three chatbots, each running a different large language model (ChatGPT on GPT-3.5, Bard on LaMDA, and ChatSonic on GPT-4), make nearly 1,000 veracity judgments across three prompts. Consistent with TDT, human detection accuracy was near chance (50%-53%) with notable truth-biases (59%-64%). Critically, the AI models showed a substantially greater truth-bias than humans (67%-99%), even after being given the genuine lie-truth base rate. GPT-4 also exhibited a truth-default, suspecting no deception in any sample when veracity assessments were unprompted. These data replicate the finding that people judge most information to be true, and extend that evidence to artificial intelligence.
Related works
The global landscape of AI ethics guidelines
2019 · 4,711 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,884 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,502 citations
Fairness through awareness
2012 · 3,301 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,192 citations