OpenAlex · Updated hourly · Last updated: 21.04.2026, 10:27

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Using conversational AI to reduce science skepticism

2025 · 0 citations · Current Opinion in Psychology · Open Access
Open full text at publisher

0 Citations · 5 Authors · Year 2025

Abstract

Mistrust of the scientific consensus around issues such as climate change and vaccination is mainstream, compromising our ability to respond to existential global threats. In the wrong hands, generative AI can spread misinformation with unprecedented scale and psychological sophistication. However, large language models (LLMs) have also shown considerable promise for reducing misinformation and conspiracy theories, potentially revolutionizing science communication. This review summarizes the rapidly evolving frontier of empirical research on how conversational AI such as ChatGPT can be used to defuse mistrust of science around hot-button scientific issues. These studies find negligible evidence that LLMs respond to human queries by reproducing conspiracy theories or misinformation about scientific topics. Rather, conversations with LLMs typically reduce participants' levels of science skepticism and misinformation endorsement. We conclude that LLMs (in their current form) have the potential to complement existing science communication strategies, provided their use is accompanied by safeguards that preserve informational integrity and public trust.

Topics

Misinformation and Its Impacts
Artificial Intelligence in Healthcare and Education
Vaccine Coverage and Hesitancy