This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Using conversational AI to reduce science skepticism
Citations: 0
Authors: 5
Year: 2025
Abstract
Mistrust of the scientific consensus on issues such as climate change and vaccination has become mainstream, compromising our ability to respond to existential global threats. In the wrong hands, generative AI can spread misinformation with unprecedented scale and psychological sophistication. However, large language models (LLMs) have also shown considerable promise for reducing misinformation and belief in conspiracy theories, potentially revolutionizing science communication. This review summarizes the rapidly evolving frontier of empirical research on how conversational AI such as ChatGPT can be used to defuse mistrust of science around hot-button scientific issues. These studies find negligible evidence that LLMs respond to human queries by reproducing conspiracy theories or misinformation about scientific topics. Rather, conversations with LLMs typically reduce participants' levels of science skepticism and misinformation endorsement. We conclude that LLMs (in their current form) have the potential to complement existing science communication strategies, provided their use is accompanied by safeguards that preserve informational integrity and public trust.
Related works
The spread of true and false news online
2018 · 8,118 citations
What is Twitter, a social network or a news media?
2010 · 6,667 citations
Social Media and Fake News in the 2016 Election
2017 · 6,439 citations
Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children's understanding of deception
1983 · 6,270 citations
The Matthew Effect in Science
1968 · 6,176 citations