This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Generics in science communication: Misaligned interpretations across laypeople, scientists, and large language models
Citations: 0
Authors: 5
Year: 2026
Abstract
Scientists often use generics, that is, unquantified statements about whole categories of people or phenomena, when communicating research findings (e.g., "statins reduce cardiovascular events"). Large language models, such as ChatGPT, frequently adopt the same style when summarizing scientific texts. However, generics can prompt overgeneralizations, especially when they are interpreted differently across audiences. In a study comparing laypeople, scientists, and two leading large language models (ChatGPT-5 and DeepSeek), we found systematic differences in the interpretation of generics. Compared with most scientists, laypeople judged scientific generics as more generalizable and credible, while large language models rated them even higher. These mismatches highlight significant risks for science communication. Scientists may use generics and incorrectly assume laypeople share their interpretation, while large language models may systematically overgeneralize scientific findings when summarizing research. Our findings underscore the need for greater attention to language choices in both human and large language model-mediated science communication.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,646 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,554 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,071 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,851 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations