This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Political biases in ChatGPT: insights from comparative analysis with human responses
0
Citations
2
Authors
2025
Year
Abstract
We investigate the political and ideological positioning of ChatGPT, a leading large language model (LLM), by comparing its responses to political economy questions from the European Social Survey (ESS) with those of representative human samples. The questions focus on environmental sustainability, civil rights, income inequality, and government size. We analyze two distinct dimensions of bias: an absolute bias, measured as the deviation of ChatGPT's answers from the positions of ESS respondents who locate themselves at the center, and a self-perception bias, captured by the difference between ChatGPT's self-reported left-right placement and the ideological stance that can be inferred from its substantive answers. Our results reveal a significant left-leaning absolute bias in ChatGPT's responses, particularly on environmental and civil rights issues, which exceeds its own declared center-left self-placement. These findings highlight the importance of transparency regarding AI biases to mitigate unintended ideological influences on users. We conclude by discussing the implications for AI governance, debiasing approaches, and the educational use of language models.
Similar works
Techniques to Identify Themes
2003 · 5,383 citations
Answering the Call for a Standard Reliability Measure for Coding Data
2007 · 4,075 citations
Basic Content Analysis
1990 · 4,045 citations
Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts
2013 · 3,068 citations