This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Does language bias <scp>GenAI</scp> academic evaluation in humanities and social sciences? A mixed-methods study based on Chinese-language <scp>HSS</scp> papers
0
Citations
6
Authors
2026
Year
Abstract
As generative AI (GenAI) systems are increasingly deployed in cross-language research evaluation, whether GenAI evaluates multilingual scholarship without language-induced bias remains unclear. This study examines language bias patterns in GenAI evaluation of humanities and social sciences (HSS) research across models and disciplines. Using a within-subjects design, 1150 expert-selected papers from 23 disciplines were evaluated by GPT-4o and DeepSeek-V3 in Chinese and English. Results reveal opposite language biases depending on model type: GPT-4o favors English (Cohen's d = 1.10), while DeepSeek-V3 favors Chinese (Cohen's d = −0.87), persisting across all disciplines. Thematic analysis reveals a systematic decoupling between scores and evaluative reasoning: both models generate more critical comments for English papers, yet arrive at opposite scores through different rhetorical strategies: GPT-4o tends to moderate its positive assessments of Chinese papers, while DeepSeek-V3 amplifies them. This decoupling suggests that bias is embedded in the multi-layered pathways through which models generate and aggregate evaluations. This study provides controlled evidence that language bias in GenAI evaluation is bidirectional and model-dependent, with scores not directly reflecting evaluative justifications. The findings have implications for designing fairer multilingual academic evaluation systems and for understanding the limitations of GenAI as scholarly evaluation infrastructure.
Related works
2019 · 31,716 citations
Techniques to Identify Themes
2003 · 5,388 citations
Answering the Call for a Standard Reliability Measure for Coding Data
2007 · 4,078 citations
Basic Content Analysis
1990 · 4,045 citations
Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts
2013 · 3,068 citations