OpenAlex · Updated hourly · Last updated: 19.04.2026, 07:07

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Influence of solution efficiency and valence of instruction on additive and subtractive solution strategies in humans, GPT-4, and GPT-4o

2026 · 0 citations · Communications Psychology · Open Access

0 citations · 5 authors · Year: 2026

Abstract

Generative artificial intelligence systems, particularly Large Language Models (LLMs), increasingly influence human decision-making, making it essential to understand how cognitive biases are reproduced or amplified in these systems. Building on evidence of the human “addition bias” – a preference for additive over subtractive problem-solving strategies [1] – this research compared humans with GPT-4 (Study 1) and GPT-4o (Study 2) in spatial and linguistic tasks. Study 1 comprised four experiments (1a, 1b, 2a, 2b) with 588 human participants and 680 GPT-4 outputs; Study 2 included two experiments (3a, 3b) with 751 human participants and 1,080 GPT-4o outputs. We manipulated (a) solution efficiency and (b) instruction valence. Across both studies, a general addition bias emerged that was more pronounced in the LLMs than in humans. Humans made fewer additive choices when subtraction was more efficient than addition (compared to when both were equally efficient), whereas GPT-4’s output showed the opposite pattern. GPT-4o’s outputs aligned with those of humans in the linguistic task but showed no efficiency effect in the spatial task. Instruction valence did not reach statistical significance for either agent in the spatial task. In the linguistic task, positive valence (compared to neutral valence) led to more additive outputs in both GPT models, but only in Study 2 for humans. These findings indicate that addition bias has been transferred to LLMs, which can replicate and, depending on context, amplify this human bias. This emphasizes the importance of further theoretical and empirical work on the cognitive and data-driven mechanisms underlying addition bias in both humans and LLMs.
