This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
An exploratory semantic analysis of age-related stereotypes in OpenAI’s GPT 4o model
Citations: 0
Authors: 2
Year: 2025
Abstract
Background and Objectives: Generative artificial intelligence, particularly large language models (LLMs), is increasingly used to navigate information, potentially shaping users' perceptions of different social groups. This study examines age-related stereotypes in LLM-generated text using natural language processing (NLP) techniques.

Research Design and Methods: To ensure neutrality, extensive pilot testing was conducted to craft a prompt that did not elicit bias yet generated coherent responses. The final prompt, "Describe the personality of a [AGE]-year-old person," was used with OpenAI's GPT-4o API in February 2025, varying AGE from 10 to 90 in 10-year increments. The analysis was guided by the Stereotype Content Model, which assesses social cognition along two key dimensions: warmth (sociability, morality) and competence (ability, assertiveness). Scores were quantified using sentence embeddings.

Results: Text similarity and stereotype content analyses revealed three age clusters, with older adults showing the most internal consistency. Descriptions of individuals aged 60 years and above featured relatively higher warmth but lower competence compared to younger groups. Notably, positive assertiveness terms were rarely used to describe older adults.

Discussion and Implications: Findings suggest that GPT-4o may embed subtle age-related stereotypes, even when using largely positive language. These patterns potentially influence user perceptions through repeated exposure. Future research should investigate the mechanisms behind these biases and explore mitigation strategies to promote more age-inclusive artificial intelligence-generated content.
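The prompting and scoring procedure described in the methods can be sketched as follows. This is a minimal illustration, not the authors' code: the toy bag-of-words `embed` function stands in for the unspecified sentence-embedding model, and the `WARMTH`/`COMPETENCE` anchor term lists are hypothetical examples of Stereotype Content Model dimension words.

```python
from math import sqrt

# Ages sampled in the study: 10 to 90 in 10-year increments.
AGES = range(10, 100, 10)

def build_prompt(age: int) -> str:
    # The study's final prompt template, with [AGE] substituted.
    return f"Describe the personality of a {age}-year-old person."

# Illustrative SCM anchor terms (assumed for this sketch, not from the paper).
WARMTH = ["friendly", "warm", "kind", "sociable"]
COMPETENCE = ["capable", "skilled", "assertive", "competent"]

def embed(text: str, vocab: list[str]) -> list[float]:
    # Toy word-count vector over a fixed vocabulary; a stand-in for
    # the sentence embeddings the paper actually uses.
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def scm_scores(description: str) -> tuple[float, float]:
    # Score a model-generated description on warmth and competence
    # by cosine similarity to the anchor-term vectors.
    vocab = sorted(set(WARMTH + COMPETENCE))
    v = embed(description, vocab)
    return (cosine(v, embed(" ".join(WARMTH), vocab)),
            cosine(v, embed(" ".join(COMPETENCE), vocab)))
```

In the actual study, each prompt would be sent to the GPT-4o API and the response scored; here, the pipeline can be exercised on any text, e.g. `scm_scores("a warm friendly kind person")` yields a higher warmth than competence score.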
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,652 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,567 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,083 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,856 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations