This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Generative artificial intelligence dependency: Scale development, validation, and its motivational, behavioral, and psychological correlates
Citations: 10
Authors: 3
Year: 2025
Abstract
As generative Artificial Intelligence (AI) becomes increasingly integrated into daily life, concerns have emerged about growing dependency on AI and its potential psychological and behavioral consequences. The present research develops and validates the Generative AI Dependency Scale, a multidimensional tool developed to assess individual differences in dependency on generative AI systems. Across six studies involving 1,333 participants from the United States and Singapore, the Generative AI Dependency Scale demonstrated strong psychometric properties, including a stable three-factor structure (cognitive preoccupation, negative consequences, withdrawal) and good test-retest reliability (ICC = .87). Confirmatory factor analysis supported a higher-order dependency construct, and scalar measurement invariance was established across sex and cultures. Convergent and discriminant validity were demonstrated through associations with an existing AI addiction scale and the Big Five personality traits respectively. Generative AI dependency was also significantly associated with a range of motivational (e.g., lower basic psychological need satisfaction, greater fear of missing out), behavioral (e.g., increased procrastination and cognitive failures, lower task performance and critical thinking), and psychological (e.g., reduced self-concept clarity, greater loneliness) outcomes. Framed within Goodman’s behavioral dependency framework and self-determination theory, these findings suggest that generative AI dependency reflects not merely excessive technology use, but a deeper misalignment between psychological needs and the strategies employed to meet them. The Generative AI Dependency Scale offers a psychometrically robust foundation for future research into the impacts of generative AI, with implications for responsible AI design and use.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,422 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,300 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,734 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,519 citations