This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Artificial Intelligence (AI) Chatbots and Mental Health: Have We Learned Nothing From the Global Social Media Experiment?
Citations: 3
Authors: 1
Year: 2025
Abstract
What can possibly go wrong with having an online social network where friends, acquaintances, and relatives across continents can connect, correspond, share photos, and like each other's "posts"? Now, almost 20 years after Facebook was widely launched and followed by a host of competing social media (SoMe) services, the answer, unfortunately, seems to be: quite a lot! Indeed, there are credible reports of SoMe having been used for targeted misinformation of voters and persecution of vulnerable minorities [1-4]. In addition to these problems for democracies and minorities, there seems to be consensus that SoMe has a net-negative effect on the mental health of its users, probably driven, at least partly, by upward social comparison (i.e., "my peers lead a better life than I do") [5-13]. This knowledge is now leading governments to enforce age restrictions on access to SoMe and to propose limits on time spent on SoMe [14, 15]. In hindsight, these regulations came far too late, and the harm caused by this latency is likely both substantial and, in many cases, irreversible.

These days, history seems to be repeating itself—only with SoMe replaced by chatbots driven by generative artificial intelligence (AI chatbots). AI chatbots have taken the world by storm, led by ChatGPT from OpenAI, which has surpassed 800 million weekly users [16]. Unfortunately, the extremely rapid uptake of this technology has been accompanied by an increasing number of reports of vulnerable individuals experiencing severe mental health crises, for example, delusions and mania—some with fatal consequences (including a case of murder-suicide [17])—alongside use of AI chatbots [18-22]. Although the jury is still out as to whether this represents a causal effect of AI chatbot use, the anecdotal evidence from these cases of mental health crises appears to be strong enough to have led OpenAI to modify ChatGPT in an attempt to make it more psychologically safe [23, 24]. Time will tell whether these endeavors suffice.

My gut feeling is that it will be very difficult to strike the right balance between making AI chatbots safe to use from a mental health perspective and maintaining their user appeal (and, hence, their commercial potential). This is because a key aspect of their appeal (their personal and praising communication tone—sometimes referred to as "sycophancy" when obsequious) also seems to be a central part of the mechanism driving the psychological harm they appear to cause (e.g., by validating delusional thinking or stimulating pathologically elevated mood) [18-22].

Some may argue that the introduction of new types of media and technologies has always led to a public "scare" regarding behavioral/psychological consequences that has since turned out to be exaggerated. However, I will argue that both SoMe and AI chatbots are so qualitatively different from past types of media and technologies that the scare is merited in these two cases—and may even underestimate the risks. Specifically, regarding SoMe, the highly visible/accessible numbers of friends, followers, likes, and reposts allow for direct, quantitative social comparison, unlike all media preceding it. AI chatbots, on the other hand, are unique in their combination of a seamless, highly anthropomorphized user interface (including convincing voice modes) and sycophancy.
Both, I would argue, can be recipes for disaster from a mental health perspective, but with quite different consequences that align with their unique features outlined above. Indeed, while social media seems to contribute predominantly to "internalizing" symptoms (e.g., depression, anxiety, and reduced psychological well-being) [5-13], AI chatbots appear to contribute more strongly to "productive" symptoms (e.g., delusions and mania) [18-22]. Unfortunately, despite these somewhat opposing tendencies, they are highly unlikely to cancel each other out.

If AI chatbots can indeed have negative mental health consequences for their users, the question posed in the title of this editorial becomes urgent: Have we learned nothing from the global social media experiment? Unfortunately, the answer seems to be a resounding "no". In hindsight, a key factor in allowing SoMe to do psychological harm for almost two decades was the absence of requirements for psychological safety testing preceding worldwide rollout—essentially making this an uncontrolled experiment on a global scale. We now seem to be witnessing the exact same train of events in the case of AI chatbots. Indeed, psychological safety testing appears to be taking place predominantly after the fact [23], and the users are paying the price [18-22]. Unfortunately, there is little reason to hope for firmer regulation of the predominantly US-based companies developing the AI chatbots with regard to requirements for psychological safety testing and so forth—quite the contrary [25]. Thus, it seems that we remain in the hands of the tech companies, one of which allegedly had "move fast and break things" as an earlier internal motto [26]. Unfortunately, with the global rollout of AI chatbots in mind, it is hard to see that the retirement of this motto has been accompanied by any change in risk appetite in this industry.

Funding
There was no specific funding for this editorial.

Conflicts of Interest
Søren Dinesen Østergaard received the 2020 Lundbeck Foundation Young Investigator Prize. Furthermore, S.D.Ø. owns/has owned units of mutual funds with stock tickers DKIGI, IAIMWC, SPIC25KL, and WEKAFKI, and owns/has owned units of exchange-traded funds with stock tickers BATE, TRET, QDV5, QDVH, QDVE, SADM, IQQH, USPY, EXH2, 2B76, IS4S, OM3X, EUNL, and SXRV.

Data Availability Statement
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
Related Works
The Psychological Meaning of Words: LIWC and Computerized Text Analysis Methods
2009 · 5,718 citations
The Stress Process
1981 · 4,485 citations
Mental health problems and social media exposure during COVID-19 outbreak
2020 · 2,795 citations
Cross-national prevalence and risk factors for suicidal ideation, plans and attempts
2008 · 2,635 citations
Psychological Aspects of Natural Language Use: Our Words, Our Selves
2002 · 2,561 citations