This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Governing AI in Mental Health: 50-State Legislative Review
Citations: 9
Authors: 8
Year: 2025
Abstract
Background: Mental health-related artificial intelligence (MH-AI) systems are proliferating across consumer and clinical contexts, outpacing regulatory frameworks and raising urgent questions about safety, accountability, and clinical integration. Reports of adverse events, including instances of self-harm and harmful clinical advice, highlight the risks of deploying such tools without clear standards and oversight. Federal authority over MH-AI is fragmented, leaving state legislatures to serve as de facto laboratories for MH-AI policy. Some states have been highly active in this area during recent legislative sessions. Yet clinicians and professional organizations have largely been absent from, or sidelined in, public commentary and policymaking bodies, raising concerns that new laws may diverge from the realities of mental health care.

Objective: To systematically analyze recent state-level legislation relevant to MH-AI, categorize bills by relevance to mental health, identify major regulatory themes and gaps, and evaluate implications for clinicians and patients.

Methods: We conducted a systematic analysis of bills introduced in all 50 US states between January 1, 2022, and May 19, 2025, using standardized searches of the legislative research website LegiScan. Bills were screened and categorized using a custom 4-tier taxonomy based on their applicability to MH-AI. Bills passing threshold review were coded by topic using a 25-tag system developed through iterative consensus. Legally trained reviewers adjudicated final classifications to ensure consistency and rigor.

Results: Among 793 state bills reviewed, 143 were identified as potentially impactful to MH-AI: 28 explicitly referenced mental health uses, while 115 had substantial or indirect implications. Of these 143 bills, 20 were enacted across 11 states. Legislative efforts varied widely, but 4 thematic domains consistently emerged: (1) professional oversight, including deployer liability and licensure obligations; (2) harm prevention, encompassing safety protocols, malpractice exposure, and risk stratification frameworks; (3) patient autonomy, particularly in areas of disclosure, consent, and transparency; and (4) data governance, with notable gaps in privacy protections for sensitive mental health data.

Conclusions: State legislatures are rapidly shaping the regulatory landscape for MH-AI, but most laws treat mental health as incidental to broader artificial intelligence or health care regulation. Explicit mental health provisions remain rare, and clinician and patient perspectives are seldom incorporated into policymaking. The result is a fragmented and uneven environment that risks leaving patients unprotected and clinicians overburdened. Mental health professionals must proactively engage with legislators, professional organizations, and patient advocates to ensure that emerging frameworks address oversight, harm, autonomy, and privacy in ways that are clinically realistic, ethically sound, and supportive of flexible but responsible innovation.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,635 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,543 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,051 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,844 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations