This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Leveraging Stakeholder Engagement to Develop and Evaluate a Responsible Artificial Intelligence Framework at a Large, Multi-State Health System
Citations: 0
Authors: 13
Year: 2026
Abstract
Highlights:
• Developed FAIR-AI, a stakeholder-informed AI governance framework
• Identified design priorities: risk-benefit balance, human oversight, low-risk reviews
• Post-approval interviews revealed seven elements for successful implementation
• Emphasized iterative refinement, education, and patient/community dissemination
• Offers practical guidance for responsible AI adoption in large health systems

Artificial intelligence (AI) offers health systems opportunities to enhance care delivery, improve efficiency, and expand patient access. However, rapid innovation introduces new risks that require careful oversight. This study examines how diverse stakeholders shaped the design and early evaluation of the Framework for the Appropriate Implementation and Review of AI (FAIR-AI), a system-wide AI governance framework implemented within a large, multi-state health system. We conducted two rounds of semi-structured interviews – before FAIR-AI development and shortly after FAIR-AI was approved – with executive leaders (N=5), risk/compliance/legal leaders (N=11), and data developers (N=8) to identify initial design needs and evaluate the approved framework. Pre-development interviews also included patients (N=5) and clinicians (N=5) to capture AI end-user expectations. Data were analyzed using thematic analysis with both inductive and deductive coding. Pre-development interviews highlighted three central priorities: balancing risk tolerance with potential benefits, ensuring direct human oversight, and streamlining review for low-risk solutions. Patients and clinicians emphasized the need for clinician control over care decisions, with AI serving as supplemental support.

Post-approval interviews identified seven elements critical to success: (1) transparent and consistent reviews; (2) timely evaluations; (3) ongoing solution monitoring; (4) iterative framework refinement; (5) alignment with institutional priorities and regulatory standards; (6) multi-modal teammate education; and (7) diverse patient dissemination efforts. Our findings highlight the importance of AI governance frameworks that integrate both pre-deployment risk assessment and post-implementation solution monitoring, while remaining adaptable through feedback loops and in response to changing regulatory and technological contexts. This stakeholder-informed approach provides practical guidance for responsible AI at enterprise scale.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,652 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,567 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,083 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,856 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations