OpenAlex · Updated hourly · Last updated: 14.05.2026, 00:22

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Towards Trustworthy Agentic AI in Healthcare: A Zero Trust-Based Security Framework

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at the publisher

0 citations · 2 authors · Year: 2026

Abstract

The rapid deployment of agentic artificial intelligence (AI) systems in healthcare settings is transforming clinical, operational, and infrastructure-level decision-making. In contrast to traditional AI models, agentic AI systems feature autonomous reasoning, dynamic task execution, and multi-system interaction, allowing them to act as active agents in healthcare processes. Although these capabilities offer substantial efficiency and scalability gains, they also enlarge the attack surface and introduce new classes of security and trust threats. In particular, AI agents can access sensitive patient data without human intervention, invoke external services, and influence clinical outcomes, raising critical questions of identity assurance, decision integrity, and software verification. Existing security solutions, including traditional perimeter-based security and conventional Zero Trust Architecture (ZTA), are ill-equipped to handle the behavioral and operational complexities of agentic AI systems. Current approaches focus largely on user-centric or device-centric trust validation and give little consideration to autonomous software entities that can evolve, adapt, and act without close human supervision. This is especially problematic in healthcare facilities, where data sensitivity, regulatory requirements, and patient safety impose strict demands on system reliability and accountability. To address these issues, this paper presents the TAZAI framework (Trustworthy Agentic Zero Trust Architecture for AI in Healthcare), a new security framework that aims to enforce consistent trust validation across all layers of agentic AI activity. The proposed framework extends Zero Trust concepts to encompass agent identity verification, context-sensitive policy enforcement, real-time behavior monitoring, and secure data management.
By combining these mechanisms into a single architecture, TAZAI enables fine-grained control over the actions of AI agents without compromising interoperability between cloud and on-premise healthcare infrastructures. The framework's effectiveness is demonstrated through a systematic threat model and a healthcare deployment scenario, showing how TAZAI mitigates threats such as unauthorized data access, prompt injection attacks, and autonomous decision manipulation. The findings indicate that incorporating Zero Trust principles into agentic AI processes can significantly improve system resilience, reduce exposure to new attack vectors, and establish a verifiable trust boundary around autonomous activity. This publication contributes a methodology for securing future AI-based healthcare systems and lays the groundwork for further research in trustworthy autonomous computing.
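The abstract names three per-request trust mechanisms: agent identity verification, context-sensitive policy enforcement, and real-time behavior monitoring. The sketch below illustrates how such a zero-trust gateway for agent requests could look in principle. It is a minimal hypothetical illustration, not the paper's actual TAZAI implementation; every class, field, and policy name is an assumption introduced here.

```python
from dataclasses import dataclass

# Hypothetical sketch of the three checks the abstract describes.
# None of these names come from the TAZAI paper itself.

@dataclass
class AgentRequest:
    agent_id: str
    credential: str  # e.g. a short-lived signed token
    action: str      # e.g. "read_patient_record"
    context: dict    # e.g. {"purpose": "triage"}

class ZeroTrustGateway:
    def __init__(self, known_agents, policies):
        self.known_agents = known_agents  # agent_id -> expected credential
        self.policies = policies          # action -> predicate(context)
        self.audit_log = []               # behavior-monitoring trail

    def authorize(self, req: AgentRequest) -> bool:
        # 1. Identity: never trust, always verify the agent itself.
        identity_ok = self.known_agents.get(req.agent_id) == req.credential
        # 2. Context-aware policy: the action must be permitted in this context.
        policy = self.policies.get(req.action)
        policy_ok = identity_ok and policy is not None and policy(req.context)
        # 3. Monitoring: every decision, allow or deny, is logged for audit.
        self.audit_log.append((req.agent_id, req.action, policy_ok))
        return policy_ok

gateway = ZeroTrustGateway(
    known_agents={"triage-agent": "tok-123"},
    policies={"read_patient_record": lambda ctx: ctx.get("purpose") == "triage"},
)

allowed = gateway.authorize(AgentRequest(
    "triage-agent", "tok-123", "read_patient_record", {"purpose": "triage"}))
denied = gateway.authorize(AgentRequest(
    "triage-agent", "bad-token", "read_patient_record", {"purpose": "triage"}))
print(allowed, denied)  # True False
```

The key zero-trust property is that every request is re-evaluated against identity and context, and every decision is recorded, so no agent accumulates implicit standing trust.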

Topics

Access Control and Trust · Artificial Intelligence in Healthcare and Education · Healthcare Technology and Patient Monitoring