OpenAlex · Updated hourly · Last updated: 13 May 2026, 15:12

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A Zero-Trust, AI-RMF–Governed Architecture for LLM-Enabled Telemedicine-as-a-Service: Mitigating Poisoning, Leakage and Unsafe-Output Threats

2025 · 0 citations

Citations: 0 · Authors: 9 · Year: 2025

Abstract

Large-Language-Model (LLM) functionality is rapidly becoming a cornerstone of Telemedicine-as-a-Service (TaaS) platforms. Recent studies in Q1 journals demonstrate that even minuscule training-set or parameter perturbations can introduce persistent backdoors, while inference pipelines leak protected health information (PHI) if left unguarded. Building on the NIST AI Risk Management Framework (AI RMF), this paper proposes and implements a zero-trust, multi-cloud security architecture that couples (i) knowledge-graph–driven data-integrity validation, (ii) containerised fine-tuning isolation, (iii) AI-RMF–centred governance with continuous risk registers, (iv) a privacy-preserving response-sanitisation gateway enhanced with one-time-password (OTP) and know-your-customer (KYC) identity binding, and (v) remote-attestation-backed zero-knowledge-proof (ZKP) integrity challenges for model weights at runtime. An extensive multi-cloud evaluation shows that the framework detects 94.6 % of tainted samples before ingestion and blocks 91.3 % of unsafe outputs, with a median latency overhead of 66 ms, well below clinical tele-consultation thresholds.
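The response-sanitisation gateway is described in the abstract only at a high level. As a minimal illustrative sketch of the idea, the Python snippet below redacts a few common PHI patterns from an LLM reply and flags unsafe wording before the reply leaves the service. The pattern set, the marker phrases and the function name sanitise_response are assumptions made for illustration; they are not taken from the paper, whose gateway additionally binds each exchange to an OTP/KYC-verified identity.

```python
import re

# Hypothetical PHI patterns (illustrative only; a production gateway would use
# a clinically validated de-identification model rather than ad-hoc regexes).
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

# Assumed unsafe-output markers; a real deployment would use a safety classifier.
UNSAFE_MARKERS = (
    "double your dose",
    "stop taking your medication",
    "no need to see a doctor",
)


def sanitise_response(text: str) -> tuple[str, bool]:
    """Redact PHI patterns and report whether the unsafe-output filter blocks the reply."""
    redacted = text
    for label, pattern in PHI_PATTERNS.items():
        # Replace each PHI match with a labelled placeholder.
        redacted = pattern.sub(f"[REDACTED-{label.upper()}]", redacted)
    blocked = any(marker in redacted.lower() for marker in UNSAFE_MARKERS)
    return redacted, blocked


if __name__ == "__main__":
    reply = ("Reach the clinic at jane.doe@clinic.example or 555-123-4567; "
             "double your dose if symptoms persist.")
    clean, blocked = sanitise_response(reply)
    print("blocked:", blocked)
    print(clean)
```

In a real gateway of the kind the paper evaluates, the regex list and keyword markers would be replaced by trained de-identification and safety models, and the blocking decision would be logged into the continuous risk register required by the AI RMF governance layer.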

Similar works