This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
A Zero-Trust, AI-RMF–Governed Architecture for LLM-Enabled Telemedicine-as-a-Service: Mitigating Poisoning, Leakage and Unsafe-Output Threats
Citations: 0 · Authors: 9 · Year: 2025
Abstract
Large-Language-Model (LLM) functionality is rapidly becoming a cornerstone of Telemedicine-as-a-Service (TMaaS) platforms. Recent Q1 studies demonstrate that even minuscule training-set or parameter perturbations can introduce persistent back-doors, while inference pipelines leak protected health information (PHI) if left unguarded. Building on the NIST AI Risk Management Framework (AI RMF), this paper proposes and implements a zero-trust, multi-cloud security architecture that couples (i) knowledge-graph–driven data-integrity validation, (ii) containerised fine-tuning isolation, (iii) AI-RMF–centred governance and continuous risk registers, (iv) a privacy-preserving response-sanitisation gateway enhanced with one-time-password (OTP) and KYC identity binding, and (v) remote-attestation-backed zero-knowledge-proof (ZKP) integrity challenges for model weights at runtime. An extensive multi-cloud evaluation shows that the framework detects 94.6% of tainted samples before ingestion and blocks 91.3% of unsafe outputs, with a median latency overhead of 66 ms, well below clinical tele-consultation thresholds.
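The response-sanitisation gateway described in the abstract implies a PHI-redaction step applied to every LLM output before it leaves the platform. The sketch below is a minimal illustration of that idea only, not the paper's implementation: the `PHI_PATTERNS` table and `sanitise` function are hypothetical, and a production gateway would combine such rules with clinical named-entity recognition and the OTP/KYC identity checks the abstract mentions.

```python
import re

# Hypothetical PHI patterns for illustration; a real gateway would use far
# richer detectors (clinical NER, dictionaries of patient identifiers, etc.).
# SSN is listed before PHONE so the more specific pattern wins.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def sanitise(text: str) -> str:
    """Redact PHI-like spans from an LLM response before it is returned."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

In a zero-trust deployment this filter would sit at the egress gateway, so that even a compromised or backdoored model cannot emit raw identifiers to the client.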
Similar Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,699 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,991 citations
CBAM: Convolutional Block Attention Module
2018 · 21,814 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,500 citations
Xception: Deep Learning with Depthwise Separable Convolutions
2017 · 18,707 citations