This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Adversarial Machine Learning in Healthcare: Risks to AI-Driven Diagnostics and Treatment Plans
Citations: 0
Authors: 2
Year: 2025
Abstract
The rapid integration of artificial intelligence (AI) in healthcare has enhanced diagnostics, predictive analytics, and clinical decision-making. However, AI-driven models, particularly deep learning architectures, remain highly vulnerable to adversarial machine learning (AML) attacks, which can result in misdiagnoses, unsafe treatment recommendations, and compromised patient safety. This study systematically evaluates adversarial risks in medical AI, quantifies their impact on model performance, and assesses the efficacy of defense mechanisms. We analyzed CNNs (medical imaging), RNNs (ECG analysis), and Transformer models (clinical NLP) under FGSM, PGD, and JSMA attacks. Results show that CNN accuracy fell from 92% to 40% under JSMA, ECG-based AI performance dropped by 42% under PGD, and Transformer-based NLP models experienced a 30% decline under FGSM. Defense mechanisms such as randomized smoothing and adversarial training improved accuracy by 15% and 14%, respectively, though at high computational cost (1.8× and 1.5× training overhead). Across five independent trials, all degradations were statistically significant (p < 0.01), and ANOVA with Tukey's HSD confirmed that randomized smoothing and adversarial training significantly outperformed gradient masking (p < 0.01). These findings demonstrate that medical AI systems are highly susceptible to adversarial manipulation and underscore the necessity of robust, efficient, and regulatory-compliant defenses. Strengthening adversarial resilience is critical to ensuring safe, reliable, and ethically responsible deployment of AI in healthcare.
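The abstract evaluates, among others, the FGSM attack, which perturbs an input by a small step in the direction of the sign of the loss gradient with respect to that input. A minimal sketch of this idea on a toy logistic-regression model is shown below; all weights, values, and the `fgsm_perturb` helper are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM (illustrative): move x by eps in the direction of the sign
    of the gradient of the binary cross-entropy loss w.r.t. the input."""
    p = sigmoid(w @ x + b)            # model prediction in (0, 1)
    grad_x = (p - y) * w              # d(BCE)/dx for a logistic model
    return x + eps * np.sign(grad_x)  # adversarial example

# Toy linear classifier and input (hypothetical values).
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.2, 0.4, -0.1])
y = 1.0  # true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
```

Because only the sign of the gradient is used, every input feature is shifted by exactly `eps`, which is what makes FGSM a single-step, bounded-perturbation attack (PGD iterates this step with projection).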
Related Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,694 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,984 citations
CBAM: Convolutional Block Attention Module
2018 · 21,802 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,499 citations
Xception: Deep Learning with Depthwise Separable Convolutions
2017 · 18,702 citations