This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Security and Privacy in Machine Learning for Health Systems: Strategies and Challenges
9 Citations · 3 Authors · 2023
Abstract
OBJECTIVES: Machine learning (ML) is a powerful asset for supporting physicians in decision-making, providing timely answers. However, ML for health systems can suffer from security attacks and privacy violations. This paper investigates studies of security and privacy in ML for health. METHODS: We examine attacks, defenses, and privacy-preserving strategies, discussing their challenges. We followed this research protocol: an initial manual search, definition of the search string, removal of duplicate papers, filtering of papers by title and abstract and then by full text, and analysis of their contributions, including strategies and challenges. In total, we collected and discussed 40 papers on attacks, defenses, and privacy. RESULTS: Our findings identify the most widely employed strategies in each domain. We found trends in attacks, including universal adversarial perturbations (UAPs), generative adversarial network (GAN)-based attacks, and DeepFakes for generating malicious examples. Trends in defense are adversarial training, GAN-based strategies, and out-of-distribution (OOD) detection to identify and mitigate adversarial examples (AEs). We found privacy-preserving strategies such as federated learning (FL), differential privacy, and combinations of strategies that enhance FL. Open challenges comprise the development of attacks that bypass fine-tuning, defenses that calibrate models to improve their robustness, and privacy methods that further enhance FL. CONCLUSIONS: It is critical to explore security and privacy in ML for health because of its growing risks and open vulnerabilities. Our study presents strategies and challenges to guide research into security and privacy issues in ML applied to health systems.
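As a minimal illustration of one of the privacy-preserving strategies named in the abstract, the Laplace mechanism of differential privacy releases a query result (e.g., a count over patient records) after adding noise calibrated to the query's sensitivity and a privacy budget epsilon. This is a generic sketch, not code from the surveyed paper; the function name and example values are hypothetical.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise of scale sensitivity/epsilon.

    Smaller epsilon -> larger noise -> stronger privacy guarantee.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Hypothetical example: privatize a count query over health records.
# A count query has sensitivity 1 (one patient changes the count by at most 1).
true_count = 128.0
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

In an FL setting, the same idea is applied to model updates rather than query answers, with sensitivity bounded by gradient clipping.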
Related Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,686 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,972 citations
CBAM: Convolutional Block Attention Module
2018 · 21,784 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,495 citations
Xception: Deep Learning with Depthwise Separable Convolutions
2017 · 18,693 citations