OpenAlex · Updated hourly · Last updated: 12.04.2026, 05:29

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Deeper Leakage from Gradients through Membership Inference Attack

2024 · 0 citations
Open full text at the publisher

Citations: 0
Authors: 2
Year: 2024

Abstract

Several prior studies underscore the vulnerability of individual participants’ local data in collaborative learning settings, such as federated learning, to exposure through a small subset of gradients used in updating the local model. This deep leakage from gradients (DLG) empowers attackers to reconstruct images with high pixel-wise accuracy and texts with a high probability of token-wise matching. Additionally, research has revealed that a black-box machine learning model is susceptible to a membership inference attack (MIA). In this scenario, an attacker can develop a separate machine learning model, termed an MIA model, capable of accurately determining whether a given input belongs to the training dataset of the black-box machine learning model, achieving notably high prediction accuracy. Building on these insights, our work proposes an enhanced DLG attack, termed the DLG-MIA attack, that further explores the depth of gradient leakage through the MIA. Specifically, we improve on the standard DLG attack by leveraging the insight that a higher probability of the reconstructed sample belonging to the training dataset corresponds to better DLG performance. By incorporating additional constraints from the MIA and employing various techniques to extract more assistive information, the DLG-MIA attack achieves superior performance. We present a mathematical illustration demonstrating how our method enhances the DLG attack through the incorporation of the MIA and how it extracts additional assistive information. Additionally, we empirically showcase the advantages of our approach over the standard DLG attack.
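The abstract's core idea — gradient matching regularized by a membership constraint — can be sketched in a toy setting. The sketch below is illustrative, not the paper's method: it assumes a linear regression model, a label known to the attacker, and replaces the learned MIA model with a simple distributional prior (distance to an assumed training-data mean standing in for a membership score). The model weights, training statistics, and secret input are all made-up values.

```python
import random

# Toy sketch of the DLG-MIA idea (illustrative assumptions only):
# reconstruct a private input from its leaked gradient by minimizing
# a gradient-matching loss plus an MIA-style membership penalty.

W = [0.8, -0.5, 0.3]           # shared model weights, known to the attacker
Y_LABEL = 1.0                  # label assumed known for simplicity
TRAIN_MEAN = [1.0, 2.0, -1.0]  # hypothetical training-data statistics

def model_grad(x):
    """dL/dW for L = 0.5 * (W.x - y)^2, i.e. (W.x - y) * x."""
    err = sum(w * xi for w, xi in zip(W, x)) - Y_LABEL
    return [err * xi for xi in x]

def mia_prior(x):
    """Stand-in for an MIA model: inputs near the assumed training
    distribution get a lower penalty (a higher membership score)."""
    return sum((xi - mi) ** 2 for xi, mi in zip(x, TRAIN_MEAN))

def dlg_mia_loss(x, g_star, lam):
    """Gradient-matching loss (standard DLG) plus the MIA-style constraint."""
    match = sum((gi - gsi) ** 2 for gi, gsi in zip(model_grad(x), g_star))
    return match + lam * mia_prior(x)

def reconstruct(g_star, lam=0.01, steps=3000, lr=0.02, eps=1e-5):
    """Optimize a dummy input by finite-difference gradient descent."""
    x = [random.uniform(-1, 1) for _ in W]
    for _ in range(steps):
        grad = []
        for i in range(len(x)):
            xp, xm = list(x), list(x)
            xp[i] += eps
            xm[i] -= eps
            grad.append((dlg_mia_loss(xp, g_star, lam)
                         - dlg_mia_loss(xm, g_star, lam)) / (2 * eps))
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x

random.seed(0)
secret = [1.2, 1.8, -0.7]    # victim's private input
g_star = model_grad(secret)  # gradient leaked in the federated update
rec = reconstruct(g_star)    # reconstruction converges near `secret`
```

With a small `lam`, the membership prior barely biases the solution and the reconstruction lands close to the secret input; the paper's claim is that a well-chosen MIA constraint steers the optimization toward members of the training distribution, which is where the true input lies.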

Topics

Privacy-Preserving Technologies in Data · Adversarial Robustness in Machine Learning · Artificial Intelligence in Healthcare and Education