This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Deeper Leakage from Gradients through Membership Inference Attack
0
Citations
2
Authors
2024
Year
Abstract
Several prior studies underscore the vulnerability of individual participants' local data in collaborative learning settings, such as federated learning, to exposure through the small subset of gradients used to update the local model. This deep leakage from gradients (DLG) enables attackers to reconstruct images with high pixel-wise accuracy and texts with a high probability of token-wise matching. Separately, research has shown that a black-box machine learning model is susceptible to a membership inference attack (MIA), in which an attacker trains a separate machine learning model, termed an MIA model, that determines with notably high accuracy whether a given input belongs to the training dataset of the black-box model. Building on these insights, our work proposes an enhanced DLG attack, termed the DLG-MIA attack, that deepens gradient leakage by exploiting the MIA. Specifically, we improve on the standard DLG attack by leveraging the observation that a higher membership probability for the reconstructed sample corresponds to better DLG reconstruction quality. By incorporating additional constraints derived from the MIA and employing several techniques to extract further assistive information, the DLG-MIA attack achieves superior performance. We present a mathematical illustration of how incorporating the MIA enhances the DLG attack and how the additional assistive information is extracted, and we empirically demonstrate the advantages of our approach over the standard DLG attack.
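The core idea described in the abstract can be sketched on a toy problem: an attacker who observes a shared gradient runs gradient descent on a dummy input to match that gradient (standard DLG), while an extra membership-style prior steers the reconstruction toward likely training data. This is a minimal illustrative sketch, not the paper's method: the linear model, the distance-to-mean surrogate `mu` standing in for a learned MIA model, and the weight `lam` are all hypothetical choices made here for brevity.

```python
import numpy as np

d = 3
w = np.array([0.5, -1.0, 2.0])        # known model weights (toy linear model)
x_true = np.array([1.0, 2.0, -0.5])   # private training sample (label y = 0)

def model_grad(x):
    """Gradient of the loss 0.5*(w @ x)**2 w.r.t. w, as shared in FL."""
    return (w @ x) * x

g_star = model_grad(x_true)           # gradient observed by the attacker

# Hypothetical MIA surrogate: membership confidence is assumed to decay
# with distance from a training-distribution mean mu (a stand-in for the
# learned MIA model described in the abstract).
mu = np.array([0.8, 1.5, 0.0])
lam = 0.01                            # weight of the MIA-style prior

def objective_grad(x):
    """Gradient of ||model_grad(x) - g_star||^2 + lam * ||x - mu||^2."""
    r = w @ x
    resid = r * x - g_star
    grad_match = 2.0 * (x @ resid) * w + 2.0 * r * resid
    return grad_match + 2.0 * lam * (x - mu)

# DLG-style loop: gradient descent on the dummy input. The added prior
# also breaks the x vs. -x ambiguity of pure gradient matching (both
# produce identical gradients here when the label is 0).
x = mu.copy()
for _ in range(5000):
    x -= 0.005 * objective_grad(x)

print(x)  # should land near x_true
```

Even on this toy example, the prior term matters: without it, `x` and `-x` are equally good gradient matches, whereas the membership-style regularizer favors the reconstruction that looks like training data.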
Similar Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,418 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,923 citations
Deep Learning with Differential Privacy
2016 · 5,655 citations
Federated Machine Learning
2019 · 5,627 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,601 citations