This is an overview page with metadata for this scientific work. The full article is available from the publisher.
STRATEGY FOR PROTECTING PERSONAL DATA IN MACHINE LEARNING SYSTEMS
Citations: 0
Authors: 1
Year: 2025
Abstract
Massive amounts of personal data drive modern machine-learning pipelines, but that same data can also pose privacy risks. This study gathers and reorganizes scattered empirical evidence on privacy-preserving methods (differential privacy, federated optimization, secure aggregation, private transfer learning, and fully homomorphic encryption) into a practical strategy that practitioners can follow confidently. Instead of collecting new datasets, we review twelve peer-reviewed experiments from 2021 to 2025, re-analyze their metrics, and compare the results with regulatory thresholds from the GDPR and the draft EU AI Act. The meta-analysis shows that keeping the privacy budget at two or less holds macro-F1 losses under three percentage points across vision, speech, and clinical tasks; however, energy costs increase by a median factor of 2.1. Interestingly, speech-command recognition under DP-SGD became more stable, likely because the added noise reduces overfitting. Based on these findings, we introduce a tiered decision matrix: high-sensitivity data require DP-SGD with adaptive clipping; geographically fragmented datasets benefit from federated learning coupled with threshold aggregation; untrusted-cloud deployments need lightweight homomorphic inference; and if none of these apply, private transfer learning on anonymized embeddings remains a solid fallback. To test the matrix, we use three synthetic but realistic scenarios (critical-care triage, smart-home automation, and retail loyalty prediction) that show how the trade-offs change as latency, bandwidth, and legal concerns vary. This framework, called "privacy elasticity," measures how much model quality can be adjusted before individual rights are at risk and provides practical guidelines for engineers and compliance officers. By connecting empirical data with ethical principles, this article offers more than just a survey: it presents a coherent theory and an easy-to-use tool.
We argue that privacy protection has moved beyond just an add-on feature…
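The tiered decision matrix summarized in the abstract can be read as a simple ordered rule set. The sketch below is an illustrative Python rendering of those four tiers; the predicate names, their ordering, and the `Deployment` type are assumptions for illustration, not the authors' published rules.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    """Hypothetical deployment profile, using the criteria named in the abstract."""
    high_sensitivity: bool            # e.g. clinical or biometric data
    geographically_fragmented: bool   # data split across jurisdictions/sites
    untrusted_cloud: bool             # inference runs on an untrusted host

def recommend(d: Deployment) -> str:
    """Sketch of the tiered decision matrix: first matching tier wins."""
    if d.high_sensitivity:
        return "DP-SGD with adaptive clipping"
    if d.geographically_fragmented:
        return "federated learning with threshold aggregation"
    if d.untrusted_cloud:
        return "lightweight homomorphic inference"
    # Fallback tier when none of the above constraints apply.
    return "private transfer learning on anonymized embeddings"
```

For example, a smart-home scenario with data spread across households but no clinical-grade sensitivity would fall into the federated tier under this ordering.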
Similar works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,423 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,926 citations
Deep Learning with Differential Privacy
2016 · 5,659 citations
Federated Machine Learning
2019 · 5,635 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,602 citations