OpenAlex · Updated hourly · Last updated: 15.04.2026, 03:10

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

STRATEGY FOR PROTECTING PERSONAL DATA IN MACHINE LEARNING SYSTEMS

2025 · 0 citations · Cybersecurity Education Science Technique · Open Access
Open full text at publisher

0 citations · 1 author · 2025

Abstract

Massive amounts of personal data drive modern machine-learning pipelines, but that same data can also pose privacy risks. This study gathers and reorganizes scattered empirical evidence on privacy-preserving methods (differential privacy, federated optimization, secure aggregation, private transfer learning, and fully homomorphic encryption) into a practical strategy that practitioners can follow confidently. Instead of collecting new datasets, we review twelve peer-reviewed experiments from 2021 to 2025, re-analyze their metrics, and compare the results with regulatory thresholds from GDPR and the draft EU AI Act. The meta-analysis shows that keeping the privacy budget at two or less maintains macro-F1 losses under three percentage points across vision, speech, and clinical tasks. However, energy costs increase by a median factor of 2.1. Interestingly, speech-command recognition under DP-SGD became more stable, likely by reducing overfitting. Based on these findings, we introduce a tiered decision matrix: high-sensitivity data require DP-SGD with adaptive clipping; geographically fragmented datasets benefit from federated learning coupled with threshold aggregation; untrusted-cloud deployments need lightweight homomorphic inference; and if none of these apply, private transfer learning on anonymized embeddings remains a solid fallback. To test the matrix, we use three synthetic but realistic scenarios (critical-care triage, smart-home automation, and retail loyalty prediction) that show how trade-offs change when latency, bandwidth, and legal concerns vary. This framework, called "privacy elasticity," measures how much model quality can be adjusted before individual rights are at risk and provides practical guidelines for engineers and compliance officers. By connecting empirical data with ethical principles, this article offers more than just a survey. It presents a coherent theory and an easy-to-use tool.
We argue that privacy protection has moved beyond just an add-on feature …
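The tiered decision matrix described in the abstract reduces to four ordered rules. As a rough illustration only (the `Scenario` fields and the `recommend` function are our own naming for this sketch, not code from the paper), it might be expressed as:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """Deployment characteristics relevant to the tiered matrix (hypothetical model)."""
    high_sensitivity: bool           # e.g. clinical or biometric data
    geographically_fragmented: bool  # data split across regions or silos
    untrusted_cloud: bool            # inference runs on an untrusted host

def recommend(s: Scenario) -> str:
    """Apply the four tiers in order and return the suggested mechanism."""
    if s.high_sensitivity:
        return "DP-SGD with adaptive clipping"
    if s.geographically_fragmented:
        return "federated learning with threshold aggregation"
    if s.untrusted_cloud:
        return "lightweight homomorphic inference"
    return "private transfer learning on anonymized embeddings"

# Example: critical-care triage involves high-sensitivity clinical data.
print(recommend(Scenario(True, False, False)))
# → DP-SGD with adaptive clipping
```

The ordering matters: sensitivity dominates the other two criteria, matching the abstract's hierarchy, and the anonymized-embedding fallback fires only when no stronger condition applies.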

Similar works

Authors

Institutions

Topics

Privacy-Preserving Technologies in Data · Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI