This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
S<sup>2</sup>NeRF: Privacy-preserving Training Framework for NeRF
Citations: 0
Authors: 6
Year: 2024
Abstract
Neural Radiance Fields (NeRF) have revolutionized 3D computer vision and graphics, facilitating novel view synthesis and influencing sectors like extended reality and e-commerce. However, NeRF's dependence on extensive data collection, including sensitive scene image data, introduces significant privacy risks when users upload this data for model training. To address this concern, we first propose SplitNeRF, a training framework that incorporates split learning (SL) techniques to enable privacy-preserving collaborative model training between clients and servers without sharing local data. Despite its benefits, we identify vulnerabilities in SplitNeRF by developing two attack methods, Surrogate Model Attack and Scene-aided Surrogate Model Attack, which exploit the shared gradient data and a few leaked scene images to reconstruct private scene information. To counter these threats, we introduce S^2NeRF, a secure variant of SplitNeRF that integrates effective defense mechanisms. By introducing decaying noise related to the gradient norm into the shared gradient information, S^2NeRF preserves privacy while maintaining a high utility of the NeRF model. Our extensive evaluations across multiple datasets demonstrate the effectiveness of S^2NeRF against privacy breaches, confirming its viability for secure NeRF training in sensitive applications.
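The defense described in the abstract, injecting noise into the shared gradients whose scale is tied to the gradient norm and decays over training, can be sketched roughly as follows. This is an illustrative sketch only: the abstract does not specify the noise distribution or the decay schedule, so the Gaussian noise, the exponential decay, and all parameter names (`sigma0`, `decay`) here are assumptions, not the paper's actual method.

```python
import numpy as np

def perturb_gradient(grad, step, sigma0=0.1, decay=1e-3, rng=None):
    """Add noise to a gradient before sharing it with the server.

    Hypothetical sketch of S^2NeRF's defense: the noise scale is
    proportional to the gradient norm (so perturbation tracks signal
    magnitude) and decays exponentially with the training step (so
    late-stage utility is preserved). Distribution and schedule are
    assumptions; the abstract only says "decaying noise related to
    the gradient norm".
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sigma0 * np.linalg.norm(grad) * np.exp(-decay * step)
    return grad + rng.normal(0.0, scale, size=grad.shape)

# Noise magnitude shrinks as training progresses (same RNG seed
# isolates the effect of the decay factor).
g = np.ones(4)
noisy_early = perturb_gradient(g, step=0, rng=np.random.default_rng(0))
noisy_late = perturb_gradient(g, step=10_000, rng=np.random.default_rng(0))
```

The design intuition is that early gradients leak the most scene information to a surrogate-model attacker, so they receive the strongest perturbation, while the decay lets the NeRF model converge to high utility.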
Related Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,427 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,933 citations
Deep Learning with Differential Privacy
2016 · 5,670 citations
Federated Machine Learning
2019 · 5,649 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,602 citations