This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
ReFEdit: Rehearsal-Free Lifelong Knowledge Editing for Large Language Models
Citations: 0
Authors: 5
Year: 2025
Abstract
Knowledge editing has emerged as a promising strategy for updating obsolete or inaccurate knowledge embedded within large language models (LLMs) without costly fine-tuning. The widely adopted locating-then-editing paradigm first locates parameters responsible for knowledge storage and then modifies them to integrate updated knowledge. However, in lifelong knowledge editing scenarios, catastrophic forgetting poses a significant challenge. Existing methods often rely on rehearsal-based techniques, such as storing a feature covariance matrix of previously preserved knowledge to constrain errors, thus raising efficiency and privacy issues. To address this, we introduce ReFEdit, a Rehearsal-Free Lifelong Knowledge Editing framework that enforces an orthogonality restriction on parameter modifications. By aligning the update direction orthogonally to both the latest and initial parameters, ReFEdit minimizes the interference between sequentially edited knowledge while mitigating the impact on previously preserved knowledge, thereby effectively addressing catastrophic forgetting. Extensive evaluations on multiple representative LLMs, including LLaMA3, GPT-J, and GPT2-XL, demonstrate that ReFEdit significantly outperforms most existing rehearsal-based knowledge editing methods while eliminating the need for the rehearsal phase, marking a substantial advancement toward more reliable and flexible lifelong knowledge editing. Our code is available at: https://github.com/Cedric-Mo/ReFEdit
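The core idea described above, constraining each parameter update to be orthogonal to both the initial and the most recently edited parameters, can be illustrated with a projection onto the orthogonal complement of the reference directions. This is a minimal sketch of that general geometric operation, not ReFEdit's actual editing procedure; the function name, the flattening of parameters into vectors, and the least-squares projection are all assumptions for illustration.

```python
import numpy as np

def project_orthogonal(delta, refs):
    """Remove from `delta` every component lying in the span of `refs`.

    Illustrative sketch only: `delta` stands in for a flattened parameter
    update, and `refs` for reference directions (e.g. the initial and the
    latest parameters) the update should not interfere with.
    """
    R = np.stack(refs, axis=1)                      # (d, k) reference matrix
    coef, *_ = np.linalg.lstsq(R, delta, rcond=None)  # best fit inside span(refs)
    return delta - R @ coef                         # residual is orthogonal to all refs

# Toy usage: after projection, the update has zero overlap with each reference.
w_init = np.array([1.0, 0.0, 0.0])   # hypothetical initial-parameter direction
w_last = np.array([0.0, 1.0, 0.0])   # hypothetical latest-parameter direction
delta = np.array([0.5, 0.5, 0.5])    # hypothetical raw update
proj = project_orthogonal(delta, [w_init, w_last])
```

In this toy case only the component outside the span of the two references survives, so `proj` is `[0, 0, 0.5]`; the surviving component is what the update is allowed to change without disturbing the reference directions.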