As concerns regarding privacy in deep learning continue to grow, individuals are increasingly apprehensive about the potential exploitation of their personal knowledge in trained models. Although several research efforts have addressed this issue, they often fail to consider the real-world demand from users for complete knowledge erasure. Furthermore, our investigation reveals that existing methods risk leaking personal knowledge through embedding features. To address these issues, we introduce the novel concept of $\textbf{K}$nowledge $\textbf{D}$eletion ($\textbf{KD}$), an advanced task that accounts for both concerns, together with an appropriate metric, the $\textbf{K}$nowledge $\textbf{R}$etention score ($\textbf{KR}$), for assessing knowledge retention in the feature space. To achieve this, we propose a novel training-free erasing approach, $\textbf{E}$rasing $\textbf{S}$pace $\textbf{C}$oncept ($\textbf{ESC}$), which restricts the subspace important for the forgetting knowledge by eliminating the relevant activations in the feature. In addition, we suggest $\textbf{ESC}$ with $\textbf{T}$raining ($\textbf{ESC-T}$), which uses a learnable mask to better balance the trade-off between forgetting and preserving knowledge in KD. Our extensive experiments on various datasets and models demonstrate that our proposed methods achieve the fastest and state-of-the-art performance. Notably, our methods are applicable to diverse forgetting scenarios, such as a facial domain setting, demonstrating their generalizability.
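To make the idea of restricting a forgetting-related feature subspace concrete, the following is a minimal sketch, not the paper's exact ESC procedure: it assumes a feature extractor `backbone`, a classifier `head`, and a batch of forget-class inputs `x_forget`, and it uses an SVD-based choice of subspace purely for illustration.

```python
import torch

@torch.no_grad()
def build_projection(backbone, x_forget, rank=5):
    """Estimate a low-rank subspace spanned by forget-class features and
    return a projector that removes that subspace from any feature vector.
    (Hypothetical helper; the rank and SVD choice are illustrative assumptions.)"""
    feats = backbone(x_forget)                     # (N, D) embedding features
    # Principal directions of the forget-class features via SVD.
    _, _, vt = torch.linalg.svd(feats, full_matrices=False)
    v = vt[:rank].T                                # (D, rank) top directions
    eye = torch.eye(feats.shape[1], device=feats.device)
    return eye - v @ v.T                           # projector onto the orthogonal complement

@torch.no_grad()
def predict_after_erasure(backbone, head, x, projector):
    """Classify x with the forget-related feature directions suppressed,
    leaving the remaining feature space (and thus retained knowledge) intact."""
    feats = backbone(x)
    return head(feats @ projector)
```

In this sketch the erasure is training-free in the same spirit as ESC: the projector is computed once from forget-set features and applied at inference, while a learnable mask (as in ESC-T) would instead be optimized to trade off forgetting against preservation.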