Existing continual deepfake detection methods typically treat stability (retaining previously learned forgery knowledge) and plasticity (adapting to novel forgeries) as conflicting properties, emphasizing an inherent trade-off between them, while regarding generalization to unseen forgeries as secondary. In contrast, we reframe the problem: stability and plasticity can coexist and be jointly improved through the model's inherent generalization. Specifically, we propose Generalization-Preserved Learning (GPL), a novel framework consisting of two key components: (1) Hyperbolic Visual Alignment, which introduces learnable watermarks to align incremental data with the base set in hyperbolic space, alleviating inter-task distribution shifts; and (2) Generalized Gradient Projection, which prevents parameter updates that conflict with generalization constraints, ensuring that learning new knowledge does not interfere with previously acquired knowledge. Notably, GPL requires neither backbone retraining nor historical data storage. Experiments on four mainstream datasets (FF++, Celeb-DF v2, DFD, and DFDCP) demonstrate that GPL achieves an accuracy of 92.14%, outperforming replay-based state-of-the-art methods by 2.15%, while reducing forgetting by 2.66%. Moreover, GPL achieves an 18.38% improvement on unseen forgeries using only 1% of the baseline's parameters, demonstrating efficient adaptation to continuously evolving forgery techniques.
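The abstract does not specify how Generalized Gradient Projection is implemented. As a point of reference only, a minimal, generic gradient-projection sketch in NumPy is shown below (not the authors' method): the candidate gradient is projected onto the orthogonal complement of a subspace of "protected" directions, so the update has no component along directions associated with prior knowledge. The orthonormal `basis` and the function name are illustrative assumptions.

```python
import numpy as np

def project_gradient(grad, basis):
    """Remove components of `grad` lying in span(basis).

    `basis` is assumed to have orthonormal columns spanning directions
    important for previously learned tasks (an illustrative assumption,
    not the paper's construction). The returned gradient is orthogonal
    to that subspace, so the update does not disturb it.
    """
    return grad - basis @ (basis.T @ grad)

# Toy example: an 8-dimensional parameter space with a
# 3-dimensional protected subspace.
rng = np.random.default_rng(0)
basis, _ = np.linalg.qr(rng.standard_normal((8, 3)))  # orthonormal columns
g = rng.standard_normal(8)
g_proj = project_gradient(g, basis)

# The projected gradient has (numerically) zero component
# along every protected direction.
print(np.allclose(basis.T @ g_proj, 0.0, atol=1e-10))
```

This orthogonal-projection idea is a standard ingredient of gradient-projection continual learning; the paper's variant additionally ties the protected directions to generalization constraints rather than only to past-task gradients.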