Xu_Face_Retouching_with_Diffusion_Data_Generation_and_Spectral_Restorement@ICCV2025@CVF

Total: 1

#1 Face Retouching with Diffusion Data Generation and Spectral Restorement

Authors: Zhidan Xu, Xiaoqin Zhang, Shijian Lu

Face retouching has achieved impressive performance, driven largely by its wide range of applications in real-world tasks. However, most existing works encounter a dilemma between global consistency and local detail preservation, partially due to the lack of large-scale, high-quality training data. We address the face retouching challenge from two perspectives. First, we create a large-scale face retouching benchmark to mitigate the data scarcity issue. The benchmark comprises 25,000 pairs of high-quality facial images (before and after face retouching) that cover a variety of facial attributes and blemish types such as acne and moles. Second, we design a novel framework that introduces frequency selection and restoration (FSR) and multi-resolution fusion (MRF), which leverage frequency-aware dynamic aggregation and spatial-frequency filtering to achieve global consistency and local detail preservation concurrently. Inspired by the principle of JPEG compression, FSR introduces frequency-domain quantization with spatial projections to learn enhanced feature representations. MRF fuses multi-resolution features via Laplacian pyramid fusion, removing large-area blemishes while preserving local fine details. Extensive experiments over multiple benchmarks show that the proposed framework outperforms the state of the art both quantitatively and qualitatively. The created benchmark also provides valuable data for training and evaluating existing and future face retouching networks.
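The abstract only sketches FSR and MRF at a high level, so the snippet below is a minimal illustrative sketch, not the authors' implementation: it shows the two generic ideas the abstract names, namely JPEG-style block DCT quantization of a feature map (cf. FSR) and Laplacian-pyramid fusion of multi-resolution outputs (cf. MRF). All function names, parameters, and the choice of NumPy/SciPy/OpenCV are assumptions made for illustration.

```python
# Hedged sketch (not the paper's code): JPEG-style frequency quantization and
# Laplacian-pyramid fusion, the two generic operations named in the abstract.
import numpy as np
import cv2
from scipy.fft import dctn, idctn


def blockwise_dct_quantize(feat, block=8, step=0.1):
    """JPEG-style processing of a 2-D feature map: block DCT, uniform
    coefficient quantization, inverse DCT. A crude stand-in for the
    frequency-domain quantization the abstract attributes to FSR."""
    feat = np.asarray(feat, dtype=np.float32)
    out = feat.copy()
    h, w = feat.shape
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            coeffs = dctn(feat[y:y + block, x:x + block], norm="ortho")
            coeffs = np.round(coeffs / step) * step  # uniform quantization
            out[y:y + block, x:x + block] = idctn(coeffs, norm="ortho")
    return out


def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid with OpenCV pyrDown/pyrUp (float32)."""
    gauss = [np.asarray(img, dtype=np.float32)]
    for _ in range(levels - 1):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels - 1):
        up = cv2.pyrUp(gauss[i + 1],
                       dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
        lap.append(gauss[i] - up)
    lap.append(gauss[-1])  # coarsest Gaussian level closes the pyramid
    return lap


def fuse_multiresolution(img_a, img_b, weight=0.5, levels=4):
    """Blend two images level by level in the Laplacian domain and collapse
    the fused pyramid -- a generic stand-in for the MRF fusion step."""
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    fused = [weight * a + (1.0 - weight) * b for a, b in zip(pa, pb)]
    out = fused[-1]
    for lvl in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(lvl.shape[1], lvl.shape[0])) + lvl
    return out
```

In the actual framework these operations would act on learned feature maps inside a network rather than on raw images; the sketch only conveys why frequency-domain quantization suppresses high-frequency blemish detail while pyramid fusion reconciles coarse, globally consistent content with fine local structure.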

Subject: ICCV.2025 - Poster