Existing distillation-based and reconstruction-based methods share a critical limitation: autoencoder-based frameworks trained exclusively on normal samples reconstruct abnormal features unexpectedly well, leading to degraded detection performance. We identify this phenomenon as 'anomaly leakage' (AL): a decoder optimized by reconstruction loss tends to directly copy the encoded input, regardless of whether that input is normal or abnormal. To address this issue, we propose a novel framework that explicitly decouples encoded features into normal and abnormal components through an invertible mapping in a prior latent space. We then remove the abnormal components and use the normal remainders for feature reconstruction. Compared with previous methods, the invertible structure eliminates anomalous information point-to-point without damaging the information of neighboring patches, thereby improving reconstruction. Effective synthetic abnormal features are essential for training this decoupling process, so we apply adversarial training to find suitable perturbations that simulate feature-level anomalies. Extensive experiments on benchmark datasets, including MVTec AD, VisA, and Real-IAD, demonstrate that our method achieves competitive performance compared with state-of-the-art approaches. Code is available at: DecAD.
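To make the decoupling idea concrete, the following is a minimal toy sketch, not the paper's architecture: it uses a RealNVP-style affine coupling layer as the invertible mapping, assumes (purely for illustration) that abnormal energy is pushed into the second half of the latent, zeroes that half, and inverts. The function names (`forward`, `inverse`, `purify`, `adversarial_perturb`), the FGSM-style sign-gradient step, and all dimensions are hypothetical choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8  # toy per-patch feature dimension (hypothetical)
W_s = rng.normal(scale=0.1, size=(D // 2, D // 2))  # scale-net weights
W_t = rng.normal(scale=0.1, size=(D // 2, D // 2))  # shift-net weights

def forward(x):
    """Affine coupling: z1 = x1, z2 = x2 * exp(s(x1)) + t(x1)."""
    x1, x2 = x[: D // 2], x[D // 2 :]
    s, t = W_s @ x1, W_t @ x1
    return np.concatenate([x1, x2 * np.exp(s) + t])

def inverse(z):
    """Exact inverse of `forward` (invertible by construction)."""
    z1, z2 = z[: D // 2], z[D // 2 :]
    s, t = W_s @ z1, W_t @ z1
    return np.concatenate([z1, (z2 - t) * np.exp(-s)])

def purify(x):
    """Map to the latent, drop the (assumed) abnormal component, invert.

    Because the map is bijective, this acts point-to-point on one patch
    feature and cannot corrupt neighboring patches.
    """
    z = forward(x)
    z[D // 2 :] = 0.0  # remove the abnormal component
    return inverse(z)

def adversarial_perturb(x, loss_grad, eps=0.1):
    """FGSM-style step (an assumed instantiation of 'adversarial training
    to find suitable perturbations'): ascend the loss gradient sign to
    synthesize a feature-level anomaly for training the decoupling."""
    return x + eps * np.sign(loss_grad)

x = rng.normal(size=D)
assert np.allclose(inverse(forward(x)), x)  # invertibility holds
x_normal = purify(x)
print(x_normal.shape)  # → (8,)
```

Note that invertibility is what distinguishes this from a plain bottleneck: removing the latent half is an exact, localized operation, and everything kept in the normal half survives reconstruction losslessly.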