
#1 ADD: Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution

Authors: Ze-Yu Mi, Yu-Bin Yang

Data augmentation (DA) is a powerful technique for enhancing the generalization capabilities of deep neural networks across diverse tasks. In low-level vision tasks, however, DA remains rudimentary (i.e., vanilla DA) and faces a critical bottleneck due to information loss. In this paper, we introduce a novel Calibrated Attribution Map (CAM) to generate saliency masks, together with two saliency-based DA methods, ADD and ADD+, designed to address this issue. CAM leverages integrated gradients and incorporates two key innovations: a global feature detector and calibrated integrated gradients. Based on CAM and the proposed methods, we highlight two key insights for low-level vision tasks: (1) increasing pixel diversity, as in vanilla DA, can improve performance, and (2) focusing on salient features while minimizing the impact of irrelevant pixels, as in saliency-based DA, enhances model performance more effectively. Additionally, we propose two guiding principles for designing saliency-based DA: coarse-grained partitioning and diverse augmentation strategies. Extensive experiments demonstrate the compatibility of our method and its consistent, significant performance improvements across various SR tasks and networks.
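
To make the abstract's pipeline concrete, the sketch below shows one generic way such attribution-driven augmentation can be wired up in PyTorch: integrated gradients computed over a super-resolution model, coarse patch-level thresholding into a binary saliency mask, and a mask-guided mix of two LR/HR training pairs. This is a minimal illustration under stated assumptions, not the authors' released implementation: the function names (integrated_gradients, coarse_saliency_mask, saliency_guided_mix) and all hyperparameters are hypothetical, and the paper's CAM-specific components (the global feature detector and the calibration of integrated gradients) are not reproduced here.

```python
import torch
import torch.nn.functional as F


def integrated_gradients(model, lr, baseline=None, steps=16):
    """Approximate integrated gradients of the SR output magnitude w.r.t. the LR input."""
    if baseline is None:
        baseline = torch.zeros_like(lr)          # black image as the reference input
    total_grad = torch.zeros_like(lr)
    for alpha in torch.linspace(0.0, 1.0, steps):
        x = (baseline + alpha * (lr - baseline)).detach().requires_grad_(True)
        score = model(x).abs().mean()            # scalar proxy for the SR response
        grad, = torch.autograd.grad(score, x)
        total_grad += grad
    return (lr - baseline) * total_grad / steps  # attribution map, same shape as lr


def coarse_saliency_mask(attribution, patch=8, keep_ratio=0.5):
    """Coarse-grained partitioning: average attributions per patch, keep the top fraction.

    Assumes the LR height and width are divisible by `patch`.
    """
    score = F.avg_pool2d(attribution.abs().mean(dim=1, keepdim=True), patch)  # (N,1,h/p,w/p)
    k = max(1, int(keep_ratio * score.shape[-2] * score.shape[-1]))
    thresh = score.flatten(1).topk(k, dim=1).values[:, -1].view(-1, 1, 1, 1)
    mask = (score >= thresh).float()
    return F.interpolate(mask, scale_factor=patch, mode="nearest")            # back to LR size


def saliency_guided_mix(lr_a, hr_a, lr_b, hr_b, mask_lr, scale=4):
    """Keep the salient regions of pair (lr_a, hr_a); fill the rest from pair (lr_b, hr_b)."""
    mask_hr = F.interpolate(mask_lr, scale_factor=scale, mode="nearest")
    lr_aug = mask_lr * lr_a + (1.0 - mask_lr) * lr_b
    hr_aug = mask_hr * hr_a + (1.0 - mask_hr) * hr_b
    return lr_aug, hr_aug


# Hypothetical usage inside a training loop:
#   attr = integrated_gradients(sr_model, lr1)
#   mask = coarse_saliency_mask(attr, patch=8, keep_ratio=0.5)
#   lr_aug, hr_aug = saliency_guided_mix(lr1, hr1, lr2, hr2, mask, scale=4)
```

The patch-level pooling reflects the coarse-grained partitioning principle from the abstract, and the mask-guided blend is one example of the diverse augmentation strategies it advocates; swapping the blend for other mask-restricted operations follows the same pattern.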

Subject: CVPR.2025 - Poster