
#1 Emulating Self-attention with Convolution for Efficient Image Super-Resolution

Authors: Dongheon Lee, Seokju Yun, Youngmin Ro

In this paper, we tackle the high computational overhead of Transformers for efficient image super-resolution (SR). Motivated by observations of self-attention's inter-layer repetition, we introduce a convolutionized self-attention module named Convolutional Attention (ConvAttn), which emulates self-attention's long-range modeling capability and instance-dependent weighting with a single shared large kernel and dynamic kernels. By utilizing the ConvAttn module, we significantly reduce reliance on self-attention and its associated memory-bound operations while maintaining the representational capability of Transformers. Furthermore, we overcome the challenge of integrating flash attention into the lightweight SR regime, effectively mitigating self-attention's inherent memory bottleneck. Rather than proposing an intricate self-attention module, we scale the window size up to 32×32 with flash attention, improving PSNR by 0.31 dB on Urban100 ×2 while reducing latency and memory usage by 16× and 12.2×, respectively. Building on these approaches, our proposed network, termed Emulating Self-attention with Convolution (ESC), notably improves PSNR by 0.27 dB on Urban100 ×4 compared to HiT-SRF while reducing latency and memory usage by 3.7× and 6.2×, respectively. Extensive experiments demonstrate that ESC retains the long-range modeling ability, data scalability, and representational power of Transformers even though most self-attention is replaced by the ConvAttn module.
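The ConvAttn idea lends itself to a compact sketch. The following is a minimal, hypothetical PyTorch illustration of the concept described in the abstract, not the paper's implementation: one large depthwise kernel shared across blocks supplies long-range aggregation, and a small per-instance dynamic depthwise kernel supplies input-dependent weighting. All names (`ConvAttnSketch`, `kernel_gen`) and sizes (13×13 shared kernel, 3×3 dynamic kernel) are assumptions for illustration.

```python
# Sketch of a ConvAttn-style block (assumptions, not the paper's code):
# a shared large depthwise kernel + per-instance dynamic depthwise kernels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAttnSketch(nn.Module):
    def __init__(self, dim: int, shared_weight: nn.Parameter, dyn_kernel: int = 3):
        super().__init__()
        self.dim = dim
        self.dyn_kernel = dyn_kernel
        # Large depthwise kernel shared across all blocks (created once, passed in).
        self.shared_weight = shared_weight  # shape: (dim, 1, K, K)
        # Predicts a small dynamic depthwise kernel per input instance.
        self.kernel_gen = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dim, dim * dyn_kernel * dyn_kernel, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        k_big = self.shared_weight.shape[-1]
        # Long-range modeling: shared large-kernel depthwise convolution.
        out = F.conv2d(x, self.shared_weight, padding=k_big // 2, groups=c)
        # Instance-dependent weighting: dynamic depthwise kernels per sample,
        # applied via the grouped-convolution trick (batch folded into channels).
        dyn = self.kernel_gen(x).view(b * c, 1, self.dyn_kernel, self.dyn_kernel)
        out2 = F.conv2d(x.reshape(1, b * c, h, w), dyn,
                        padding=self.dyn_kernel // 2, groups=b * c)
        return out + out2.view(b, c, h, w)

# Usage: one shared kernel reused by every ConvAttn block.
shared = nn.Parameter(torch.randn(64, 1, 13, 13) * 0.02)
blocks = nn.ModuleList([ConvAttnSketch(64, shared) for _ in range(4)])
x = torch.randn(2, 64, 48, 48)
for blk in blocks:
    x = blk(x)
```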
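For the enlarged window, a sketch of routing 32×32 window self-attention through PyTorch's fused `scaled_dot_product_attention` (which dispatches to a flash-attention kernel on supported hardware) might look as follows. The qkv and output projections are omitted for brevity, and the partitioning helper is a hypothetical illustration rather than the paper's integration.

```python
# Sketch of 32x32 window attention via PyTorch's fused attention kernel
# (assumed helper, not the paper's code; qkv/output projections omitted).
import torch
import torch.nn.functional as F

def window_attention(x: torch.Tensor, num_heads: int = 4, win: int = 32) -> torch.Tensor:
    # x: (B, C, H, W) with H and W divisible by win.
    b, c, h, w = x.shape
    hd = c // num_heads
    # Partition into non-overlapping win x win windows -> (B*nW, heads, win*win, hd).
    x = x.view(b, num_heads, hd, h // win, win, w // win, win)
    x = x.permute(0, 3, 5, 1, 4, 6, 2).reshape(-1, num_heads, win * win, hd)
    # Fused attention; PyTorch picks a flash-style kernel when supported.
    out = F.scaled_dot_product_attention(x, x, x)
    # Reverse the window partition back to (B, C, H, W).
    out = out.view(b, h // win, w // win, num_heads, win, win, hd)
    out = out.permute(0, 3, 6, 1, 4, 2, 5).reshape(b, c, h, w)
    return out

y = window_attention(torch.randn(1, 64, 64, 64))
```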

Subject: ICCV.2025 - Highlight