RuMfpz8bTw@OpenReview

Total: 1

#1 Fast Inference with Kronecker-Sparse Matrices [PDF]

Authors: Antoine Gonon, Léon Zheng, Pascal Carrivain, Tung Le

Kronecker-sparse (KS) matrices—whose supports are Kronecker products of identity and all-ones blocks—underpin the structure of Butterfly and Monarch matrices and offer the promise of more efficient models. However, existing GPU kernels for KS matrix multiplication suffer from high data-movement costs, with up to 50% of the time spent on memory-bound tensor permutations. We propose a fused, output-stationary GPU kernel that eliminates these overheads, reducing global memory traffic threefold. Across 600 KS patterns, our kernel achieves a median speedup of ×1.4 in FP32 and lowers energy consumption by 15%. A simple heuristic based on the KS pattern parameters predicts when our method outperforms existing ones. We release all code at [github.com/PascalCarrivain/ksmm](https://github.com/PascalCarrivain/ksmm), including a PyTorch-compatible *KSLinear* layer, and demonstrate end-to-end FP32 latency reductions of up to 22% for ViT-S/16 and 16% for GPT-2-medium.
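
To make the sparsity structure concrete: in the Butterfly/Monarch literature, a KS pattern is commonly parameterized by a tuple (a, b, c, d), with support given by the Kronecker product I_a ⊗ 1_{b×c} ⊗ I_d. The sketch below is a minimal, illustrative reference in PyTorch—not the ksmm library's *KSLinear* API—that materializes such a support mask and applies it to a dense weight; `MaskedKSLinear` and its parameter names are hypothetical, and the paper's fused kernel avoids materializing any such mask or permuting tensors.

```python
# Illustrative only: a naive reference layer whose weight is masked to a
# Kronecker-sparse support I_a (x) 1_{b x c} (x) I_d. The actual ksmm kernel
# exploits this structure directly instead of building a dense masked weight.
import torch


def ks_support(a: int, b: int, c: int, d: int) -> torch.Tensor:
    """Return the 0/1 support mask of shape (a*b*d, a*c*d) for pattern (a, b, c, d)."""
    eye_a = torch.eye(a)          # identity block
    ones_bc = torch.ones(b, c)    # all-ones block
    eye_d = torch.eye(d)          # identity block
    # Kronecker product of the three blocks gives the sparsity support.
    return torch.kron(torch.kron(eye_a, ones_bc), eye_d)


class MaskedKSLinear(torch.nn.Module):
    """Hypothetical dense reference: a linear map restricted to a KS support."""

    def __init__(self, a: int, b: int, c: int, d: int):
        super().__init__()
        self.register_buffer("mask", ks_support(a, b, c, d))
        # Output dimension a*b*d, input dimension a*c*d.
        self.weight = torch.nn.Parameter(0.02 * torch.randn(a * b * d, a * c * d))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, a*c*d); masking enforces the KS sparsity pattern.
        return x @ (self.weight * self.mask).t()


if __name__ == "__main__":
    layer = MaskedKSLinear(a=2, b=4, c=4, d=3)      # input dim 24, output dim 24
    y = layer(torch.randn(8, 2 * 4 * 3))
    print(y.shape)                                  # torch.Size([8, 24])
```

This reference is only useful for checking outputs against a structured implementation; the speedups reported above come from a fused, output-stationary kernel that never performs the memory-bound permutations a naive approach would require.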

Subject: ICML.2025 - Poster