
#1 Weight matrices compression based on PDB model in deep neural networks

Authors: Xiaoling Wu, Junpeng Zhu, Zeng Li

Weight matrix compression has been demonstrated to effectively reduce overfitting and improve the generalization performance of deep neural networks. Compression is primarily achieved by filtering out noisy eigenvalues of the weight matrix. In this work, a novel **Population Double Bulk (PDB) model** is proposed to characterize the eigenvalue behavior of the weight matrix; it is more general than the existing Population Unit Bulk (PUB) model. Based on the PDB model and Random Matrix Theory (RMT), we develop a new **PDBLS algorithm** for determining the boundary between noisy and informative eigenvalues. A **PDB Noise-Filtering algorithm** is further introduced to reduce the rank of the weight matrix for compression. Experiments show that our PDB model fits the empirical eigenvalue distribution of the weight matrix better than the PUB model, and our compressed weight matrices have lower rank at the same level of test accuracy. In some cases, our compression method can even improve generalization performance when labels contain noise. The code is available at https://github.com/xlwu571/PDBLS.
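To make the general idea concrete, here is a minimal, illustrative sketch of RMT-based noise filtering for a weight matrix. It is *not* the paper's PDBLS or PDB Noise-Filtering algorithm: it stands in the simpler single-bulk (Marchenko-Pastur) setting, with a crude median-based noise-variance estimate, and simply discards singular directions whose eigenvalues fall inside the estimated noise bulk. The function name and thresholding details are assumptions for illustration only.

```python
import numpy as np

def mp_noise_filter(W, sigma2=None):
    """Illustrative low-rank compression of a weight matrix W (p x n):
    drop singular directions whose eigenvalues of (1/n) W W^T lie below
    the Marchenko-Pastur upper bulk edge. This is a single-bulk stand-in
    for the paper's PDB-model-based boundary search, not the PDBLS method.
    """
    p, n = W.shape
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Eigenvalues of the sample covariance (1/n) W W^T are s**2 / n.
    evals = s**2 / n
    if sigma2 is None:
        # Crude noise-variance estimate: the median eigenvalue is
        # dominated by the noise bulk when the signal rank is small.
        sigma2 = np.median(evals)
    # Marchenko-Pastur upper edge for aspect ratio p/n and variance sigma2.
    lam_plus = sigma2 * (1 + np.sqrt(p / n)) ** 2
    keep = evals > lam_plus            # eigenvalues above the bulk = signal
    r = int(keep.sum())                # retained (compressed) rank
    W_hat = U[:, keep] @ np.diag(s[keep]) @ Vt[keep]
    return W_hat, r
```

On a synthetic matrix built as a rank-3 signal plus i.i.d. Gaussian noise, the filter recovers a low-rank approximation whose rank is close to 3, illustrating the kind of eigenvalue-boundary filtering the paper generalizes to two population bulks.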

Subject: ICML.2025 - Poster