wang23da@interspeech_2023@ISCA

Total: 1

#1 Task-Agnostic Structured Pruning of Speech Representation Models

Authors: Haoyu Wang; Siyuan Wang; Wei-Qiang Zhang; Hongbin Suo; Yulong Wan

Self-supervised pre-trained models such as wav2vec 2.0, HuBERT, and WavLM have been shown to significantly improve many speech tasks. However, their large memory footprint and high computational cost hinder their industrial deployment. Structured pruning is a hardware-friendly model compression technique, but it usually incurs a larger loss of accuracy. In this paper, we propose a fine-grained attention head pruning method to compensate for the performance degradation. In addition, we introduce the straight-through estimator into L0 regularization to further accelerate the pruned model. Experiments on the SUPERB benchmark show that our model achieves performance comparable to the dense model on multiple tasks and outperforms the wav2vec 2.0 base model on average, with 72% fewer parameters and twice the inference speed.
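For readers unfamiliar with the mechanism the abstract alludes to, below is a minimal, hypothetical PyTorch sketch of an L0-style attention-head gate trained with a straight-through estimator. The class name STEHeadGate, the 0.5 gating threshold, and the logit initialization are illustrative assumptions, not the paper's implementation; it only shows the general idea of hard gates in the forward pass with relaxed gradients and an L0-like penalty.

```python
import torch
import torch.nn as nn

class STEHeadGate(nn.Module):
    """Per-head binary gate with a straight-through estimator (STE).

    Hypothetical sketch: each attention head gets a learnable logit; the
    forward pass applies a hard 0/1 gate, while gradients flow through a
    sigmoid relaxation. The sum of relaxed gates serves as an approximate
    L0 penalty added to the task loss.
    """

    def __init__(self, num_heads: int, init: float = 2.0):
        super().__init__()
        self.logits = nn.Parameter(torch.full((num_heads,), init))

    def forward(self, head_outputs: torch.Tensor) -> torch.Tensor:
        # head_outputs: (batch, num_heads, seq_len, head_dim)
        soft = torch.sigmoid(self.logits)       # relaxed gate in (0, 1)
        hard = (soft > 0.5).float()             # hard 0/1 gate used in the forward pass
        gate = hard + soft - soft.detach()      # STE: hard forward, soft backward
        return head_outputs * gate.view(1, -1, 1, 1)

    def l0_penalty(self) -> torch.Tensor:
        # Approximate expected number of active heads;
        # add lambda * l0_penalty() to the training loss.
        return torch.sigmoid(self.logits).sum()
```

Heads whose gates settle at zero can then be physically removed from the weight matrices, which is what makes this form of pruning hardware-friendly compared to unstructured (per-weight) sparsity.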