yqcKvMW1sL@OpenReview


#1 Learning to Quantize for Training Vector-Quantized Networks

Authors: Peijia Qin, Jianguo Zhang

Deep neural networks incorporating discrete latent variables have shown significant potential in sequence modeling. A notable approach is to leverage vector quantization (VQ) to generate discrete representations within a codebook. However, its discrete nature prevents the use of standard backpropagation, making efficient codebook training challenging. In this work, we introduce **Meta-Quantization (MQ)**, a novel vector quantization training framework inspired by meta-learning. Our method separates the optimization of the codebook and the auto-encoder into two levels. Furthermore, we introduce a hyper-net to replace the embedding-parameterized codebook, enabling the codebook to be dynamically generated based on feedback from the auto-encoder. Unlike previous VQ objectives, ours yields a meta-objective that makes codebook training task-aware. We validate the effectiveness of MQ with the VQVAE and VQGAN architectures on image reconstruction and generation tasks. Experimental results showcase the superior generative performance of MQ, underscoring its potential as a robust alternative to existing VQ methods.
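
The sketch below is purely illustrative and is not the authors' implementation: it contrasts a conventional embedding-parameterized VQ codebook (trained through the straight-through estimator) with a hypothetical hyper-net that generates the codebook from a summary of the encoder's outputs, which is the high-level idea the abstract describes. All module names, the choice of a mean-pooled summary, and the loss terms shown are assumptions made for this example.

```python
# Illustrative sketch only -- not the paper's method. Names and design choices
# (HyperNetVQ, mean-pooled summary, MLP hyper-net) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StandardVQ(nn.Module):
    """Conventional codebook: a learnable embedding table."""

    def __init__(self, num_codes: int, dim: int):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z_e: torch.Tensor):
        # z_e: (batch, dim) continuous encoder outputs.
        dists = torch.cdist(z_e, self.codebook.weight)   # (batch, num_codes)
        idx = dists.argmin(dim=-1)                        # nearest-code indices
        z_q = self.codebook(idx)                          # quantized vectors
        # Straight-through estimator: copy gradients from z_q back to z_e.
        z_q_st = z_e + (z_q - z_e).detach()
        return z_q_st, z_q, idx


class HyperNetVQ(nn.Module):
    """Hypothetical hyper-net codebook: codes are generated, not stored."""

    def __init__(self, num_codes: int, dim: int, hidden: int = 128):
        super().__init__()
        self.num_codes = num_codes
        self.dim = dim
        # Maps a summary of the encoder's outputs to a full codebook.
        self.hyper = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_codes * dim),
        )

    def forward(self, z_e: torch.Tensor):
        # Summarize the current batch of encoder outputs (a simple mean here).
        summary = z_e.mean(dim=0)                                    # (dim,)
        codebook = self.hyper(summary).view(self.num_codes, self.dim)
        dists = torch.cdist(z_e, codebook)
        idx = dists.argmin(dim=-1)
        z_q = codebook[idx]
        z_q_st = z_e + (z_q - z_e).detach()
        return z_q_st, z_q, idx


if __name__ == "__main__":
    torch.manual_seed(0)
    z_e = torch.randn(16, 64, requires_grad=True)
    for vq in (StandardVQ(512, 64), HyperNetVQ(512, 64)):
        z_q_st, z_q, idx = vq(z_e)
        # Typical VQ auxiliary terms: a commitment loss on the encoder and a
        # codebook loss; the reconstruction loss would use z_q_st downstream.
        commit = F.mse_loss(z_e, z_q.detach())
        codebook_loss = F.mse_loss(z_q, z_e.detach())
        print(type(vq).__name__, commit.item(), codebook_loss.item())
```

In the hyper-net variant, the codebook is a function of the encoder's current behavior rather than a fixed parameter table; the paper's bi-level (meta-learning) training of that hyper-net is not reproduced here.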

Subject: ICML.2025 - Poster