With the advance of RNN models offering linear complexity, the quadratic complexity bottleneck of transformers has the potential to be overcome. Notably, the emerging Mamba-2 has demonstrated competitive performance, narrowing the gap between RNN models and transformers. However, due to sequential processing and vanishing gradients, RNN models struggle to capture long-range dependencies, limiting contextual understanding. This leads to slow convergence, high resource demands, and poor performance on downstream understanding and complex reasoning tasks. In this work, we present MaTVLM, a method for distilling a pre-trained vision-language model (VLM) into an efficient Mamba-Transformer hybrid architecture. Specifically, we construct the hybrid by replacing a portion of the transformer decoder layers in the pre-trained VLM with Mamba-2 layers. Building on this design, we employ a single-stage distillation process with a carefully designed initialization strategy: exploiting the inherent relationship between attention and Mamba-2, we initialize the Mamba-2 layers with the corresponding attention weights, which notably accelerates convergence. With the pre-trained VLM serving as the teacher model, the distillation process further improves both convergence speed and model performance. We also investigate the impact of differential distillation losses within our training framework. We evaluate MaTVLM on multiple benchmarks, demonstrating competitive performance against the teacher model and existing VLMs while surpassing both Mamba-based VLMs and models of comparable parameter scale. Remarkably, MaTVLM achieves up to 4.3 times faster inference than the teacher model while reducing GPU memory consumption by 27.5%, all without compromising performance. Code and models are released at https://github.com/hustvl/MaTVLM.
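As a rough illustration of the two ideas above, the sketch below builds a toy hybrid by swapping selected decoder layers for Mamba-2-style blocks whose projections are initialized from the corresponding attention Q/K/V/O weights. This is only a minimal sketch under stated assumptions, not the released implementation: the `ToyDecoderLayer` and `Mamba2Layer` classes, the attribute names (`self_attn`, `q_proj`, ...), and the chosen replacement indices are all illustrative.

```python
# Minimal sketch (assumptions, not the authors' released code) of the hybrid
# construction described above: a subset of transformer decoder layers is
# replaced by Mamba-2-style layers whose projections are initialized from the
# corresponding attention Q/K/V/O weights.
import torch
import torch.nn as nn


class ToyDecoderLayer(nn.Module):
    """Stand-in for a pre-trained transformer decoder layer."""
    def __init__(self, d: int):
        super().__init__()
        self.self_attn = nn.Module()
        self.self_attn.q_proj = nn.Linear(d, d, bias=False)
        self.self_attn.k_proj = nn.Linear(d, d, bias=False)
        self.self_attn.v_proj = nn.Linear(d, d, bias=False)
        self.self_attn.o_proj = nn.Linear(d, d, bias=False)
        self.mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, x):
        return x + self.mlp(x)  # attention computation omitted in this toy layer


class Mamba2Layer(nn.Module):
    """Hypothetical Mamba-2-style block exposing attention-shaped projections."""
    def __init__(self, d: int):
        super().__init__()
        self.q_proj = nn.Linear(d, d, bias=False)   # roughly analogous to attention Q (C in SSM terms)
        self.k_proj = nn.Linear(d, d, bias=False)   # roughly analogous to attention K (B)
        self.v_proj = nn.Linear(d, d, bias=False)   # roughly analogous to attention V (x)
        self.out_proj = nn.Linear(d, d, bias=False)

    def forward(self, x):
        # The real layer runs a selective state-space recurrence; omitted here.
        return x + self.out_proj(self.v_proj(x))


def build_hybrid(layers: nn.ModuleList, replace_idx, d: int) -> nn.ModuleList:
    """Replace the layers at `replace_idx` with Mamba-2-style layers and copy
    the attention weights into the matching projections (the initialization
    idea sketched in the abstract)."""
    for i in replace_idx:
        attn = layers[i].self_attn
        m = Mamba2Layer(d)
        m.q_proj.weight.data.copy_(attn.q_proj.weight.data)
        m.k_proj.weight.data.copy_(attn.k_proj.weight.data)
        m.v_proj.weight.data.copy_(attn.v_proj.weight.data)
        m.out_proj.weight.data.copy_(attn.o_proj.weight.data)
        layers[i] = m  # remaining layers keep their original attention
    return layers


if __name__ == "__main__":
    d, n_layers = 64, 8
    layers = nn.ModuleList([ToyDecoderLayer(d) for _ in range(n_layers)])
    hybrid = build_hybrid(layers, replace_idx=[1, 3, 5, 7], d=d)  # interleaving ratio is assumed
    x = torch.randn(2, 16, d)
    for layer in hybrid:
        x = layer(x)
    print(x.shape)  # torch.Size([2, 16, 64])
```

In the actual method, the resulting hybrid is then trained in a single distillation stage with the unchanged pre-trained VLM as the teacher (e.g., a divergence between student and teacher outputs combined with the standard language-modeling loss; the exact loss composition is what the abstract refers to as the differential distillation losses).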