peng24b@interspeech_2024@ISCA


#1 OWSM v3.1: Better and Faster Open Whisper-Style Speech Models based on E-Branchformer

Authors: Yifan Peng; Jinchuan Tian; William Chen; Siddhant Arora; Brian Yan; Yui Sudo; Muhammad Shakeel; Kwanghee Choi; Jiatong Shi; Xuankai Chang; Jee-weon Jung; Shinji Watanabe

Recent studies have highlighted the importance of fully open foundation models. The Open Whisper-style Speech Model (OWSM) is an initial step toward reproducing OpenAI's Whisper using publicly available data and open-source toolkits. However, previous versions of OWSM (v1 to v3) are still based on the standard Transformer, which might lead to inferior performance compared to state-of-the-art speech encoder architectures. This work aims to improve the performance and efficiency of OWSM without using additional data. We present a series of E-Branchformer-based models named OWSM v3.1, ranging from 100M to 1B parameters. OWSM v3.1 outperforms its predecessor, OWSM v3, on most evaluation benchmarks while achieving up to 25% faster inference. We further reveal an emergent capability of OWSM v3.1 in zero-shot contextual biasing speech recognition. We also provide a model trained on a subset of the data with low license restrictions. We will publicly release the code, pre-trained models, and training logs.
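
Since the abstract says the pre-trained models are released openly and OWSM is built with open-source toolkits, a minimal inference sketch along the lines of ESPnet's speech-to-text interface might look like the following. The model tag `espnet/owsm_v3.1_ebf`, the `<eng>`/`<asr>` symbols, and the audio path are assumptions based on typical ESPnet conventions, not details given in this abstract.

```python
# Minimal sketch: transcribing speech with an OWSM v3.1 checkpoint via ESPnet.
# Assumptions: the espnet package is installed, and the model tag
# "espnet/owsm_v3.1_ebf" plus the <eng>/<asr> tokens follow the usual
# ESPnet s2t conventions; none of these are specified in the abstract.
import soundfile as sf
from espnet2.bin.s2t_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/owsm_v3.1_ebf",  # hypothetical released model tag
    device="cpu",
    beam_size=5,
    lang_sym="<eng>",        # language token (English)
    task_sym="<asr>",        # task token (speech recognition)
)

# OWSM models expect 16 kHz mono audio.
speech, rate = sf.read("sample_16k.wav")  # placeholder path
assert rate == 16000

# The first n-best entry is a (text, tokens, token_ids, hypothesis) tuple.
text, *_ = speech2text(speech)[0]
print(text)
```

Because OWSM follows Whisper's multitask token scheme, swapping the `task_sym` (e.g., to a translation token) would, in principle, select a different task with the same checkpoint.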