likhomanenko21@interspeech_2021@ISCA

Total: 1

#1 Rethinking Evaluation in ASR: Are Our Models Robust Enough?

Authors: Tatiana Likhomanenko; Qiantong Xu; Vineel Pratap; Paden Tomasello; Jacob Kahn; Gilad Avidov; Ronan Collobert; Gabriel Synnaeve

Is pushing numbers on a single benchmark valuable in automatic speech recognition? Research results in acoustic modeling are typically evaluated based on performance on a single dataset. While the research community has coalesced around various benchmarks, we set out to understand generalization performance in acoustic modeling across datasets — in particular, whether models trained on a single dataset transfer to other (possibly out-of-domain) datasets. Further, we demonstrate that when a large enough set of benchmarks is used, average word error rate (WER) performance over them provides a good proxy for performance on real-world data. Finally, we show that training a single acoustic model on the most widely used datasets — combined — reaches competitive performance on both research and real-world benchmarks.
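
The averaged-WER proxy the abstract describes is simple to compute. Below is a minimal sketch (not the authors' code): corpus-level WER per benchmark, then an unweighted mean across benchmarks. The function names `corpus_wer` and `average_wer` and the toy data are illustrative assumptions, not from the paper.

```python
from typing import Dict, List, Tuple

def _edit_distance(ref: List[str], hyp: List[str]) -> int:
    """Word-level Levenshtein distance between reference and hypothesis."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            curr[j] = min(
                prev[j - 1] + (r != h),  # substitution (free if words match)
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
            )
        prev = curr
    return prev[len(hyp)]

def corpus_wer(pairs: List[Tuple[str, str]]) -> float:
    """Corpus-level WER: total word errors over total reference words."""
    errors = sum(_edit_distance(ref.split(), hyp.split()) for ref, hyp in pairs)
    words = sum(len(ref.split()) for ref, _ in pairs)
    return errors / max(words, 1)

def average_wer(benchmarks: Dict[str, List[Tuple[str, str]]]) -> float:
    """Unweighted mean of per-benchmark WERs, so each dataset counts equally."""
    return sum(corpus_wer(p) for p in benchmarks.values()) / len(benchmarks)

# Toy example: (reference, hypothesis) pairs per benchmark.
benchmarks = {
    "librispeech": [("the cat sat", "the cat sat")],
    "common_voice": [("hello world", "hello word")],
}
print(average_wer(benchmarks))  # 0.25: WER 0.0 on one set, 0.5 on the other
```

Whether to weight the mean by dataset size is a design choice; the unweighted form treats every benchmark as an equal draw from the space of target domains, which matches the abstract's framing of the average as a proxy for real-world performance.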