sato23@interspeech_2023@ISCA

Total: 1

#1 Downstream Task Agnostic Speech Enhancement with Self-Supervised Representation Loss

Authors: Hiroshi Sato; Ryo Masumura; Tsubasa Ochiai; Marc Delcroix; Takafumi Moriya; Takanori Ashihara; Kentaro Shinayama; Saki Mizuno; Mana Ihori; Tomohiro Tanaka; Nobukatsu Hojo

Self-supervised learning (SSL) is the latest breakthrough in speech processing, especially for label-scarce downstream tasks, because it leverages massive amounts of unlabeled audio data. The noise robustness of SSL models is one of the key challenges in expanding their application. Speech enhancement (SE) can be used to tackle this issue; however, the mismatch between the SE model and the SSL models potentially limits its effect. In this work, we propose a new SE training criterion that minimizes the distance between clean and enhanced signals in the feature representation of the SSL model to alleviate this mismatch. We expect the loss in the SSL domain to guide SE training to preserve or enhance the various levels of speech characteristics that may be required for high-level downstream tasks. Experiments show that our proposal improves the performance of an SE and SSL pipeline on five downstream tasks with noisy input while maintaining SE performance.
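
The following is a minimal PyTorch sketch of the SSL-representation loss idea described in the abstract: the enhanced and clean signals are passed through a frozen SSL encoder and the distance between their feature representations is added to a conventional signal-level SE loss. The class name SSLRepresentationLoss, the weight alpha, and the generic ssl_encoder/se_model modules are hypothetical placeholders for illustration, not the authors' actual architecture or loss weighting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SSLRepresentationLoss(nn.Module):
    """Signal-level SE loss plus a distance between clean and enhanced
    signals computed in the feature space of a frozen SSL encoder."""

    def __init__(self, ssl_encoder: nn.Module, alpha: float = 1.0):
        super().__init__()
        self.ssl_encoder = ssl_encoder
        for p in self.ssl_encoder.parameters():
            p.requires_grad_(False)  # keep the SSL model fixed during SE training
        self.alpha = alpha           # weight of the SSL-domain term (assumed value)

    def forward(self, enhanced: torch.Tensor, clean: torch.Tensor) -> torch.Tensor:
        # Conventional signal-domain SE loss (placeholder: L1 on waveforms).
        signal_loss = F.l1_loss(enhanced, clean)

        # SSL-domain loss: compare feature representations of enhanced and clean.
        feat_enhanced = self.ssl_encoder(enhanced)
        with torch.no_grad():
            feat_clean = self.ssl_encoder(clean)  # clean target needs no gradient
        ssl_loss = F.l1_loss(feat_enhanced, feat_clean)

        return signal_loss + self.alpha * ssl_loss


# Hypothetical usage: `se_model` maps noisy waveforms to enhanced waveforms,
# and `frozen_ssl` is any waveform-to-feature SSL encoder (e.g. a wav2vec-style model).
# criterion = SSLRepresentationLoss(ssl_encoder=frozen_ssl, alpha=1.0)
# loss = criterion(se_model(noisy), clean)
# loss.backward()  # gradients flow only through the SE model
```

Freezing the SSL encoder means it acts purely as a fixed feature extractor, so only the SE model is updated; how the SSL-domain and signal-domain terms are balanced (here a single scalar alpha) is an assumption of this sketch.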