Interspeech 2024 (ISCA)

Total: 1

#1 M2D-CLAP: Masked Modeling Duo Meets CLAP for Learning General-purpose Audio-Language Representation

Authors: Daisuke Niizumi; Daiki Takeuchi; Yasunori Ohishi; Noboru Harada; Masahiro Yasuda; Shunsuke Tsubaki; Keisuke Imoto

Contrastive language-audio pre-training (CLAP) enables zero-shot (ZS) inference on audio and shows promising performance on several classification tasks. However, conventional audio representations remain crucial for the many tasks where ZS is not applicable (e.g., regression problems). Here, we explore a new representation, a general-purpose audio-language representation, that performs well in both ZS and transfer learning. To that end, we propose a new method, M2D-CLAP, which combines the self-supervised learning method Masked Modeling Duo (M2D) with CLAP. M2D learns an effective representation for modeling audio signals, and CLAP aligns that representation with text embeddings. As a result, M2D-CLAP learns a versatile representation usable for both ZS and transfer learning. Experiments show that M2D-CLAP performs well on linear evaluation, fine-tuning, and ZS classification, including a state-of-the-art 75.17% on GTZAN, thus achieving a general-purpose audio-language representation.
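To illustrate the CLAP half of the method, the audio-text alignment the abstract describes is typically trained with a symmetric contrastive objective over paired audio and caption embeddings. The following is a minimal NumPy sketch of such a CLIP/CLAP-style loss; the function name, shapes, and temperature value are illustrative assumptions, not the paper's exact formulation, and M2D-CLAP additionally combines this with the M2D masked-modeling objective.

```python
import numpy as np

def clap_contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired
    audio/text embeddings (both of shape [batch, dim]).
    Hypothetical sketch; not the paper's exact loss."""
    # L2-normalize rows so dot products are cosine similarities
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    # Similarity logits for every audio-text pair, scaled by temperature
    logits = a @ t.T / temperature

    def xent_diagonal(l):
        # Matched pairs sit on the diagonal: softmax cross-entropy
        # with target i for row i, computed in a numerically stable way
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the audio->text and text->audio directions
    return (xent_diagonal(logits) + xent_diagonal(logits.T)) / 2
```

Minimizing this loss pulls each audio clip's embedding toward its own caption's embedding and away from the other captions in the batch, which is what makes zero-shot classification by text-prompt similarity possible afterward.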