2024.acl-srw.5@ACL


Speech-to-Speech Translation with Discrete-Unit-Based Style Transfer

Authors: Yongqi Wang; Jionghao Bai; Rongjie Huang; Ruiqi Li; Zhiqing Hong; Zhou Zhao

Direct speech-to-speech translation (S2ST) with discrete self-supervised representations has achieved remarkable accuracy, but it cannot preserve the speaker timbre of the source speech. Meanwhile, the scarcity of high-quality speaker-parallel data poses a challenge for learning style transfer during translation. We design an S2ST pipeline with style-transfer capability built on discrete self-supervised speech representations and codec units. The acoustic language model we introduce for style transfer leverages self-supervised in-context learning, acquiring style-transfer ability without relying on any speaker-parallel data and thereby overcoming the data scarcity. Trained on extensive data, our model achieves zero-shot cross-lingual style transfer on previously unseen source languages. Experiments show that our model generates translated speech with high fidelity and speaker similarity. Audio samples are available at http://stylelm.github.io/.
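The two-stage pipeline the abstract describes (speech-to-unit translation followed by an acoustic language model that restores the source speaker's style via in-context prompting) can be sketched roughly as below. This is a minimal toy illustration, not the authors' implementation: all function names, the stub unit extraction, and the prompt-prepending conditioning are assumptions standing in for real neural models.

```python
# Hypothetical sketch of a discrete-unit S2ST pipeline with style transfer.
# Every component here is a stand-in stub for a real neural model.

from dataclasses import dataclass
from typing import List

@dataclass
class StyledSpeech:
    codec_units: List[int]  # acoustic codec units carrying content + timbre
    note: str               # for inspection only in this toy sketch

def speech_to_units(source_speech: List[float]) -> List[int]:
    """Stage 1 (stub): map source audio to discrete self-supervised units
    encoding the *translated* content. A real system runs an S2UT model."""
    return [int(abs(x) * 10) % 100 for x in source_speech]

def acoustic_lm_style_transfer(content_units: List[int],
                               style_prompt: List[int]) -> List[int]:
    """Stage 2 (stub): an acoustic LM conditioned on codec units from the
    source speaker generates styled codec units via in-context learning.
    Prepending the prompt mimics prompt-based conditioning."""
    return style_prompt + content_units

def pipeline(source_speech: List[float],
             style_prompt: List[int]) -> StyledSpeech:
    content = speech_to_units(source_speech)
    codec = acoustic_lm_style_transfer(content, style_prompt)
    return StyledSpeech(codec_units=codec, note="translated, source-styled")
```

Because stage 2 only ever sees (style prompt, content units) pairs drawn from the same speaker during training, no speaker-parallel translation data is needed; at inference the prompt simply comes from the source speaker.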