2023.findings-emnlp.19@ACL


#1 Long-Form Speech Translation through Segmentation with Finite-State Decoding Constraints on Large Language Models

Authors: Arya McCarthy, Hao Zhang, Shankar Kumar, Felix Stahlberg, Ke Wu

One challenge in speech translation is that much spoken content is long-form, yet short units are necessary for obtaining high-quality translations. To address this mismatch, we adapt large language models (LLMs) to split long ASR transcripts into segments that can be independently translated so as to maximize the overall translation quality. We overcome the tendency of LLMs to hallucinate by incorporating finite-state constraints during decoding; these eliminate invalid outputs without requiring additional training. We find that LLMs adapt well to transcripts containing ASR errors through prompt-tuning or fine-tuning. Relative to a state-of-the-art automatic punctuation baseline, our best LLM improves average BLEU by 2.9 points for English–German, English–Spanish, and English–Arabic TED talk translation across 9 test sets, purely by improving segmentation.
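To make the idea of finite-state decoding constraints concrete, below is a minimal sketch (not the authors' implementation) of constrained segmentation: the decoder may only copy the next transcript token or emit a segment-boundary marker, so hallucinated insertions and deletions are ruled out by construction. The names `log_prob`, `BRK`, and `constrained_segment` are hypothetical stand-ins, and the greedy two-choice loop is a simplification of constraining an actual LLM's beam search with a finite-state machine.

```python
# Minimal sketch of finite-state-constrained decoding for transcript
# segmentation. At every position the only legal continuations are
# (a) copy the next transcript token verbatim, or (b) emit a boundary
# marker <brk>; all other outputs are masked out, so the model cannot
# hallucinate or drop content. `log_prob` is a hypothetical stand-in
# for an LLM's next-token scorer.

from typing import Callable, List

BRK = "<brk>"  # hypothetical segment-boundary marker


def constrained_segment(
    transcript: List[str],
    log_prob: Callable[[List[str], str], float],
    max_segment_len: int = 40,
) -> List[List[str]]:
    """Greedy decoding under a two-choice finite-state constraint."""
    output: List[str] = []          # full decoded sequence (tokens + markers)
    segments: List[List[str]] = [[]]
    for tok in transcript:
        # A boundary is only allowed once the current segment is non-empty,
        # and is forced if the segment grows past the length limit.
        allow_break = len(segments[-1]) > 0
        force_break = len(segments[-1]) >= max_segment_len
        if allow_break and (force_break or log_prob(output, BRK) > log_prob(output, tok)):
            output.append(BRK)
            segments.append([])
        # Copying the transcript token is always the only other legal move.
        output.append(tok)
        segments[-1].append(tok)
    return segments


if __name__ == "__main__":
    # Toy scorer that prefers a boundary after sentence-final punctuation.
    def toy_log_prob(prefix: List[str], candidate: str) -> float:
        if candidate == BRK:
            return 0.0 if prefix and prefix[-1].endswith((".", "?", "!")) else -5.0
        return -1.0

    words = "thank you so much chris . and it 's truly a great honor".split()
    print(constrained_segment(words, toy_log_prob))
    # [['thank', 'you', 'so', 'much', 'chris', '.'],
    #  ['and', 'it', "'s", 'truly', 'a', 'great', 'honor']]
```

Because the constraint only restricts which tokens may be emitted, it works with any pretrained or prompt-tuned LLM at inference time and requires no additional training, matching the abstract's claim.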