2025.findings-emnlp.550@ACL

#1 XL-Suite: Cross-Lingual Synthetic Training and Evaluation Data for Open-Ended Generation

Authors: Vivek Iyer, Pinzhen Chen, Ricardo Rei, Alexandra Birch

Cross-lingual open-ended generation – responding in a language different from that of the query – is an important yet understudied problem. This work proposes XL-Instruct, a novel technique for generating high-quality synthetic data, and introduces XL-AlpacaEval, a new benchmark for evaluating the cross-lingual generation capabilities of large language models (LLMs). Our experiments show that fine-tuning on just 8K instructions generated with XL-Instruct significantly improves model performance, raising the win rate against GPT-4o-mini from 7.4% to 21.5% and improving several fine-grained quality metrics. Moreover, base LLMs fine-tuned on XL-Instruct exhibit strong zero-shot improvements on same-language question answering, as shown on our machine-translated m-AlpacaEval. These consistent gains highlight the promising role of XL-Instruct in the post-training of multilingual LLMs. Finally, we publicly release XL-Suite, a collection of training and evaluation data to facilitate research in cross-lingual open-ended generation.

Subject: EMNLP.2025 - Findings