Large language models (LLMs) have become the standard for natural language generation tasks, with instruction tuning further enhancing their capabilities. However, the scarcity of instruction-tuning datasets in languages other than English limits their applicability across diverse languages. To address this, prior work adapts English-centric LLMs to other languages by pairing English tuning data with its translation into the target language, a setup in which we observe negative interference between the two languages. To resolve this, our contribution is to identify English as an internal pivot language, based on which we disentangle the roles of English and target-language data in training. Specifically, we cast these two roles as pivoted training objectives and further propose to regulate between them, so as to generalize better to under-represented languages. Experiments across various languages demonstrate the effectiveness of our approach on multiple benchmarks. The code is publicly available for further exploration.
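To make the idea of regulating between two pivoted objectives concrete, below is a minimal sketch of how an English-pivot loss and a target-language loss might be combined under a single weighting term. The function names, batch field names, and the simple interpolation with `alpha` are illustrative assumptions for exposition, not the paper's actual formulation.

```python
def pivoted_losses(model, batch):
    """Compute one causal-LM loss per pivoted objective for an instruction example.
    Assumes a Hugging Face-style causal LM whose forward pass returns `.loss`
    when `labels` are supplied, with prompt tokens masked out as -100.
    The batch field names are hypothetical."""
    loss_en = model(input_ids=batch["pivot_input_ids"],
                    labels=batch["pivot_labels"]).loss
    loss_tgt = model(input_ids=batch["target_input_ids"],
                     labels=batch["target_labels"]).loss
    return loss_en, loss_tgt


def regulated_objective(loss_en, loss_tgt, alpha=0.5):
    """Interpolate the two objectives; `alpha` is a hypothetical regulation
    weight trading off the English-pivot and target-language terms."""
    return alpha * loss_en + (1.0 - alpha) * loss_tgt


def training_step(model, optimizer, batch, alpha=0.5):
    """One gradient step on the combined objective."""
    loss_en, loss_tgt = pivoted_losses(model, batch)
    loss = regulated_objective(loss_en, loss_tgt, alpha)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, a smaller `alpha` shifts weight toward the target-language term; how the regulation is actually parameterized and scheduled is left to the paper's method section.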