2025.naacl-industry.17@ACL

Total: 1

#1 Breaking Boundaries: Investigating the Effects of Model Editing on Cross-linguistic Performance

Authors: Somnath Banerjee, Avik Halder, Rajarshi Mandal, Sayan Layek, Ian Soboroff, Rima Hazra, Animesh Mukherjee

Pretrained language models (PLMs) have revolutionized NLP but amplify linguistic inequities in multilingual applications. While prior studies focused on transformer architectures such as BERT, we evaluate large language models (LLMs) including Mistral, TowerInstruct, OpenHathi, Tamil-Llama, and Kan-Llama. Through rigorous testing across eight languages spanning high-resource (English, German, French, Italian, Spanish) and low-resource (Hindi, Tamil, Kannada) settings, we reveal systemic failures in preserving multilingual reliability and adaptability. Using paradigms like 'each language for itself' (ELFI) and 'each language for others' (ELFO), we highlight the inability of current LLMs to bridge linguistic divides. Even model merging fails to mitigate these gaps, exposing fundamental limitations. These findings emphasize the critical need for reimagining AI architectures to deliver true linguistic inclusivity and equitable performance across diverse languages.

Subject: NAACL.2025 - Industry Track