2025.naacl-long.100@ACL

Total: 1

#1 An Interpretable and Crosslingual Method for Evaluating Second-Language Dialogues

Authors: Rena Gao, Jingxuan Wu, Xuetong Wu, Carsten Roever, Jing Wu, Long Lv, Jey Han Lau

We analyse the cross-lingual transferability of a dialogue evaluation framework that assesses the relationships between micro-level linguistic features (e.g. backchannels) and macro-level interactivity labels (e.g. topic management), originally designed for English-as-a-second-language dialogues. To this end, we develop CNIMA (**C**hinese **N**on-Native **I**nteractivity **M**easurement and **A**utomation), a labelled Chinese-as-a-second-language dataset of 10K dialogues. We find the evaluation framework to be robust across languages, revealing both language-specific and language-universal relationships between micro-level and macro-level features. We then propose an automated, interpretable approach with low data requirements that scores the overall quality of a second-language dialogue based on the framework. Our approach is interpretable in that it reveals the key linguistic and interactivity features that contribute to the overall quality score, and as it does not require labelled data, it can be adapted to other languages for second-language dialogue evaluation.
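
A minimal sketch of the kind of interpretable, two-step scoring the abstract describes, assuming logistic regression as the interpretable model: micro-level feature counts predict macro-level interactivity labels, whose coefficients expose each feature's contribution, and the label predictions are aggregated into an overall quality score. All names beyond backchannels and topic management, the toy data, and the aggregation rule are hypothetical illustrations, not the paper's actual pipeline:

```python
# Hypothetical two-step scorer: micro features -> macro labels -> quality.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Assumed micro-level features (counts per dialogue) and macro-level labels;
# only "backchannels" and "topic_management" come from the abstract itself.
MICRO_FEATURES = ["backchannels", "code_switches", "collaborative_finishes", "self_repairs"]
MACRO_LABELS = ["topic_management", "tone_appropriateness", "conversation_opening"]

# Toy data standing in for annotated dialogues.
rng = np.random.default_rng(0)
X = rng.poisson(3, size=(200, len(MICRO_FEATURES)))
Y = (X.sum(axis=1, keepdims=True)
     + rng.normal(0, 2, (200, len(MACRO_LABELS))) > 12).astype(int)

# Step 1: one linear classifier per macro label; the coefficients reveal
# which micro-level features drive each interactivity judgement.
models = [LogisticRegression().fit(X, Y[:, j]) for j in range(len(MACRO_LABELS))]
for label, m in zip(MACRO_LABELS, models):
    top = MICRO_FEATURES[int(np.argmax(np.abs(m.coef_[0])))]
    print(f"{label}: strongest micro-level predictor = {top}")

# Step 2: overall quality as the mean predicted probability of a positive
# macro label (one simple aggregation choice, assumed for illustration).
def overall_quality(x_row):
    probs = [m.predict_proba(x_row.reshape(1, -1))[0, 1] for m in models]
    return float(np.mean(probs))

print("quality:", round(overall_quality(X[0]), 3))
```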

Subject: NAACL.2025 - Long Papers