IJCAI 2022


#1 MuiDial: Improving Dialogue Disentanglement with Intent-Based Mutual Learning

Authors: Ziyou Jiang, Lin Shi, Celia Chen, Fangwen Mu, Yumin Zhang, Qing Wang

The main goal of dialogue disentanglement is to separate the mixed utterances in a chat slice into independent dialogues. Existing models typically use either utterance-to-utterance (U2U) prediction, which decides whether two utterances with a "reply-to" relationship belong to the same dialogue, or utterance-to-thread (U2T) prediction, which decides to which dialogue thread a given utterance belongs. Inspired by mutual learning, we propose MuiDial, a novel dialogue disentanglement model that exploits the intent of each utterance and feeds the intent into a mutual-learning U2U-U2T disentanglement model. Experimental results and in-depth analysis on several benchmark datasets demonstrate the effectiveness and generalizability of our approach.
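The abstract contrasts two scoring views (U2U and U2T) coupled by mutual learning. As a rough, hypothetical illustration of that coupling, not the paper's actual architecture, the sketch below treats each view as producing a distribution over candidate threads for a new utterance and adds a symmetric KL regularizer so each view mimics the other; all scores and names here are invented for the example.

```python
import math

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical scores for one new utterance over 3 candidate threads:
# the U2U view aggregates pairwise "reply-to" scores per thread, while
# the U2T view scores the utterance against each thread directly.
u2u_scores = [2.0, 0.5, -1.0]
u2t_scores = [1.5, 1.0, -0.5]

p = softmax(u2u_scores)  # U2U distribution over threads
q = softmax(u2t_scores)  # U2T distribution over threads

# Mutual-learning regularizer: penalize disagreement in both directions,
# so each view is trained to match the other's soft predictions.
mutual_loss = kl_divergence(p, q) + kl_divergence(q, p)
```

In a training loop, this regularizer would be added to each view's own supervised disentanglement loss, so the two predictors learn from each other's soft targets rather than only from hard labels.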