
Balancing Accuracy and Efficiency in Multi-Turn Intent Classification for LLM-Powered Dialog Systems in Production

Authors: Junhua Liu, Yong Keat Tan, Bin Fu, Kwan Hui Lim

Accurate multi-turn intent classification is critical for advancing conversational AI systems but remains challenging due to limited datasets and complex contextual dependencies across dialogue turns. This paper presents two novel approaches leveraging Large Language Models (LLMs) to enhance scalability and reduce latency in production dialogue systems. First, we introduce Symbol Tuning, which simplifies intent labels to reduce task complexity and improve performance in multi-turn dialogues. Second, we propose Consistency-aware, Linguistics-Adaptive Retrieval Augmentation (CLARA), a framework that employs LLMs for data augmentation and pseudo-labeling to generate synthetic multi-turn dialogues. These enriched datasets are used to fine-tune a small, efficient model suitable for deployment. Experiments on multilingual dialogue datasets show that our methods yield notable gains in both accuracy and resource efficiency: a 5.09% improvement in classification accuracy, a 40% reduction in annotation costs, and effective deployment in low-resource multilingual industrial settings.
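To make the Symbol Tuning idea concrete, the sketch below shows one common way to apply it to intent classification: descriptive intent labels are replaced with short, semantically neutral symbols in a few-shot prompt, so the model must rely on the in-context examples rather than priors about the label names. The intent names, example dialogues, and prompt layout here are illustrative assumptions, not the paper's exact setup.

```python
# Illustrative sketch of symbol tuning for multi-turn intent classification.
# NOTE: intent labels, few-shot examples, and prompt format are hypothetical.

INTENTS = ["track_order", "refund_request", "product_inquiry"]

# Replace each descriptive label with an arbitrary symbol (A, B, C, ...).
SYMBOL_MAP = {intent: chr(ord("A") + i) for i, intent in enumerate(INTENTS)}

FEW_SHOT = [
    ("Where is my package?", "track_order"),
    ("I want my money back for this item.", "refund_request"),
]

def build_prompt(dialogue_turns, examples=FEW_SHOT):
    """Build a symbol-tuned classification prompt for a multi-turn dialogue."""
    lines = ["Classify the user's intent. Answer with a single symbol."]
    for text, intent in examples:
        lines.append(f"Input: {text}\nIntent: {SYMBOL_MAP[intent]}")
    # Concatenate all turns of the dialogue as the final query.
    lines.append("Input: " + " ".join(dialogue_turns) + "\nIntent:")
    return "\n".join(lines)

def decode(symbol):
    """Map the model's symbol answer back to the original intent label."""
    inverse = {v: k for k, v in SYMBOL_MAP.items()}
    return inverse.get(symbol.strip())
```

The prompt produced by `build_prompt` would be sent to an LLM, and the returned symbol mapped back to a label with `decode`; the same symbol mapping can also be used when generating pseudo-labels for synthetic dialogues before fine-tuning a smaller deployment model.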

Subject: AAAI.2026 - IAAI