2023.acl-demo.9@ACL


#1 OpenSLU: A Unified, Modularized, and Extensible Toolkit for Spoken Language Understanding

Authors: Libo Qin, Qiguang Chen, Xiao Xu, Yunlong Feng, Wanxiang Che

Spoken Language Understanding (SLU) is one of the core components of a task-oriented dialogue system; it aims to extract the semantic meaning of user queries (e.g., intents and slots). In this work, we introduce OpenSLU, an open-source toolkit that provides a unified, modularized, and extensible framework for spoken language understanding. Specifically, OpenSLU unifies 10 SLU models covering both single-intent and multi-intent scenarios, and it supports non-pretrained and pretrained models alike. Additionally, OpenSLU is highly modularized and extensible: it decomposes the model architecture, inference, and learning process into reusable modules, which allows researchers to quickly set up SLU experiments with highly flexible configurations. OpenSLU is implemented on top of PyTorch and released at https://github.com/LightChen233/OpenSLU.
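To illustrate the kind of modular decomposition the abstract describes (a reusable encoder composed with separate intent and slot decoders), here is a minimal PyTorch sketch. All class and parameter names below are illustrative assumptions for exposition; they are not OpenSLU's actual API.

```python
# Minimal sketch of a modular joint intent-slot model.
# Names (Encoder, JointSLUModel, etc.) are hypothetical, not OpenSLU's API.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Reusable utterance encoder (a non-pretrained BiLSTM variant)."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2,
                            batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)
        hidden, _ = self.lstm(embedded)   # [batch, seq_len, hidden_dim]
        return hidden


class JointSLUModel(nn.Module):
    """Composes an encoder with separate intent and slot prediction heads."""

    def __init__(self, encoder, hidden_dim, num_intents, num_slots):
        super().__init__()
        self.encoder = encoder
        self.intent_head = nn.Linear(hidden_dim, num_intents)
        self.slot_head = nn.Linear(hidden_dim, num_slots)

    def forward(self, token_ids):
        hidden = self.encoder(token_ids)
        intent_logits = self.intent_head(hidden.mean(dim=1))  # utterance-level
        slot_logits = self.slot_head(hidden)                   # token-level
        return intent_logits, slot_logits


# Toy usage: swapping the encoder (e.g., for a pretrained one) or either
# prediction head leaves the rest of the pipeline untouched.
encoder = Encoder(vocab_size=1000)
model = JointSLUModel(encoder, hidden_dim=256, num_intents=21, num_slots=120)
tokens = torch.randint(0, 1000, (2, 12))   # toy batch: 2 utterances, 12 tokens
intent_logits, slot_logits = model(tokens)
intent_loss = nn.CrossEntropyLoss()(intent_logits, torch.tensor([3, 7]))
```

The point of the decomposition is that each module (encoder, intent decoder, slot decoder, and the surrounding training/inference loop) can be configured or replaced independently, which is the property the toolkit's flexible configurations build on.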