N19-4011@ACL

Total: 1

#1 ChatEval: A Tool for Chatbot Evaluation

Authors: João Sedoc; Daphne Ippolito; Arun Kirubarajan; Jai Thirani; Lyle Ungar; Chris Callison-Burch

Open-domain dialog systems (i.e., chatbots) are difficult to evaluate. The current best practice for analyzing and comparing these dialog systems is the use of human judgments. However, the lack of standardization in evaluation procedures, together with the fact that model parameters and code are rarely published, hinders systematic human evaluation experiments. We introduce a unified framework for human evaluation of chatbots that augments existing tools and provides a web-based hub for researchers to share and compare their dialog systems. Researchers can submit their trained models to the ChatEval web interface and obtain comparisons with baselines and prior work. The evaluation code is open-source to ensure standardization and transparency. In addition, we introduce open-source baseline models and evaluation datasets. ChatEval can be found at https://chateval.org.
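As a rough illustration of the submission workflow described above, the sketch below runs a chatbot over a file of evaluation prompts and writes one response per line for upload to the web interface. The trivial rule-based bot, the file names, and the one-response-per-line format are assumptions made for illustration only; consult https://chateval.org for the actual submission format.

# Hypothetical sketch (Python): prepare a ChatEval-style submission by
# generating a response for each evaluation prompt and saving them to a file.
# The trivial bot below stands in for a trained dialog model.

def trivial_bot(prompt: str) -> str:
    """Stand-in for a trained dialog model: return a canned reply."""
    if prompt.endswith("?"):
        return "That's a good question; I'm not sure."
    return "Tell me more about that."

def prepare_submission(prompts_path: str, output_path: str) -> None:
    """Read prompts, generate one response per prompt, write one per line."""
    with open(prompts_path, encoding="utf-8") as f:
        prompts = [line.strip() for line in f if line.strip()]
    with open(output_path, "w", encoding="utf-8") as out:
        for prompt in prompts:
            out.write(trivial_bot(prompt) + "\n")

if __name__ == "__main__":
    # File names are placeholders; the real evaluation datasets are the
    # open-source sets distributed with ChatEval.
    prepare_submission("eval_prompts.txt", "responses.txt")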