yamamoto21@interspeech_2021@ISCA


#1 Comparison of Remote Experiments Using Crowdsourcing and Laboratory Experiments on Speech Intelligibility

Authors: Ayako Yamamoto; Toshio Irino; Kenichi Arai; Shoko Araki; Atsunori Ogawa; Keisuke Kinoshita; Tomohiro Nakatani

Many subjective experiments have been performed to develop objective speech intelligibility measures, but the novel coronavirus outbreak has made it difficult to conduct experiments in a laboratory. One solution is remote testing using crowdsourcing; however, because the listening conditions cannot be controlled, it is unclear whether the results are entirely reliable. In this study, we compared speech intelligibility scores obtained from remote and laboratory experiments. The results showed that both the mean and the standard deviation (SD) of the speech reception threshold (SRT) were higher in the remote experiments than in the laboratory experiments. However, the variation of the SRTs across speech-enhancement conditions was similar in the two settings, implying that remote testing may be as useful as laboratory experiments for developing an objective measure. We also show that practice-session scores correlate with SRT values. Because these scores are available before the main tests are performed, they provide a priori information that could be used for data screening to reduce the variability of the SRT distribution.
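The SRT discussed in the abstract is conventionally defined as the SNR at which a listener reaches 50% intelligibility, typically obtained by fitting a psychometric function to word-recognition scores measured at several SNRs. The following is a minimal sketch of that general procedure, not the authors' actual analysis code: it fits a two-parameter logistic function by grid search (the synthetic data, grid ranges, and function names are illustrative assumptions).

```python
# Hedged sketch: estimate a speech reception threshold (SRT) as the SNR
# giving 50% intelligibility, by fitting a logistic psychometric function
# with a simple least-squares grid search. Not the paper's actual method.
import numpy as np

def logistic(snr, midpoint, slope):
    """Two-parameter logistic psychometric function in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - midpoint)))

def estimate_srt(snrs, scores):
    """Fit intelligibility scores (0..1) vs. SNR (dB); return the 50% point."""
    snrs = np.asarray(snrs, dtype=float)
    scores = np.asarray(scores, dtype=float)
    best_mid, best_err = None, np.inf
    # Grid ranges are illustrative; real analyses would use a proper optimizer.
    for mid in np.arange(snrs.min() - 5.0, snrs.max() + 5.0, 0.1):
        for slope in np.arange(0.1, 2.0, 0.1):
            err = np.sum((logistic(snrs, mid, slope) - scores) ** 2)
            if err < best_err:
                best_mid, best_err = mid, err
    return best_mid

# Synthetic example: scores rising through 50% around -6 dB SNR.
snrs = [-12, -9, -6, -3, 0, 3]
scores = [0.05, 0.2, 0.5, 0.8, 0.95, 0.99]
srt = estimate_srt(snrs, scores)
print(f"Estimated SRT: {srt:.1f} dB SNR")
```

A data-screening rule like the one the abstract suggests could then, for example, exclude participants whose practice-session score falls below a preset cutoff before computing SRTs on the main-test data.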