leschly25@interspeech_2025@ISCA

Total: 1

#1 An Exploration of Interpretable Deep Learning Models for the Assessment of Mild Cognitive Impairment

Authors: Emma Cathrine Liisborg Leschly, Oliver Roesler, Michael Neumann, Jackson Liscombe, Abhishek Hosamath, Lakshmi Arbatti, Line H. Clemmensen, Melanie Ganz, Vikram Ramanarayanan

Early diagnosis and intervention are crucial for mild cognitive impairment (MCI), as MCI often progresses to more severe neurodegenerative conditions. In this study, we explore the use of deep learning for MCI detection without losing the interpretability provided by feature-based approaches. We used a dataset of 90 MCI patients and 91 controls collected via a remote assessment platform and analyzed the participants’ spontaneous speech responses to the Patient Report of Problems (PROP), which asks patients to report their most bothersome general health problems. The proposed deep neural network, which features a bottleneck layer comprising 13 interpretable symptom domains, achieved an AUC of 0.62, outperforming a set of feature-based classifiers while retaining interpretability through the bottleneck layer. We further illustrated the model’s interpretability by using Shapley values to examine how the predicted PROP domains influence the final predictions.
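To make the bottleneck idea concrete, below is a minimal PyTorch sketch of such a concept-bottleneck classifier. It is not the authors' implementation: the input dimension, hidden size, sigmoid domain scores, and linear head are assumptions; only the 13-unit bottleneck of PROP symptom domains and the Shapley-based attribution come from the abstract. Because this sketch uses a linear head over the domain scores, the exact Shapley value of each domain relative to a baseline has the closed form weight_i * (score_i - baseline_i).

    # Minimal sketch of a concept-bottleneck MCI classifier (assumed sizes,
    # not the paper's implementation).
    import torch
    import torch.nn as nn

    N_DOMAINS = 13  # interpretable PROP symptom domains (from the abstract)

    class BottleneckMCIClassifier(nn.Module):
        def __init__(self, input_dim=768, hidden_dim=128):  # hypothetical dims
            super().__init__()
            # Encoder from speech-derived features to a hidden representation
            self.encoder = nn.Sequential(
                nn.Linear(input_dim, hidden_dim),
                nn.ReLU(),
            )
            # Bottleneck: one unit per PROP symptom domain
            self.bottleneck = nn.Linear(hidden_dim, N_DOMAINS)
            # Final MCI-vs-control prediction from the domain scores alone
            self.head = nn.Linear(N_DOMAINS, 1)

        def forward(self, x):
            domains = torch.sigmoid(self.bottleneck(self.encoder(x)))
            logit = self.head(domains)
            return logit, domains

    model = BottleneckMCIClassifier()
    x = torch.randn(1, 768)              # dummy input standing in for features
    _, d = model(x)
    baseline = torch.full_like(d, 0.5)   # assumed reference domain scores
    w = model.head.weight.squeeze(0)
    # Exact Shapley values for a linear head: w_i * (d_i - baseline_i)
    shapley = (w * (d - baseline)).squeeze(0).detach()
    print({f"domain_{i}": round(v.item(), 4) for i, v in enumerate(shapley)})

For a nonlinear head, the per-domain attributions would instead need an approximate Shapley estimator (e.g., sampling-based), but the bottleneck still restricts explanations to the 13 interpretable domains.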

Subject: INTERSPEECH.2025 - Modelling and Learning