Adolescent suicide is a pressing global public health issue, and timely identification of suicide risk is crucial. Traditional risk-assessment methods are often limited by their reliance on subjective input and by their resource requirements. This paper addresses these limitations by detecting suicide risk from multi-task speech recordings, using a dataset of 600 Chinese adolescents (aged 10-18 years) provided by the 1st SpeechWellness Challenge. Our approach combines acoustic and semantic features extracted with openSMILE, Emotion2Vec, and a fine-tuned BERT-Chinese model. Base models, including XGBoost and SVM classifiers, were trained with hyperparameters tuned by Bayesian optimization. We then integrated the base models through a multi-model, multi-task nested voting ensemble framework, achieving a final test-set accuracy of 0.63 (recall = 0.74, F1 ≈ 0.67). This work highlights the potential of voice-based biomarkers in mental health assessment.
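As a rough illustration of the ensembling step described above, the sketch below combines heterogeneous base models (SVM and XGBoost) per speech task with soft voting, then averages task-level risk probabilities for the final decision. It assumes features have already been extracted into arrays; the task names, hyperparameters, and the exact two-level voting scheme are illustrative placeholders, not the paper's implementation.

```python
# Minimal sketch of a per-task soft-voting ensemble with an outer average over
# tasks. Features are assumed to be pre-extracted (e.g., openSMILE/Emotion2Vec/
# BERT embeddings); random arrays stand in for them here.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier


def build_task_ensemble():
    """Inner ensemble: combine heterogeneous base models for one speech task."""
    svm = make_pipeline(StandardScaler(), SVC(probability=True, C=1.0))
    xgb = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.05,
                        eval_metric="logloss")
    return VotingClassifier(estimators=[("svm", svm), ("xgb", xgb)],
                            voting="soft")


# Hypothetical per-task data: {task_name: (X_train, y_train, X_test)}.
rng = np.random.default_rng(0)
tasks = {
    name: (rng.normal(size=(100, 32)), rng.integers(0, 2, 100),
           rng.normal(size=(20, 32)))
    for name in ["reading", "picture_description", "open_question"]
}

# Outer vote: average each task-level ensemble's risk probability, then threshold.
task_probs = []
for name, (X_tr, y_tr, X_te) in tasks.items():
    ens = build_task_ensemble().fit(X_tr, y_tr)
    task_probs.append(ens.predict_proba(X_te)[:, 1])
final_pred = (np.mean(task_probs, axis=0) >= 0.5).astype(int)
print(final_pred)
```

In practice the base-model hyperparameters would come from Bayesian optimization (e.g., via a library such as Optuna) rather than the fixed values shown here.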