2024.acl-short.21@ACL


#1 Fine-Tuning Pre-Trained Language Models with Gaze Supervision

Authors: Shuwen Deng; Paul Prasse; David Reich; Tobias Scheffer; Lena Jäger

Human gaze data provide cognitive information that reflects human language comprehension and have been effectively integrated into a variety of natural language processing (NLP) tasks, demonstrating improved performance over corresponding plain-text-based models. In this work, we propose to integrate a gaze module into pre-trained language models (LMs) at the fine-tuning stage to improve their ability to learn representations that are grounded in human language processing. This is done by extending the conventional purely text-based fine-tuning objective with an auxiliary loss that exploits cognitive signals. The gaze module is only included during training, retaining compatibility with existing pre-trained LM-based pipelines. We evaluate the proposed approach using two distinct pre-trained LMs on the GLUE benchmark and observe that the proposed model improves performance compared to both standard fine-tuning and traditional text augmentation baselines.
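To make the idea of an auxiliary gaze loss concrete, the following is a minimal sketch, not the authors' implementation: a shared encoder (standing in for a pre-trained LM) feeds both a downstream classification head and a token-level gaze-prediction head, and the two losses are combined during fine-tuning. The encoder architecture, the weighting factor `lambda_gaze`, and the single regression-style gaze target are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of fine-tuning with an auxiliary gaze loss.
# A small Transformer encoder stands in for a pre-trained LM; the gaze head is
# used only during training and discarded at inference time.
import torch
import torch.nn as nn

class GazeAugmentedClassifier(nn.Module):
    def __init__(self, vocab_size=30522, hidden=128, num_labels=2, num_gaze_features=1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # stand-in for a pre-trained LM
        self.cls_head = nn.Linear(hidden, num_labels)              # downstream task head
        self.gaze_head = nn.Linear(hidden, num_gaze_features)      # auxiliary head, training only

    def forward(self, input_ids):
        h = self.encoder(self.embed(input_ids))   # (batch, seq, hidden)
        logits = self.cls_head(h[:, 0])           # sentence-level prediction from the first token
        gaze_pred = self.gaze_head(h)             # token-level gaze predictions
        return logits, gaze_pred

model = GazeAugmentedClassifier()
task_loss_fn = nn.CrossEntropyLoss()
gaze_loss_fn = nn.MSELoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
lambda_gaze = 0.1  # weight of the auxiliary loss (hypothetical value)

# Toy batch: token ids, task labels, and token-level gaze targets (e.g., fixation durations).
input_ids = torch.randint(0, 30522, (8, 16))
labels = torch.randint(0, 2, (8,))
gaze_targets = torch.rand(8, 16, 1)

logits, gaze_pred = model(input_ids)
loss = task_loss_fn(logits, labels) + lambda_gaze * gaze_loss_fn(gaze_pred, gaze_targets)
loss.backward()
optimizer.step()
```

Because only `cls_head` is needed at inference, a model fine-tuned this way remains drop-in compatible with a standard fine-tuned LM pipeline, which mirrors the compatibility claim in the abstract.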