2025.naacl-long.413@ACL

Making Language Models Robust Against Negation

Authors: MohammadHossein Rezaei, Eduardo Blanco

Negation has been a long-standing challenge for language models. Previous studies have shown that they struggle with negation in many natural language understanding tasks. In this work, we propose a self-supervised method to make language models more robust against negation. We introduce a novel task, Next Sentence Polarity Prediction (NSPP), and a variation of the Next Sentence Prediction (NSP) task. We show that BERT and RoBERTa further pre-trained on our tasks outperform the off-the-shelf versions on nine negation-related benchmarks. Most notably, our pre-training tasks yield between 1.8% and 9.1% improvement on CondaQA, a large question-answering corpus requiring reasoning over negation.
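
The abstract does not give implementation details for NSPP, so the sketch below is only illustrative. It assumes NSPP can be framed as an NSP-style sentence-pair classification objective in which the model predicts whether the next sentence is negated; the label scheme (0 = affirmative, 1 = negated), the choice of bert-base-uncased, and the example sentences are all hypothetical, not the authors' actual setup.

```python
# Minimal sketch of a hypothetical NSPP-style training step (assumptions noted above).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# A sentence pair where the second (next) sentence contains negation.
first = "The museum opens at nine."
second = "It does not admit visitors on Mondays."

# Encode the pair in the usual BERT format: [CLS] A [SEP] B [SEP].
inputs = tokenizer(first, second, return_tensors="pt")
labels = torch.tensor([1])  # hypothetical label: next sentence is negated

# One training step of the assumed sentence-pair polarity objective.
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
```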

Subject: NAACL.2025 - Long Papers