2025.acl-long.1155@ACL

Total: 1

#1 Exploring Explanations Improves the Robustness of In-Context Learning

Authors: Ukyo Honda, Tatsushi Oka

In-context learning (ICL) has emerged as a successful paradigm for leveraging large language models (LLMs). However, it often struggles to generalize beyond the distribution of the provided demonstrations. A recent advancement in enhancing robustness is ICL with explanations (X-ICL), which improves prediction reliability by guiding LLMs to understand and articulate the reasoning behind correct labels. Building on this approach, we introduce an advanced framework that extends X-ICL by systematically exploring explanations for all possible labels (X2-ICL), thereby enabling more comprehensive and robust decision-making. Experimental results on multiple natural language understanding datasets validate the effectiveness of X2-ICL, demonstrating significantly improved robustness to out-of-distribution data compared to existing ICL approaches.
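
The abstract describes X2-ICL only at a high level. The sketch below is a minimal, hedged reading of the idea for a natural language inference task: prompt the model for an explanation under every candidate label, then let it commit to the best-supported one. The label set, prompt wording, and the `llm` helper are illustrative assumptions, not the authors' implementation or templates.

```python
# A rough sketch of the X2-ICL idea: explore explanations for ALL candidate
# labels before deciding, rather than explaining only the predicted label.
# `llm` is a hypothetical stand-in for any text-generation call; replace it
# with a real client. Label set and prompts are assumptions for illustration.

LABELS = ["entailment", "neutral", "contradiction"]  # example NLI label set

def llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real API or local model here."""
    raise NotImplementedError

def x2_icl_predict(demonstrations: str, premise: str, hypothesis: str) -> str:
    # Step 1: explore an explanation for every possible label.
    explanations = {}
    for label in LABELS:
        prompt = (
            f"{demonstrations}\n"
            f"Premise: {premise}\nHypothesis: {hypothesis}\n"
            f"Explain why the label could be '{label}':"
        )
        explanations[label] = llm(prompt).strip()

    # Step 2: have the model compare the candidate explanations and
    # commit to the label whose reasoning is best supported.
    options = "\n".join(f"- {lab}: {exp}" for lab, exp in explanations.items())
    decision_prompt = (
        f"{demonstrations}\n"
        f"Premise: {premise}\nHypothesis: {hypothesis}\n"
        f"Candidate explanations:\n{options}\n"
        f"Which label is best supported? Answer with one label:"
    )
    return llm(decision_prompt).strip()
```

Compared with X-ICL, which explains only the correct label in the demonstrations, this exploration step forces the model to weigh reasoning for every label, which is one plausible source of the robustness gains the abstract reports.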

Subject: ACL.2025 - Long Papers