2025.emnlp-main.387@ACL


DA-Pred: Performance Prediction for Text Summarization under Domain-Shift and Instruct-Tuning

Authors: Anum Afzal, Florian Matthes, Alexander Fabbri

Large Language Models (LLMs) often do not perform as expected under domain shift or after instruct-tuning. A reliable indicator of LLM performance in these settings could assist in decision-making. We present a method that uses the known performance in high-resource domains and fine-tuning settings to predict performance in low-resource domains or on base models, respectively. In our paper, we formulate the task of performance prediction, construct a dataset for it, and train regression models to predict this change in performance. Our proposed methodology is lightweight and, in practice, can help researchers and practitioners decide whether resources should be allocated to data labeling and LLM instruct-tuning.
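The core idea — a lightweight regressor mapping performance observed in high-resource settings to an expected performance change elsewhere — can be sketched as follows. The feature set (source-domain ROUGE-L of the tuned and base models, plus a domain-similarity score) is an illustrative assumption, not the paper's exact design, and the numbers are synthetic:

```python
import numpy as np

# Hypothetical training rows: features observed in high-resource /
# fine-tuned settings. Assumed (not the paper's) feature set:
#   [ROUGE-L of instruct-tuned model, ROUGE-L of base model,
#    domain-similarity score in [0, 1]]
X = np.array([
    [0.42, 0.35, 0.90],
    [0.38, 0.30, 0.75],
    [0.45, 0.33, 0.95],
    [0.30, 0.28, 0.40],
    [0.36, 0.29, 0.60],
])
# Target: observed change in performance (e.g., delta ROUGE-L) in the
# corresponding low-resource / base-model setting (synthetic values).
y = np.array([0.06, 0.04, 0.08, 0.01, 0.03])

# Fit ordinary least squares, with a bias term via a column of ones.
X_b = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(X_b, y, rcond=None)

def predict_delta(features):
    """Predict the performance change for a new domain/model pair."""
    return float(np.append(np.asarray(features, dtype=float), 1.0) @ w)
```

A practitioner could then call `predict_delta([0.40, 0.32, 0.80])` for a candidate low-resource domain and use the predicted delta to decide whether labeling and instruct-tuning are worth the cost.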

Subject: EMNLP.2025 - Main