2025.findings-emnlp.417@ACL

ZEBRA: Leveraging Model-Behavioral Knowledge for Zero-Annotation Preference Dataset Construction

Authors: Jeesu Jung, Chanjun Park, Sangkeun Jung

Recent efforts in LLM alignment have focused on constructing large-scale preference datasets via human or Artificial Intelligence (AI) annotators. However, such approaches rely on instance-wise supervision, incurring substantial annotation cost and offering limited interpretability. In this paper, we propose **ZEBRA**, a model behavior-wise zero-annotation framework that constructs preference data by leveraging model-behavioral knowledge derived from benchmark performance. ZEBRA binarizes response pairs by evaluating the quality and similarity of their origin models, entirely bypassing instance-level annotation. This enables scalable, controllable, and cost-effective alignment data generation. Empirical results show that ZEBRA achieves alignment performance comparable to instance-supervised methods, despite requiring no manual or model-based labeling.
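A minimal sketch of the behavior-wise binarization idea described above, assuming each candidate response carries only its origin model's benchmark score; `BENCHMARK_SCORE`, `min_gap`, and `zebra_pairs` are hypothetical names for illustration, not the paper's actual implementation:

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Response:
    model: str  # name of the origin model
    text: str   # the generated response

# Hypothetical per-model benchmark scores (illustrative values only).
BENCHMARK_SCORE = {"model-a": 0.82, "model-b": 0.61, "model-c": 0.74}

def zebra_pairs(responses: list[Response], min_gap: float = 0.05) -> list[dict]:
    """Binarize response pairs using origin-model benchmark scores.

    A pair is kept only when the score gap between the two origin models
    exceeds `min_gap` (models that score too similarly yield ambiguous
    pairs); the response from the stronger model is labeled 'chosen',
    the other 'rejected'. No instance-level annotation is performed.
    """
    pairs = []
    for a, b in combinations(responses, 2):
        gap = BENCHMARK_SCORE[a.model] - BENCHMARK_SCORE[b.model]
        if abs(gap) < min_gap:
            continue  # origin models too similar: skip this pair
        chosen, rejected = (a, b) if gap > 0 else (b, a)
        pairs.append({"chosen": chosen.text, "rejected": rejected.text})
    return pairs
```

Since labels come from a fixed table of model-level scores rather than per-instance judgments, the same table can binarize arbitrarily many response pairs at no additional annotation cost, which is the scalability claim in the abstract.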

Subject: EMNLP.2025 - Findings