2025.emnlp-tutorials.2@ACL

Advancing Language Models through Instruction Tuning: Recent Progress and Challenges

Authors: Zhihan Zhang, Renze Lou, Fangkai Jiao, Wenpeng Yin, Meng Jiang

The ability to follow instructions is a key capability of AI systems. In NLP, instruction tuning – the process of training language models to follow natural language instructions – has therefore become a fundamental component of the model development pipeline. This tutorial addresses three critical questions in the field: (1) What are the current focal points in instruction tuning research? (2) What are the best practices for training an instruction-following model? (3) What new challenges have emerged? To answer these questions, the tutorial presents a systematic overview of recent advances in instruction tuning. It covers the different stages of model training: supervised fine-tuning, preference optimization, and reinforcement learning. It introduces scalable strategies for building high-quality instruction data, explores approaches for training autonomous AI agents that handle complex real-world tasks, and discusses common criteria for evaluating instruction-following models. The audience will gain a comprehensive understanding of cutting-edge trends in instruction tuning and insights into promising directions for future research.
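
As a concrete illustration of the preference-optimization stage mentioned in the abstract (not part of the tutorial materials themselves), below is a minimal PyTorch sketch of a DPO-style pairwise loss. The function name dpo_loss, the beta value, and the toy inputs are illustrative assumptions; the per-sequence log-probabilities are presumed to have been computed elsewhere for the policy being trained and a frozen reference model.

    # Illustrative sketch of a DPO-style preference-optimization loss.
    # Assumed setup: per-sequence log-probs for chosen/rejected responses
    # under the trained policy and a frozen reference model are available.
    import torch
    import torch.nn.functional as F

    def dpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps, beta=0.1):
        # Implicit rewards: scaled log-ratios of policy vs. reference model.
        chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
        rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
        # Maximize the margin between chosen and rejected responses.
        return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

    # Toy usage with random log-probabilities for a batch of 4 preference pairs.
    torch.manual_seed(0)
    loss = dpo_loss(torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4))
    print(round(loss.item(), 4))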

Subject: EMNLP.2025 - Tutorial Abstracts