2025.acl-long.1204@ACL

Can Language Models Replace Programmers for Coding? REPOCOD Says ‘Not Yet’

Authors: Shanchao Liang, Nan Jiang, Yiran Hu, Lin Tan

Recently, a number of repository-level code generation benchmarks, such as CoderEval, DevEval, RepoEval, RepoBench, and LongCode-Arena, have emerged to evaluate the capabilities of large language models (LLMs) beyond standalone benchmarks like HumanEval and MBPP. A natural question, then, is whether LLMs perform as well on real-world coding tasks as they do on these benchmarks. Unfortunately, these benchmarks cannot answer this question: they consist of short completions or synthetic examples, or focus on small-scale repositories, and thus fail to represent real-world coding tasks. To address these challenges, we create RepoCod, a Python code-generation benchmark containing complex tasks with realistic dependencies in large real-world projects, along with appropriate metrics for evaluating generated source code. It includes 980 whole-function generation tasks from 11 popular projects, 50.8% of which require repository-level context. RepoCod provides 314 developer-written test cases per instance for more reliable evaluation. We evaluate ten LLMs on RepoCod and find that none achieves more than 30% pass@1, indicating the need for stronger LLMs that can help developers in real-world software development. In addition, we find that retrieval-augmented generation achieves better results than using the target function's dependencies as context.
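For context, pass@1 here is the standard execution-based metric: a task counts as solved only if a generated function passes all of its test cases. The sketch below illustrates the unbiased pass@k estimator from Chen et al. (2021) with hypothetical per-task results; RepoCod's exact evaluation harness may differ.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples generated for a task
    c: number of samples that pass all test cases
    k: sample budget being scored
    """
    if n - c < k:
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable product
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))

# Hypothetical results: one sample per task, 1 = passed all tests.
results = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]  # 2 of 10 tasks solved
pass_at_1 = sum(pass_at_k(1, c, 1) for c in results) / len(results)
print(f"pass@1 = {pass_at_1:.1%}")  # 20.0%
```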
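The retrieval-augmented setup can be pictured as follows. This is a minimal sketch, not the paper's implementation: it assumes BM25 retrieval (via the rank_bm25 package) over repository code snippets, whitespace tokenization, and a hypothetical prompt format.

```python
# Sketch of retrieval-augmented prompt construction for repository-level
# code generation. Assumptions (not from the paper): BM25 via rank_bm25,
# whitespace tokenization, top-3 retrieved snippets as context.
from rank_bm25 import BM25Okapi

# Hypothetical corpus: code snippets drawn from the target repository.
repo_snippets = [
    "def load_config(path): ...",
    "class Matrix:\n    def transpose(self): ...",
    "def sparse_dot(a, b): ...",
]
bm25 = BM25Okapi([s.split() for s in repo_snippets])

# Query with the target function's signature and docstring.
query = "def sparse_dot(a, b): Compute the dot product of two sparse vectors."
top = bm25.get_top_n(query.split(), repo_snippets, n=3)

# Assemble the LLM prompt: retrieved context first, then the task.
prompt = "\n\n".join(top) + "\n\n# Complete the function:\n" + query
print(prompt)
```

By contrast, the dependency-based baseline mentioned in the abstract would replace the retrieved snippets with code the target function depends on, which the paper reports performs worse.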

Subject: ACL.2025 - Long Papers