
#1 Position: AI Should Not Be An Imitation Game: Centaur Evaluations

Authors: Andreas Haupt, Erik Brynjolfsson

Benchmarks and evaluations are central to machine learning methodology and direct research in the field. Current evaluations commonly test systems in the absence of humans. This position paper argues that the machine learning community should increasingly use _centaur evaluations_, in which humans and AI jointly solve tasks. Centaur evaluations refocus machine learning development toward human augmentation instead of human replacement; they allow for direct evaluation of human-centered desiderata, such as interpretability and helpfulness; and they can be more challenging and realistic than existing evaluations. By shifting the focus from _automation_ toward _collaboration_ between humans and AI, centaur evaluations can drive progress toward more effective and human-augmenting machine learning systems.

Subject: ICML.2025 - Poster