tNGdLEL4R0@OpenReview

Total: 1

#1 Scaling Trends in Language Model Robustness

Authors: Nikolaus Howe, Ian McKenzie, Oskar Hollinsworth, Michał Zając, Tom Tseng, Aaron Tucker, Pierre-Luc Bacon, Adam Gleave

Increasing model size has unlocked a dazzling array of capabilities in language models. At the same time, even frontier models remain vulnerable to jailbreaks and prompt injections, despite concerted efforts to make them robust. As both attackers and defenders gain access to more compute, and as models become larger, what will be the effect on robustness? We argue that answering this question requires a *scaling lens*, which we adopt in an extensive study of language model robustness across several classification tasks, model families, and adversarial attacks. We find that in the absence of explicit safety training, larger models are not consistently more robust; however, scale improves sample efficiency in adversarial training, though it worsens compute efficiency. Further, we find that increasing attack compute smoothly improves attack success rate against both undefended and adversarially trained models. Finally, after exploring robustness transfer across attacks and threat models, we combine attack and defense scaling rates to study the offense-defense balance. We find that while attack scaling outpaces adversarial training across all models studied, larger adversarially trained models might give defense the advantage in the long run. These results underscore the utility of the scaling lens and provide a paradigm for evaluating future attacks and defenses on frontier models. Code for this project is available at https://github.com/AlignmentResearch/scaling-llm-robustness-paper.

Subject: ICML.2025 - Spotlight