Functional Alignment Can Mislead: Examining Model Stitching

Authors: Damian Smith, Harvey Mannering, Antonia Marcu

A common belief in the representational comparison literature is that if two representations can be functionally aligned, they must capture similar information. In this paper we focus on model stitching and show that models can be functionally aligned yet represent very different information. First, we show that discriminative models with very different biases can be stitched together. We then show that models trained to solve entirely different tasks on different data modalities, and even on clustered random noise, can be successfully stitched into MNIST- or ImageNet-trained models. We end with a discussion of the wider impact of our results on the community's current beliefs. Overall, our paper draws attention to the need to interpret the results of such functional similarity measures carefully and highlights the need for approaches that capture informational similarity.
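To make the stitching setup concrete, below is a minimal PyTorch sketch of the general technique, not the authors' exact experimental configuration: the front half of one pretrained model and the back half of another are frozen, and only a small "stitching layer" (here a 1x1 convolution) between them is trained. The toy architectures, shapes, and hyperparameters are illustrative assumptions.

```python
# Minimal model-stitching sketch (toy architectures, hypothetical setup).
import torch
import torch.nn as nn

def make_cnn():
    # A toy MNIST-style CNN split into a front half and a back half.
    front = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    )
    back = nn.Sequential(
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
    )
    return front, back

front_a, _ = make_cnn()  # stands in for a pretrained model A
_, back_b = make_cnn()   # stands in for a pretrained model B

# Freeze both halves; only the stitching layer is trainable.
for p in list(front_a.parameters()) + list(back_b.parameters()):
    p.requires_grad = False

stitch = nn.Conv2d(32, 32, kernel_size=1)  # the stitching layer
model = nn.Sequential(front_a, stitch, back_b)

opt = torch.optim.Adam(stitch.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy MNIST-shaped batch.
x = torch.randn(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```

High accuracy of the stitched model is conventionally read as evidence that A's representation is "functionally aligned" with what B expects as input; the paper's point is that this inference can mislead, since such alignment succeeds even when the two models represent very different information.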

Subject: ICML.2025 - Spotlight