rcjgzbYvhF@OpenReview


#1 Highly Compressed Tokenizer Can Generate Without Training

Authors: Lukas Lao Beyer, Tianhong Li, Xinlei Chen, Sertac Karaman, Kaiming He

Commonly used image tokenizers produce a 2D grid of spatially arranged tokens. In contrast, so-called *1D* image tokenizers represent images as highly compressed one-dimensional sequences of as few as 32 discrete tokens. We find that the high degree of compression achieved by a vector-quantized 1D tokenizer enables image editing and generation through heuristic manipulation of tokens: even very crude operations, such as copying and replacing tokens between the latent representations of two images, achieve fine-grained edits by transferring appearance and semantic attributes. Motivated by the expressivity of the 1D tokenizer's latent space, we build an image generation pipeline based on gradient-based test-time optimization of tokens with plug-and-play loss functions such as reconstruction error or CLIP similarity. We demonstrate the approach on inpainting and text-guided image editing, and show that it generates diverse and realistic samples without training any generative model.
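
The two ideas in the abstract lend themselves to a short sketch. Below is a minimal, hypothetical PyTorch illustration of token copy-and-replace editing and of test-time optimization of a 1D latent sequence against a plug-and-play loss. The `tokenizer`, `latent_dim`, `decode`, and `loss_fn` interfaces are assumptions for illustration, not the authors' released API, and the sketch assumes gradients can flow to the latents (e.g. by optimizing the pre-quantization embeddings or using a straight-through estimator), a detail it glosses over.

```python
import torch

# Minimal sketch of the abstract's two techniques. All interfaces here
# (`tokenizer`, `tokenizer.latent_dim`, `tokenizer.decode`, `loss_fn`)
# are hypothetical stand-ins, not the paper's released code.

def edit_by_token_copy(tokens_src: torch.Tensor,
                       tokens_tgt: torch.Tensor,
                       idx: list[int]) -> torch.Tensor:
    """Crude token-level edit: copy selected positions of a source image's
    1D token sequence into a target's, transferring appearance and
    semantic attributes between the images."""
    edited = tokens_tgt.clone()
    edited[:, idx] = tokens_src[:, idx]
    return edited

def generate_by_optimization(tokenizer, loss_fn,
                             num_tokens: int = 32,
                             steps: int = 200,
                             lr: float = 0.1) -> torch.Tensor:
    """Test-time generation: treat a length-`num_tokens` latent sequence as
    free parameters and descend a plug-and-play loss, e.g. L2 reconstruction
    on unmasked pixels for inpainting, or negative CLIP similarity to a text
    prompt. Assumes `tokenizer.decode` is differentiable w.r.t. the latents."""
    latents = torch.randn(1, num_tokens, tokenizer.latent_dim,
                          requires_grad=True)
    opt = torch.optim.Adam([latents], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        image = tokenizer.decode(latents)  # latents -> image
        loss = loss_fn(image)              # plug-and-play objective
        loss.backward()
        opt.step()
    with torch.no_grad():
        return tokenizer.decode(latents)
```

Because the loss function is supplied by the caller, swapping inpainting for text-guided editing only requires changing `loss_fn`; no generative model is trained in either case, which is the point the abstract makes.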

Subject: ICML.2025 - Poster