#1 Sign-In to the Lottery: Reparameterizing Sparse Training

Authors: Advait Gadhikar, Tom Jacobs, Chao Zhou, Rebekka Burkholz

The performance gap between training sparse neural networks from scratch (pruning at initialization, PaI) and dense-to-sparse training presents a major roadblock for efficient deep learning. According to the Lottery Ticket Hypothesis, PaI hinges on finding a problem-specific parameter initialization. As we show, determining the correct parameter signs is sufficient to this end. Yet, correct signs remain elusive to PaI. To address this issue, we propose Sign-In, a dynamic reparameterization that provably induces sign flips. Such sign flips are complementary to the ones that dense-to-sparse training can accomplish, rendering Sign-In an orthogonal method. While our experiments and theory suggest that Sign-In improves PaI performance, they also carve out the main open challenge: closing the gap between PaI and dense-to-sparse training.
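
The abstract does not spell out the reparameterization itself, so the following is a minimal illustrative sketch in PyTorch, assuming (hypothetically) an elementwise product reparameterization w = a * b of each masked weight; the class name SignInLinear, the random mask, and the product form are assumptions chosen for illustration, not the authors' released method.

```python
# Illustrative sketch, not the paper's implementation: a masked linear layer
# whose effective weight is reparameterized as an elementwise product a * b.
# With this parameterization, gradient descent can drive one factor through
# zero, flipping the sign of the effective weight while the fixed PaI sparsity
# mask stays unchanged.
import torch
import torch.nn as nn


class SignInLinear(nn.Module):
    """Sparse linear layer with a hypothetical product reparameterization."""

    def __init__(self, in_features: int, out_features: int, sparsity: float = 0.9):
        super().__init__()
        # Two trainable factors; the effective weight is their elementwise product.
        self.a = nn.Parameter(torch.empty(out_features, in_features))
        self.b = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.a)
        nn.init.kaiming_uniform_(self.b)
        # Fixed pruning-at-initialization mask; random here purely for illustration.
        mask = (torch.rand(out_features, in_features) > sparsity).float()
        self.register_buffer("mask", mask)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The mask keeps sparsity fixed; the sign of a * b can still flip
        # during training whenever a or b crosses zero.
        w = self.mask * self.a * self.b
        return nn.functional.linear(x, w, self.bias)


if __name__ == "__main__":
    layer = SignInLinear(8, 4, sparsity=0.5)
    x = torch.randn(2, 8)
    print(layer(x).shape)  # torch.Size([2, 4])
```

The point of the sketch: under w = a * b, the gradient with respect to a is (dL/dw) * b, so a factor can be pushed through zero and flip the sign of w, which is plausibly the kind of mechanism the abstract refers to when it says the reparameterization "provably induces sign flips".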

Subject: NeurIPS.2025 - Poster