Spectral graph learning builds on two foundations: the graph Fourier basis as its theoretical cornerstone, and polynomial approximation as the means to practical implementation. While this framework has led to numerous successful designs, we argue that its effectiveness may stem from mechanisms different from its theoretical foundations. In this paper, we identify two fundamental issues that challenge the current understanding: (1) the graph Fourier basis $\mathbf{U}$ (the eigenvectors of the normalized graph Laplacian) faces too many open questions to truly serve its intended role, particularly in preserving the semantic properties of Fourier analysis; (2) the limitations preventing expressive filters are not merely practical constraints but fundamental barriers that naturally protect stability and generalization. Importantly, the two issues are entangled, and the second obscures the first: the natural avoidance of complex filters has kept us from fully confronting the questions about $\mathbf{U}$'s role as a Fourier basis. This observation leads to our position: the effectiveness of spectral GNNs relies less on the graph Fourier basis than originally conceived, or, in other words, **spectral GNNs might not be so spectral**. The position suggests at least two research directions: incorporating a more semantically meaningful graph dictionary other than $\mathbf{U}$, and re-examining the theoretical role of the introduced polynomial techniques.
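As background, the pipeline the abstract refers to can be sketched as follows: a spectral filter defined via the graph Fourier basis $\mathbf{U}$ is, for polynomial filters, exactly computable from powers of the Laplacian without any eigendecomposition. The toy graph, signal, and filter coefficients below are illustrative assumptions, not from the paper.

```python
import numpy as np

# Toy undirected graph (a 4-node path), given by its adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}.
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt

# Graph Fourier basis U: eigenvectors of L (eigenvalues act as "frequencies").
lam, U = np.linalg.eigh(L)

# A polynomial filter g(L) = c0*I + c1*L + c2*L^2 applied to a graph signal x,
# computed purely via matrix powers of L (no eigendecomposition needed) ...
c = [1.0, -0.5, 0.25]          # illustrative filter coefficients
x = np.array([1.0, 0.0, -1.0, 2.0])
y_poly = sum(ci * np.linalg.matrix_power(L, k) @ x for k, ci in enumerate(c))

# ... coincides with filtering in the spectral domain: U diag(g(lam)) U^T x.
g_lam = sum(ci * lam**k for k, ci in enumerate(c))
y_spec = U @ (g_lam * (U.T @ x))

assert np.allclose(y_poly, y_spec)
```

This equivalence is why practical spectral GNNs never materialize $\mathbf{U}$; the abstract's point is that this very shortcut has allowed the field to sidestep scrutiny of $\mathbf{U}$'s semantic role.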