Semantic hashing has become a crucial component of fast similarity search in many large-scale information retrieval systems, in particular for text data. Variational auto-encoders (VAEs) with binary latent variables as hashing codes provide state-of-the-art precision for document retrieval. We propose a pairwise loss function for a VAE with discrete latent variables that rewards within-class similarity and between-class dissimilarity for supervised hashing. Instead of training with existing biased gradient estimators, we adopt an unbiased, low-variance gradient estimator that evaluates the non-differentiable loss function over two correlated sets of binary hashing codes to control the gradient variance. The resulting hashing function achieves superior performance compared to state-of-the-art methods, as demonstrated by our comprehensive experiments.
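As a minimal sketch of the kind of estimator described above, the snippet below implements an ARM-style unbiased gradient estimate for Bernoulli codes, in which the non-differentiable loss is evaluated on two antithetically coupled binary code vectors drawn from shared uniform noise. The function names and the toy loss are illustrative assumptions, not the paper's implementation or its pairwise loss.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def arm_gradient(loss_fn, phi, rng):
    """One-sample ARM-style estimate of d/dphi E_{b ~ Bernoulli(sigmoid(phi))}[loss_fn(b)].

    The loss is evaluated on two correlated binary codes (b1, b2) built from
    the same uniform noise u; the shared noise keeps the estimator unbiased
    while cancelling much of its variance.
    """
    u = rng.uniform(size=phi.shape)            # shared uniform noise
    b1 = (u > sigmoid(-phi)).astype(float)     # first binary code
    b2 = (u < sigmoid(phi)).astype(float)      # antithetic binary code
    return (loss_fn(b1) - loss_fn(b2)) * (u - 0.5)

# Toy usage with a stand-in (hypothetical) loss on 16-bit codes.
rng = np.random.default_rng(0)
phi = np.zeros(16)                             # logits of the Bernoulli code distribution
toy_loss = lambda b: np.sum((b - 1.0) ** 2)    # placeholder for a pairwise hashing loss
grad = arm_gradient(toy_loss, phi, rng)
```

In practice a single-sample estimate of this form would be averaged over mini-batches and fed to a standard optimizer; the key point illustrated is that both loss evaluations reuse the same noise draw, which is what controls the gradient variance without introducing bias.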