Generative Flow Networks (GFlowNets) are a new family of probabilistic samplers for generating objects from an unnormalized reward distribution. They have emerged as a promising framework for learning stochastic policies that generate high-quality, diverse discrete objects with probability proportional to their rewards, going beyond traditional reward-maximizing reinforcement learning methods. However, existing GFlowNets often suffer from poor data efficiency due to the direct parameterization of edge flows or a dependence on backward policies that are challenging to specify or optimize, especially in high-dimensional action spaces. While recent work on GFlowNets has focused primarily on alternative loss functions, we instead explore enhanced flow representations from an architectural perspective. In this paper, we propose to factorize the conventional edge flows into two separate streams: a state flow and an edge-based allocation. By introducing an effective method to synergistically combine these two streams into a flow estimate, we develop Bifurcated Generative Flow Networks (BN), a practical implementation that improves learning efficiency. We conduct extensive experiments on standard benchmarks, and the results show that BN significantly improves learning efficiency and effectiveness compared to state-of-the-art baselines.
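The factorization described above can be sketched as follows. This is a minimal illustrative assumption of the general idea, not the paper's actual architecture or API: the edge flow F(s→s') is modeled as a scalar state-flow term F(s) multiplied by a normalized per-edge allocation P(s'|s), instead of parameterizing each edge flow directly. All function and variable names here are hypothetical.

```python
# Hypothetical sketch of a bifurcated flow factorization:
# edge flow F(s -> s') = F(s) * P(s' | s), where F(s) comes from a
# state-flow stream and P(. | s) from an edge-based allocation stream.
import numpy as np

def edge_flows(state_flow_logit, allocation_logits):
    """Combine a state-flow stream with an allocation stream.

    state_flow_logit : float, log of the total flow F(s) through state s
    allocation_logits: (A,) array, unnormalized scores over A outgoing edges
    returns          : (A,) array of edge flows F(s -> s') summing to F(s)
    """
    total_flow = np.exp(state_flow_logit)               # F(s) > 0
    z = np.exp(allocation_logits - allocation_logits.max())
    allocation = z / z.sum()                            # softmax: sums to 1
    return total_flow * allocation                      # F(s -> s') per edge

flows = edge_flows(np.log(6.0), np.array([1.0, 1.0, 1.0]))
# Equal allocation logits split F(s) = 6 evenly across three edges.
```

In practice the two streams would be separate network heads; splitting the prediction this way means the magnitude of the flow and its distribution over edges are learned separately, which is one plausible reading of how the factorization could aid data efficiency.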