Bayesian Optimization (BO) is a widely used method for optimizing expensive black-box functions, relying on probabilistic surrogate models such as Gaussian Processes (GPs). The quality of the surrogate model is crucial for good optimization performance, especially in the few-shot setting, where only a small number of batches of points can be evaluated. In this setting, the initialization plays a critical role in shaping the surrogate's predictive quality and guiding subsequent optimization. Despite this, practitioners typically rely on (quasi-)random designs to cover the input space. Such approaches neglect two key factors: (a) random designs may not be space-filling, and (b) efficient hyperparameter learning during initialization is essential for high-quality prediction, yet may conflict with space-filling designs. To address these limitations, we propose Hyperparameter-Informed Predictive Exploration (HIPE), a novel acquisition strategy that balances space-filling exploration with hyperparameter learning using information-theoretic principles. We derive a closed-form expression for HIPE in the GP setting and demonstrate its effectiveness through extensive experiments in active learning and few-shot BO. Our results show that HIPE outperforms standard initialization strategies in terms of predictive accuracy, hyperparameter identification, and optimization performance, particularly in the large-batch, few-shot settings relevant to many real-world BO applications.
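To make the trade-off described above concrete, the sketch below illustrates, under simplifying assumptions, how an initialization criterion might combine a space-filling term (GP predictive entropy) with a proxy for hyperparameter informativeness (sensitivity of the predictive variance to the lengthscale). This is not the closed-form HIPE acquisition derived in the paper; the function names (`initialization_score`, `gp_predictive_var`), the finite-difference sensitivity proxy, and the `weight` parameter are illustrative assumptions only.

```python
# Conceptual sketch only: NOT the paper's closed-form HIPE criterion.
# It merely illustrates trading off space-filling predictive uncertainty
# against sensitivity to GP hyperparameters during initialization.
import numpy as np

def rbf_kernel(X1, X2, lengthscale, variance=1.0):
    """Squared-exponential kernel k(x, x') = variance * exp(-||x - x'||^2 / (2 * lengthscale^2))."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predictive_var(X_train, X_cand, lengthscale, noise=1e-4):
    """Posterior predictive variance of a zero-mean GP at candidate points."""
    K = rbf_kernel(X_train, X_train, lengthscale) + noise * np.eye(len(X_train))
    k_star = rbf_kernel(X_train, X_cand, lengthscale)
    k_ss = rbf_kernel(X_cand, X_cand, lengthscale)
    v = np.linalg.solve(K, k_star)
    return np.diag(k_ss - k_star.T @ v)

def initialization_score(X_train, X_cand, lengthscale, weight=0.5, eps=1e-2):
    """Score candidates by Gaussian predictive entropy (space-filling exploration)
    plus a finite-difference proxy for how strongly the predictive variance
    depends on the lengthscale (a crude stand-in for hyperparameter informativeness)."""
    var = gp_predictive_var(X_train, X_cand, lengthscale)
    var_perturbed = gp_predictive_var(X_train, X_cand, lengthscale * (1 + eps))
    entropy = 0.5 * np.log(2 * np.pi * np.e * var)          # entropy of Gaussian predictive
    hyper_sensitivity = np.abs(var_perturbed - var) / eps   # sensitivity to lengthscale
    return entropy + weight * hyper_sensitivity

# Usage: greedily pick the next initialization point from a random candidate pool.
rng = np.random.default_rng(0)
X_train = rng.uniform(size=(5, 2))    # points already in the initial design
X_cand = rng.uniform(size=(200, 2))   # candidate pool
scores = initialization_score(X_train, X_cand, lengthscale=0.3)
x_next = X_cand[np.argmax(scores)]
```

In this toy version, the entropy term alone would reproduce a purely space-filling (uncertainty-sampling) design; the sensitivity term nudges selection toward points that are also informative about the kernel lengthscale, which is the kind of balance the abstract attributes to HIPE.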