In the 1970s and 1980s, searches performed by L. Carter, C. Lam, L. Thiel, and S. Swiercz showed that projective planes of order ten with weight 16 codewords do not exist. These searches required highly specialized and optimized computer programs and about 2,000 hours of computing time on mainframe and supermini computers. In 2010, these searches were verified by D. Roy using an optimized C program and 16,000 hours on a cluster of desktop machines. We performed a verification of these searches by reducing the problem to the Boolean satisfiability problem (SAT). Our verification uses the cube-and-conquer SAT solving paradigm, symmetry breaking techniques using the computer algebra system Maple, and a result of Carter that there are ten nonisomorphic cases to check. Our searches completed in about 30 hours on a desktop machine and produced nonexistence proofs of about 1 terabyte in the DRAT (deletion resolution asymmetric tautology) format.
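As a rough illustration of the cube-and-conquer paradigm mentioned above (not the authors' actual pipeline, which relies on problem-specific encodings, Maple-based symmetry breaking, and industrial SAT solvers), the sketch below splits a CNF formula into cubes over a few chosen variables and solves or refutes each cube independently with a tiny DPLL solver. All names and the example formula are illustrative.

```python
from itertools import product

def dpll(clauses, assignment=frozenset()):
    """Very small DPLL SAT solver; clauses are sets of signed integer literals."""
    clauses = [set(c) for c in clauses]
    assignment = set(assignment)
    changed = True
    while changed:                                   # unit propagation
        changed = False
        for c in clauses:
            if any(l in assignment for l in c):
                continue                             # clause already satisfied
            unassigned = [l for l in c if l not in assignment and -l not in assignment]
            if not unassigned:
                return None                          # clause falsified
            if len(unassigned) == 1:
                assignment.add(unassigned[0])
                changed = True
    for c in clauses:                                # branch on an unsatisfied clause
        if any(l in assignment for l in c):
            continue
        lit = next(l for l in c if l not in assignment and -l not in assignment)
        for choice in (lit, -lit):
            result = dpll(clauses, frozenset(assignment | {choice}))
            if result is not None:
                return result
        return None
    return assignment                                # all clauses satisfied

def cube_and_conquer(clauses, split_vars):
    """Enumerate all sign patterns of split_vars (the cubes) and solve each independently."""
    for signs in product((1, -1), repeat=len(split_vars)):
        cube = frozenset(s * v for s, v in zip(signs, split_vars))
        result = dpll(clauses, cube)
        if result is not None:
            return result                            # satisfiable under this cube
    return None                                      # every cube refuted: formula is UNSAT

# Tiny example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
cnf = [{1, 2}, {-1, 3}, {-2, -3}]
print(cube_and_conquer(cnf, split_vars=[1]))
```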
Minimum dominating set (MinDS) is a canonical NP-hard combinatorial optimization problem with a wide range of applications. For large and hard instances, one must resort to heuristic approaches to obtain good solutions within reasonable time. This paper develops an efficient local search algorithm for MinDS based on two main ideas: a novel local search framework and a construction procedure with inference rules. Our algorithm, named FastDS, is evaluated on 4 standard benchmarks and 3 benchmarks of massive graphs. FastDS obtains the best performance on almost all benchmarks and finds better solutions than state-of-the-art algorithms on massive graphs.
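FastDS itself is not specified in enough detail here to reproduce, but the general greedy-construction-plus-local-search shape of such MinDS heuristics can be sketched as follows; the exchange moves and the toy graph are illustrative placeholders, not the paper's framework or inference rules.

```python
import random

def greedy_dominating_set(adj):
    """Greedy construction: repeatedly add the vertex covering the most undominated vertices."""
    undominated = set(adj)
    ds = set()
    while undominated:
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        ds.add(v)
        undominated -= {v} | adj[v]
    return ds

def is_dominating(adj, ds):
    return all(v in ds or adj[v] & ds for v in adj)

def local_search(adj, ds, iters=1000, seed=0):
    """Simple removal and 1-for-1 swap moves: drop a vertex and, if domination
    breaks, try repairing with one replacement (a sideways move to escape plateaus)."""
    rng = random.Random(seed)
    ds = set(ds)
    for _ in range(iters):
        v = rng.choice(sorted(ds))
        cand = ds - {v}
        if is_dominating(adj, cand):
            ds = cand                              # strict improvement
            continue
        u = rng.choice(sorted(set(adj) - cand))
        cand2 = cand | {u}
        if is_dominating(adj, cand2) and len(cand2) <= len(ds):
            ds = cand2                             # equal-size swap
    return ds

# Toy graph: a 6-cycle given as an adjacency dict
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
sol = local_search(adj, greedy_dominating_set(adj))
print(len(sol), sorted(sol))
```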
Evolution Strategies (ES) are a class of black-box optimization algorithms that have been widely applied to problems where the true gradient is unavailable, e.g., in reinforcement learning (RL). ES estimate the gradient of an objective function with respect to the parameters by randomly sampling search directions and evaluating parameter perturbations in these directions. However, the ES gradient estimator tends to have high variance in high-dimensional optimization, thus requiring a large number of samples and making ES inefficient. In this paper, we propose a new ES algorithm, SGES, which utilizes historical estimated gradients to construct a low-dimensional subspace for sampling search directions and adaptively adjusts the importance of this subspace. We prove that the variance of the SGES gradient estimator can be much smaller than that of vanilla ES, while its bias remains well bounded. Empirical results on benchmark black-box functions and a set of popular RL tasks exhibit the superior performance of SGES over state-of-the-art ES algorithms.
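A minimal numpy sketch of the underlying idea: a vanilla antithetic ES gradient estimator next to a hybrid estimator that draws some search directions from the span of recent gradient estimates. The mixing probability `alpha` and the sampling scheme are simplified assumptions, not the exact SGES estimator or its adaptive adjustment.

```python
import numpy as np

def vanilla_es_grad(f, x, sigma=0.1, n_samples=50, rng=None):
    """Antithetic ES gradient estimate using Gaussian directions in the full space."""
    rng = rng or np.random.default_rng(0)
    d = x.size
    eps = rng.standard_normal((n_samples, d))
    diffs = np.array([f(x + sigma * e) - f(x - sigma * e) for e in eps])
    return (eps * diffs[:, None]).sum(axis=0) / (2 * sigma * n_samples)

def subspace_es_grad(f, x, grad_history, alpha=0.5, sigma=0.1, n_samples=50, rng=None):
    """Hybrid estimator: with probability alpha, sample a direction from the span of
    recent gradient estimates; otherwise sample from the full space."""
    rng = rng or np.random.default_rng(0)
    d = x.size
    Q, _ = np.linalg.qr(np.array(grad_history).T)    # orthonormal basis of the subspace
    eps = []
    for _ in range(n_samples):
        if rng.random() < alpha and Q.shape[1] > 0:
            e = Q @ rng.standard_normal(Q.shape[1])  # direction inside the subspace
        else:
            e = rng.standard_normal(d)               # direction in the full space
        eps.append(e)
    eps = np.array(eps)
    diffs = np.array([f(x + sigma * e) - f(x - sigma * e) for e in eps])
    return (eps * diffs[:, None]).sum(axis=0) / (2 * sigma * n_samples)

# Toy usage on a quadratic; the true gradient at x is 2 * x.
f = lambda x: float(np.sum(x ** 2))
x = np.ones(100)
history = [vanilla_es_grad(f, x, rng=np.random.default_rng(i)) for i in range(3)]
print(np.linalg.norm(subspace_es_grad(f, x, history) - 2 * x))
```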
The p-center problem consists of choosing p centers from a set of candidates so as to minimize the maximum cost between any client and its assigned facility. In this paper, we transform the p-center problem into a series of set covering subproblems and propose a vertex weighting-based tabu search (VWTS) algorithm to solve them. The proposed VWTS algorithm integrates distinguishing features such as a vertex weighting technique and a tabu search strategy to help the search escape local optima. Computational experiments on 138 of the most commonly used benchmark instances show that VWTS is highly competitive with state-of-the-art methods in spite of its simplicity. Since the p-center problem is a well-known NP-hard problem that has been studied for over half a century, breaking the records on these classic datasets is a challenging task. Yet VWTS improves the best known results for 14 out of 54 large instances and matches the optimal results for all of the remaining 84 instances. In addition, the computational time taken by VWTS is much shorter than that of other algorithms in the literature.
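The reduction to set covering subproblems can be illustrated with a small exact sketch: binary search over the candidate radii and, for each radius, test feasibility of the induced covering instance. The brute-force feasibility check below is only a stand-in for the vertex-weighting tabu search the paper applies to large instances, and the toy data are illustrative.

```python
import itertools

def p_center_by_covering(dist, p):
    """Solve p-center on a small instance by binary searching the radius and checking
    each induced set-covering subproblem by brute force."""
    n = len(dist)
    radii = sorted({dist[i][j] for i in range(n) for j in range(n)})
    lo, hi = 0, len(radii) - 1
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        r = radii[mid]
        # Covering subproblem: can p centers cover every client within radius r?
        cover = [frozenset(j for j in range(n) if dist[i][j] <= r) for i in range(n)]
        feasible = any(
            set().union(*(cover[c] for c in centers)) == set(range(n))
            for centers in itertools.combinations(range(n), p)
        )
        if feasible:
            best, hi = r, mid - 1    # try a smaller radius
        else:
            lo = mid + 1
    return best

# Toy instance: 5 points on a line with p = 2 centers; the optimal radius is 1.
pts = [0, 1, 2, 5, 6]
dist = [[abs(a - b) for b in pts] for a in pts]
print(p_center_by_covering(dist, p=2))
```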
This work presents an exploration- and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games. These games are of interest as they can be seen as a testbed for language understanding, problem solving, and language generation by artificial agents. Moreover, they provide a learning setting in which these skills can be acquired through interactions with an environment rather than from fixed corpora. One aspect that makes these games particularly challenging for learning agents is their combinatorially large action space. Existing methods for solving text-based games are limited to games that are either very simple or have an action space restricted to a predetermined set of admissible actions. In this work, we propose to use the exploration approach of Go-Explore for solving text-based games. More specifically, in an initial exploration phase we extract trajectories with high rewards, after which we train a policy to solve the game by imitating these trajectories. Our experiments show that this approach outperforms existing solutions in solving text-based games and is more sample-efficient in terms of the number of interactions with the environment. Moreover, we show that the learned policy generalizes better than existing solutions to unseen games without any restriction on the action space.
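A simplified sketch of the "return, then explore" archive loop behind a Go-Explore-style exploration phase, run on a hypothetical deterministic toy text game; it is not the paper's agent and omits the imitation-learning phase that follows.

```python
import random

def go_explore_phase1(env, n_iters=200, seed=0):
    """Keep an archive of visited states; repeatedly pick an archived state, replay the
    actions that reached it (the toy env is deterministic), then explore randomly from
    there, archiving any new state or higher-return trajectory."""
    rng = random.Random(seed)
    archive = {env.reset(): ([], 0.0)}            # state -> (action trajectory, return)
    best = ([], 0.0)
    for _ in range(n_iters):
        state = rng.choice(sorted(archive))
        traj, ret = archive[state]
        env.reset()
        for a in traj:                            # "return" to the archived state
            env.step(a)
        for _ in range(5):                        # "explore": a short random rollout
            a = rng.choice(env.actions)
            state, r = env.step(a)
            traj, ret = traj + [a], ret + r
            if state not in archive or ret > archive[state][1]:
                archive[state] = (traj, ret)
            if ret > best[1]:
                best = (traj, ret)
    return best                                   # high-return trajectory to imitate later

class ToyTextGame:
    """Deterministic toy 'game': reach room 3 from room 0 by going east three times."""
    actions = ["go east", "go west", "look"]
    def reset(self):
        self.room = 0
        return f"room {self.room}"
    def step(self, action):
        if action == "go east" and self.room < 3:
            self.room += 1
        elif action == "go west" and self.room > 0:
            self.room -= 1
        reward = 1.0 if self.room == 3 else 0.0
        return f"room {self.room}", reward

traj, ret = go_explore_phase1(ToyTextGame())
print(ret, traj)
```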
Virtual machine (VM) provisioning is a common and critical problem in cloud computing. On industrial cloud platforms, a huge number of VMs are provisioned per day. Due to the complexity and resource constraints, provisioning needs to be carefully optimized so that cloud platforms utilize their resources effectively. Moreover, in practice, provisioning a VM from scratch requires a fairly long time, which degrades the customer experience. Hence, it is advisable to provision VMs in advance for upcoming demands. In this work, we formulate this practical scenario as the predictive VM provisioning (PreVMP) problem, where upcoming demands are unknown and need to be predicted in advance, and the VM provisioning plan is then optimized based on the predicted demands. Further, we propose Uncertainty-Aware Heuristic Search (UAHS) for solving the PreVMP problem. UAHS first models the prediction uncertainty and then utilizes it in optimization. Moreover, UAHS leverages Bayesian optimization to couple prediction and optimization, improving its practical performance. Extensive experiments show that UAHS performs much better than state-of-the-art competitors on two public datasets and an industrial dataset. UAHS has been successfully applied in Microsoft Azure and has brought practical benefits in real-world applications.
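A toy numpy sketch of the two-stage PreVMP shape: predict per-type demand with an uncertainty estimate, then build a plan that over-provisions by a multiple `kappa` of that uncertainty under a capacity budget. The predictor, the cost model, and the grid search over `kappa` (standing in for Bayesian optimization) are all illustrative assumptions, not UAHS itself.

```python
import numpy as np

def predict_demand(history):
    """Toy demand predictor with uncertainty: per-VM-type mean and std of recent demand."""
    history = np.asarray(history, dtype=float)       # shape: (days, vm_types)
    return history.mean(axis=0), history.std(axis=0)

def provision_plan(mean, std, kappa, capacity):
    """Provision mean + kappa * std per VM type, scaled down to respect total capacity."""
    plan = mean + kappa * std
    if plan.sum() > capacity:
        plan *= capacity / plan.sum()
    return np.floor(plan)

def evaluate(plan, actual, shortage_cost=5.0, idle_cost=1.0):
    """Cost of a plan against realized demand: unmet demand is costlier than idle VMs."""
    shortage = np.clip(actual - plan, 0, None).sum()
    idle = np.clip(plan - actual, 0, None).sum()
    return shortage_cost * shortage + idle_cost * idle

# Toy data: 7 days of demand for 3 VM types; tune kappa by a simple grid search.
rng = np.random.default_rng(0)
history = rng.poisson(lam=[20, 50, 10], size=(7, 3))
actual = rng.poisson(lam=[20, 50, 10])
mean, std = predict_demand(history)
best = min(
    (evaluate(provision_plan(mean, std, k, capacity=120), actual), k)
    for k in np.linspace(0.0, 2.0, 9)
)
print("best kappa:", best[1], "cost:", best[0])
```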
The minimum connected dominating set (MCDS) problem is an important extension of the minimum dominating set problem, with wide applications, especially in wireless networks. Despite its practical importance, there are few works on solving MCDS for massive graphs, mainly due to the complexity of maintaining connectivity. In this paper, we propose two novel ideas and develop a new local search algorithm for MCDS called NuCDS. First, a hybrid dynamic connectivity maintenance method is designed to switch alternately between a novel fast connectivity maintenance method based on a spanning tree and its previous counterpart. Second, we define a new vertex property called safety to make the algorithm more careful when selecting vertices. Experiments show that NuCDS significantly outperforms state-of-the-art MCDS algorithms on both massive graphs and classic benchmarks.
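To make the connectivity-maintenance burden concrete, the sketch below rechecks connectivity of the induced subgraph from scratch with a BFS spanning tree whenever a vertex removal is considered; this naive recomputation is exactly the cost that a dynamic maintenance scheme such as the one in NuCDS aims to avoid. The toy graph and helper names are illustrative.

```python
from collections import deque

def induced_connected(adj, subset):
    """BFS over the subgraph induced by `subset`; the BFS tree is a spanning tree of it."""
    subset = set(subset)
    if not subset:
        return True
    start = next(iter(subset))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u] & subset:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == subset

def is_connected_dominating_set(adj, ds):
    ds = set(ds)
    dominated = all(v in ds or adj[v] & ds for v in adj)
    return dominated and induced_connected(adj, ds)

def removable_vertices(adj, ds):
    """Vertices whose removal keeps the set a connected dominating set.
    Connectivity is recomputed from scratch for each candidate removal."""
    ds = set(ds)
    return [v for v in ds if is_connected_dominating_set(adj, ds - {v})]

# Toy graph: a path 0-1-2-3-4; {1, 2, 3} is a connected dominating set.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(is_connected_dominating_set(adj, {1, 2, 3}))   # True
print(removable_vertices(adj, {1, 2, 3}))            # []: every vertex is needed
```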