IJCAI.2021 - Heuristic Search and Game Playing

| Total: 6

#1 Faster Guarantees of Evolutionary Algorithms for Maximization of Monotone Submodular Functions

Author: Victoria G. Crawford

In this paper, the monotone submodular maximization problem (SM) is studied. SM is to find a subset of size kappa from a universe of size n that maximizes a monotone submodular objective function f. We show, using a novel analysis, that the Pareto optimization algorithm achieves a worst-case ratio of (1 − epsilon)(1 − 1/e) in expectation for every cardinality constraint kappa < P, where P ≤ n + 1 is an input, in O(nP ln(1/epsilon)) queries of f. In addition, a novel evolutionary algorithm, called the biased Pareto optimization algorithm, is proposed that achieves a worst-case ratio of (1 − epsilon)(1 − 1/e − epsilon) in expectation for every cardinality constraint kappa < P in O(n ln(P) ln(1/epsilon)) queries of f. Further, the biased Pareto optimization algorithm can be modified in order to achieve a worst-case ratio of (1 − epsilon)(1 − 1/e − epsilon) in expectation for a given cardinality constraint kappa in O(n ln(1/epsilon)) queries of f. An empirical evaluation corroborates our theoretical analysis of the algorithms, as the algorithms exceed the stochastic greedy solution value at roughly the number of queries one would expect based upon our analysis.
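
The Pareto optimization algorithm referenced here follows the usual GSEMO-style scheme of evolving a front of non-dominated subsets with respect to (objective value, subset size). The sketch below is a minimal illustration of that scheme under stated assumptions; the biased variant, the query-count bookkeeping, and the exact acceptance details analysed in the paper are not reproduced, and all names are illustrative.

```python
import random

def pareto_optimization(f, n, P, iterations):
    """Minimal GSEMO-style Pareto optimization for monotone submodular
    maximization: keep a front of mutually non-dominated subsets of size < P,
    mutate a random member each iteration, and keep the child if no member
    dominates it.  f maps a frozenset of elements from {0, ..., n-1} to a float.
    """
    def dominates(a, b):
        # (value, size): better means higher value and/or smaller size.
        return a[0] >= b[0] and a[1] <= b[1] and a != b

    empty = frozenset()
    front = {empty: (f(empty), 0)}                 # subset -> (value, size)
    for _ in range(iterations):
        parent = random.choice(list(front))
        child = set(parent)
        for v in range(n):                         # flip each element w.p. 1/n
            if random.random() < 1.0 / n:
                child.symmetric_difference_update({v})
        child = frozenset(child)
        if len(child) >= P:                        # respect the size bound
            continue
        score = (f(child), len(child))
        if any(dominates(s, score) for s in front.values()):
            continue                               # child is dominated, discard
        front = {x: s for x, s in front.items() if not dominates(score, s)}
        front[child] = score
    return front                                   # best subset per size < P
```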

#2 DACBench: A Benchmark Library for Dynamic Algorithm Configuration

Authors: Theresa Eimer ; André Biedenkapp ; Maximilian Reimer ; Steven Adriaensen ; Frank Hutter ; Marius Lindauer

Dynamic Algorithm Configuration (DAC) aims to dynamically control a target algorithm's hyperparameters in order to improve its performance. Several theoretical and empirical results have demonstrated the benefits of dynamically controlling hyperparameters in domains like evolutionary computation, AI Planning or deep learning. Replicating these results, as well as studying new methods for DAC, however, is difficult since existing benchmarks are often specialized and do not share a common interface. To facilitate benchmarking and thus research on DAC, we propose DACBench, a benchmark library that seeks to collect and standardize existing DAC benchmarks from different AI domains, as well as provide a template for new ones. For the design of DACBench, we focused on important desiderata, such as (i) flexibility, (ii) reproducibility, (iii) extensibility and (iv) automatic documentation and visualization. To show the potential, broad applicability and challenges of DAC, we explore how a set of six initial benchmarks compares along several dimensions of difficulty.
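
DAC benchmarks of this kind are typically exposed through a gym-style reset/step interface in which each step chooses the target algorithm's hyperparameters for the next phase of its run. The sketch below shows only that generic control loop; `make_benchmark_env`, the benchmark name, and the random policy are hypothetical placeholders, not DACBench's actual API.

```python
def run_episode(env, policy):
    """Roll out one target-algorithm run, choosing a hyperparameter
    configuration (the action) at every step of the run."""
    state = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action = policy(state)                # dynamic configuration decision
        state, reward, done, info = env.step(action)
        total_reward += reward
    return total_reward

# Hypothetical usage with a placeholder constructor and a trivial policy:
# env = make_benchmark_env("sigmoid")
# print(run_episode(env, lambda state: env.action_space.sample()))
```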

#3 Bounded-cost Search Using Estimates of Uncertainty

Authors: Maximilian Fickert ; Tianyi Gu ; Wheeler Ruml

Many planning problems are too hard to solve optimally. In bounded-cost search, one attempts to find, as quickly as possible, a plan that costs no more than a user-provided absolute cost bound. Several algorithms have been previously proposed for this setting, including Potential Search (PTS) and Bounded-cost Explicit Estimation Search (BEES). BEES attempts to improve on PTS by predicting whether nodes will lead to plans within the cost bound or not. This paper introduces a relatively simple algorithm, Expected Effort Search (XES), which uses not just point estimates but belief distributions in order to estimate the probability that a node will lead to a plan within the bound. XES's expansion order minimizes expected search time in a simplified formal model. Experimental results on standard planning and search benchmarks show that it consistently exhibits strong performance, outperforming both PTS and BEES. We also derive improved variants of BEES that can exploit belief distributions. These new methods advance the recent trend of taking advantage of uncertainty estimates in deterministic single-agent search.
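
One way to realize the expansion order described above is to rank open nodes by estimated remaining expansions divided by the estimated probability that the node still leads to a plan within the cost bound. The sketch below is a hedged illustration of that ordering only; how the paper actually derives `d_hat` and `p_within_bound` from belief distributions is not reproduced, and the node interface is assumed.

```python
import heapq
from itertools import count

def bounded_cost_search(start, expand, is_goal, d_hat, p_within_bound):
    """Best-first search that expands the node with the smallest expected
    effort, estimated as d_hat(node) / p_within_bound(node).  Nodes must be
    hashable; `expand` is assumed to prune children whose cost already
    exceeds the user-provided bound."""
    tie = count()                                  # breaks ties in the heap
    def expected_effort(node):
        p = max(p_within_bound(node), 1e-9)        # guard against p == 0
        return d_hat(node) / p
    frontier = [(expected_effort(start), next(tie), start)]
    closed = set()
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if is_goal(node):
            return node
        if node in closed:
            continue
        closed.add(node)
        for child in expand(node):
            heapq.heappush(frontier, (expected_effort(child), next(tie), child))
    return None                                    # no plan within the bound
```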

#4 A Runtime Analysis of Typical Decomposition Approaches in MOEA/D Framework for Many-objective Optimization Problems

Authors: Zhengxin Huang ; Yuren Zhou ; Chuan Luo ; Qingwei Lin

The decomposition approach is an important component of the multi-objective evolutionary algorithm based on decomposition (MOEA/D), which is a popular method for handling many-objective optimization problems (MaOPs). This paper presents a theoretical analysis of the convergence ability of the typical weighted sum (WS), Tchebycheff (TCH) and penalty-based boundary intersection (PBI) decomposition approaches in a basic MOEA/D for solving two benchmark MaOPs. The results show that using WS, the algorithm can always find an optimal solution for any subproblem in polynomial expected runtime. In contrast, the algorithm needs at least exponential expected runtime for some subproblems if using TCH or PBI. Moreover, our analyses reveal an obvious shortcoming of WS: the optimal solutions of different subproblems are likely to correspond to the same solution. In addition, the analysis indicates that when using PBI, a small value of the penalty parameter is a good choice for converging faster to the Pareto front, but it may sacrifice diversity. This study reveals the optimization behavior of three typical decomposition approaches within the well-known MOEA/D framework for solving MaOPs.
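
For reference, the three decomposition approaches named here are usually defined as scalarizations of the objective vector F(x) with a weight vector lambda and the ideal point z*. The sketch below follows the standard MOEA/D formulations (for minimization); it is only a reference implementation of the scalarizations and does not reproduce the paper's benchmark problems or runtime analysis.

```python
import numpy as np

def weighted_sum(f, lam):
    """WS: weighted sum of the objective values."""
    return float(np.dot(lam, f))

def tchebycheff(f, lam, z_star):
    """TCH: largest weighted deviation from the ideal point z*."""
    return float(np.max(lam * np.abs(f - z_star)))

def pbi(f, lam, z_star, theta=5.0):
    """PBI: distance d1 along the weight direction plus theta times the
    perpendicular distance d2; theta is the penalty parameter discussed
    in the abstract."""
    unit = lam / np.linalg.norm(lam)
    diff = f - z_star
    d1 = abs(float(np.dot(diff, unit)))
    d2 = float(np.linalg.norm(diff - d1 * unit))
    return d1 + theta * d2

# Example: two objectives, one subproblem weight vector.
f = np.array([0.6, 0.2])
lam = np.array([0.5, 0.5])
z_star = np.array([0.0, 0.0])
print(weighted_sum(f, lam), tchebycheff(f, lam, z_star), pbi(f, lam, z_star))
```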

#5 A New Upper Bound Based on Vertex Partitioning for the Maximum K-plex Problem

Authors: Hua Jiang ; Dongming Zhu ; Zhichao Xie ; Shaowen Yao ; Zhang-Hua Fu

Given an undirected graph, the Maximum k-plex Problem (MKP) is to find a largest induced subgraph in which each vertex has at most k−1 non-adjacent vertices. The problem arises in social network analysis and has found applications in many important areas employing graph-based data mining. Existing exact algorithms usually implement a branch-and-bound approach that requires a tight upper bound to reduce the search space. In this paper, we propose a new upper bound for MKP, based on a partitioning of the candidate vertex set with respect to the solution under construction. We implement a new branch-and-bound algorithm that employs this upper bound to reduce the number of branches. Experimental results show that the upper bound is very effective in reducing the search space. The new algorithm significantly outperforms state-of-the-art algorithms on real-world massive graphs, DIMACS graphs and random graphs.
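
The partition-based upper bound itself is specific to the paper and is not reproduced here; the sketch below only illustrates the defining k-plex property that any branch-and-bound search for MKP has to maintain, with a small hypothetical graph as a usage example.

```python
def is_k_plex(adj, subset, k):
    """Check the k-plex property: within the induced subgraph, every vertex
    may be non-adjacent to at most k-1 of the other vertices.
    `adj` maps each vertex to its set of neighbours in the undirected graph."""
    nodes = set(subset)
    for v in nodes:
        non_neighbours = len(nodes - adj[v] - {v})
        if non_neighbours > k - 1:
            return False
    return True

# Small example: a triangle {0, 1, 2} with a pendant vertex 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
assert is_k_plex(adj, {0, 1, 2}, 1)         # a clique is a 1-plex
assert is_k_plex(adj, {0, 1, 2, 3}, 3)      # vertex 3 misses two other vertices
assert not is_k_plex(adj, {0, 1, 2, 3}, 2)  # ...which is too many for k = 2
```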

#6 Choosing the Right Algorithm With Hints From Complexity Theory

Authors: Shouda Wang ; Weijie Zheng ; Benjamin Doerr

Choosing a suitable algorithm from the myriad of different search heuristics is difficult when faced with a novel optimization problem. In this work, we argue that the purely academic question of what could be the best possible algorithm in a certain broad class of black-box optimizers can give fruitful indications of the direction in which to search for good established optimization heuristics. We demonstrate this approach on the recently proposed DLB benchmark, for which the only known results are O(n^3) runtimes for several classic evolutionary algorithms and an O(n^2 log n) runtime for an estimation-of-distribution algorithm. Our finding that the unary unbiased black-box complexity is only O(n^2) suggests the Metropolis algorithm as an interesting candidate, and we prove that it solves the DLB problem in quadratic time. Since we also prove that better runtimes cannot be obtained in the class of unary unbiased algorithms, we shift our attention to algorithms that use the information of more parents to generate new solutions. An artificial algorithm of this type having an O(n log n) runtime leads to the result that the significance-based compact genetic algorithm (sig-cGA) can also solve the DLB problem in time O(n log n). Our experiments show remarkably good performance of the Metropolis algorithm, which is clearly the best of all algorithms considered for reasonable problem sizes.
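
In a common formulation, the Metropolis algorithm for pseudo-Boolean maximization flips one uniformly random bit, always accepts improvements, and accepts a fitness loss of delta with probability alpha^(-delta) for a constant alpha > 1 (equivalently e^(-delta/T) for a fixed temperature T). The sketch below illustrates that scheme; the DLB benchmark and the paper's exact parameter choices are not reproduced, and OneMax is used only as a stand-in objective.

```python
import random

def metropolis(fitness, n, alpha=2.0, iterations=100_000):
    """Metropolis algorithm on bit strings (maximization): single-bit
    mutations, improvements always accepted, a fitness loss of delta
    accepted with probability alpha**(-delta)."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(iterations):
        i = random.randrange(n)
        x[i] ^= 1                              # flip one uniformly random bit
        fy = fitness(x)
        delta = fx - fy                        # > 0 means the move is worse
        if delta <= 0 or random.random() < alpha ** (-delta):
            fx = fy                            # accept the move
        else:
            x[i] ^= 1                          # reject: undo the flip
    return x, fx

# Stand-in objective (OneMax); the paper's analysis concerns the DLB function.
best, value = metropolis(lambda bits: sum(bits), n=20)
print(value)
```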