IJCAI.2017 - Constraints and Satisfiability

| Total: 31

#1 Generating Hard Random Boolean Formulas and Disjunctive Logic Programs [PDF] [Copy] [Kimi] [REL]

Authors: Giovanni Amendola ; Francesco Ricca ; Miroslaw Truszczynski

We propose a model of random quantified Boolean formulas and their natural random disjunctive logic program counterparts. The model extends the standard models for random SAT and 2QBF. We provide theoretical bounds for the phase transition region in the new model, and show experimentally the presence of the easy-hard-easy pattern. Importantly, we show that the model is well suited for assessing solvers tuned to real-world instances. Moreover, to the best of our knowledge, our model and results on random disjunctive logic programs are the first of their kind.
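
For readers unfamiliar with the baseline being extended, here is a minimal sketch of the standard fixed-clause-length random k-SAT model; the parameters and the well-known ~4.26 clause-to-variable ratio for 3-SAT are standard background, not taken from this paper.

```python
# Standard fixed-clause-length random k-SAT: each clause picks k distinct
# variables and flips each polarity with probability 1/2.
import random

def random_ksat(n_vars, n_clauses, k=3, seed=None):
    rng = random.Random(seed)
    clauses = []
    for _ in range(n_clauses):
        vs = rng.sample(range(1, n_vars + 1), k)
        clauses.append([v if rng.random() < 0.5 else -v for v in vs])
    return clauses

# The easy-hard-easy pattern is typically probed by sweeping the
# clause-to-variable ratio around the 3-SAT phase transition (~4.26).
formula = random_ksat(n_vars=200, n_clauses=int(4.26 * 200), seed=0)
```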

#2 Stochastic Constraint Programming with And-Or Branch-and-Bound [PDF] [Copy] [Kimi] [REL]

Authors: Behrouz Babaki ; Tias Guns ; Luc de Raedt

Complex multi-stage decision-making problems often involve uncertainty, for example, regarding demand or processing times. Stochastic constraint programming was proposed as a way to formulate and solve such decision problems, involving arbitrary constraints over both decision and random variables. What stochastic constraint programming still lacks is support for the factorized probabilistic models that are popular in the graphical model community. We show how a state-of-the-art probabilistic inference engine can be integrated into standard constraint solvers. The resulting approach searches the And-Or search tree directly, and we investigate tight bounds on the expected utility objective. This significantly improves search efficiency and outperforms scenario-based methods that ground out the possible worlds.
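
As a rough illustration of the And-Or scheme described above (not the authors' algorithm), the toy sketch below maximizes at decision (OR) nodes, takes expectations at random (AND) nodes, and prunes with the optimistic bound "expectation so far + remaining probability mass * U_MAX". The stage list, utility function, and bound U_MAX are all invented for the example; the paper's bounds are tighter.

```python
U_MAX = 10.0                       # assumed global upper bound on utility

def utility(assign):               # toy utility on a full assignment
    return 10.0 if assign[0] == assign[1] else 0.0

def search(stages, assign, incumbent=float("-inf")):
    if not stages:
        return utility(assign)
    kind, dom = stages[0]
    if kind == "dec":              # OR node: maximize over decisions
        best = incumbent
        for v in dom:
            best = max(best, search(stages[1:], assign + (v,), best))
        return best
    exp, rest = 0.0, sum(p for _, p in dom)   # AND node: expectation
    for v, p in dom:
        if exp + rest * U_MAX <= incumbent:
            return incumbent       # cannot beat the incumbent: cut off
        exp += p * search(stages[1:], assign + (v,))
        rest -= p
    return exp

stages = [("dec", [0, 1]), ("rand", [(0, 0.3), (1, 0.7)])]
print(search(stages, ()))          # -> 7.0: decide 1, matched with prob 0.7
```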

#3 Scalable Constraint-based Virtual Data Center Allocation [PDF] [Copy] [Kimi] [REL]

Authors: Sam Bayless ; Nodir Kodirov ; Ivan Beschastnikh ; Holger H. Hoos ; Alan J. Hu

Constraint-based techniques can solve challenging problems arising from highly diverse applications. This paper considers the problem of virtual data center (VDC) allocation, an important, emerging challenge for modern data center operators. To solve this problem, we introduce NETSOLVER, which is based on the general-purpose constraint solver MONOSAT. NETSOLVER represents a major improvement over existing approaches: it is sound, complete, and scalable, providing support for end-to-end, multi-path bandwidth guarantees across all the layers of hosting infrastructure, from servers to top-of-rack switches to aggregation switches to access routers. NETSOLVER scales to realistic data center sizes and VDC topologies, typically requiring just seconds to allocate VDCs of 5–15 virtual machines to physical data centers with 1000+ servers, and it maintains this efficiency even when the data center is nearly saturated. In many cases, NETSOLVER can allocate 150%–300% as many total VDCs to the same physical data center as previous methods. Essential to the efficiency of our solution is our formulation of VDC allocation using monotonic theories, illustrating the practical value of the recently proposed SAT modulo monotonic theories approach.

#4 Compact MDDs for Pseudo-Boolean Constraints with At-Most-One Relations in Resource-Constrained Scheduling Problems [PDF] [Copy] [Kimi] [REL]

Authors: Miquel Bofill ; Jordi Coll ; Josep Suy ; Mateu Villaret

Pseudo-Boolean (PB) constraints are usually encoded into Boolean clauses using compact Binary Decision Diagram (BDD) representations. Although these constraints appear in many problems, they are particularly useful for representing resource constraints in scheduling problems. Sometimes, the Boolean variables in the PB constraints have implicit at-most-one relations. In this work we introduce a way to take advantage of these implicit relations to obtain a compact Multi-valued Decision Diagram (MDD) representation for those PB constraints. We provide empirical evidence of the usefulness of this technique for some Resource-Constrained Project Scheduling Problem (RCPSP) variants, namely the Multi-Mode RCPSP (MRCPSP) and the RCPSP with Time-Dependent Resource Capacities and Requests (RCPSP/t). The size reduction of the representation of the PB constraints lets us decrease the number of Boolean variables in the encodings by one order of magnitude. This allows us to close many instances of these problems, certifying their optimum.
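
The following sketch illustrates why at-most-one groups compress the diagram: each AMO group becomes a single multi-valued level ("pick one member or none") instead of one Boolean level per member, and states with equal remaining capacity are merged. The toy weights and capacity are assumptions, and real constructions additionally merge equivalent capacity intervals, which is omitted here.

```python
# Layered feasibility check for a PB constraint  sum(w * x) <= K  whose
# variables are partitioned into AMO groups; memoized (layer, slack)
# states play the role of MDD nodes.
from functools import lru_cache

groups = [[2, 3], [1, 4], [2, 2, 5]]   # assumed toy AMO groups (weights)
K = 6                                  # capacity of the PB constraint

@lru_cache(maxsize=None)
def node(layer, slack):
    """Feasibility of completing layers layer..end with 'slack' capacity."""
    if slack < 0:
        return False                   # 'false' terminal
    if layer == len(groups):
        return True                    # 'true' terminal
    # one outgoing edge per group member, plus one for "no member selected"
    children = [node(layer + 1, slack)]
    children += [node(layer + 1, slack - w) for w in groups[layer]]
    return any(children)

print(node(0, K), "states:", node.cache_info().currsize)
```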

#5 Relaxed Exists-Step Plans in Planning as SMT [PDF] [Copy] [Kimi] [REL]

Authors: Miquel Bofill ; Joan Espasa ; Mateu Villaret

Planning Modulo Theories (PMT), inspired by Satisfiability Modulo Theories (SMT), allows the integration of arbitrary first-order theories, such as linear arithmetic, with propositional planning. Under this setting, planning as SAT is generalized to planning as SMT. In this paper we introduce a new encoding for planning as SMT, which adheres to the relaxed relaxed ∃-step (R²∃-step) semantics for parallel plans. We show the benefits of relaxing the requirements on the set of actions eligible to be executed at the same time, even though many redundant actions can be introduced. We also show how redundant actions can be efficiently removed via a MaxSMT-based post-processing step, and provide experimental results showing the benefits of this approach.

#6 From Decimation to Local Search and Back: A New Approach to MaxSAT [PDF] [Copy] [Kimi] [REL]

Authors: Shaowei Cai ; Chuan Luo ; Haochen Zhang

Maximum Satisfiability (MaxSAT) is an important NP-hard combinatorial optimization problem with many applications, and MaxSAT solving has attracted much interest. This work proposes a new incomplete approach to MaxSAT. We propose a novel decimation algorithm for MaxSAT, and then combine it with a local search algorithm. Our approach works by interleaving the decimation algorithm and the local search algorithm, with useful information passed between them. Experiments show that our solver DeciLS achieves state-of-the-art performance on all unweighted benchmarks from the MaxSAT Evaluation 2016. Moreover, compared to SAT-based MaxSAT solvers, which have dominated industrial benchmarks for years, it performs better on industrial benchmarks and significantly better on application formulas from the SAT Competition. We also extend this approach to (Weighted) Partial MaxSAT, and the resulting solvers significantly improve on local search solvers on crafted and industrial benchmarks, and are complementary (better on WPMS crafted benchmarks) to SAT-based solvers.
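
A stripped-down rendering of the interleaving idea follows; the majority-polarity decimation and the WalkSAT-style flips are simplistic stand-ins for DeciLS's actual heuristics, and the small formula is invented.

```python
import random

def falsified(clauses, assign):
    return [c for c in clauses if not any(assign[abs(l)] == (l > 0) for l in c)]

def decimate(clauses, n_vars):
    """Fix each variable to the polarity that occurs most often."""
    return {v: sum(l == v for c in clauses for l in c) >=
               sum(l == -v for c in clauses for l in c)
            for v in range(1, n_vars + 1)}

def local_search(clauses, assign, flips=5000, seed=0):
    rng = random.Random(seed)
    best, best_cost = dict(assign), len(falsified(clauses, assign))
    for _ in range(flips):
        unsat = falsified(clauses, assign)
        if not unsat:
            return assign, 0
        v = abs(rng.choice(rng.choice(unsat)))  # flip a var of a falsified clause
        assign[v] = not assign[v]
        cost = len(falsified(clauses, assign))
        if cost < best_cost:
            best, best_cost = dict(assign), cost
    return best, best_cost

clauses = [[1, -2, 3], [-1, 2], [2, -3], [-2, -3], [1, 3], [-1, -3]]
assign, cost = local_search(clauses, decimate(clauses, 3))
print(cost, assign)   # decimation already solves this toy formula (cost 0)
```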

#7 On the Kernelization of Global Constraints [PDF] [Copy] [Kimi] [REL]

Authors: Clément Carbonnel ; Emmanuel Hebrard

Kernelization is a powerful concept from parameterized complexity theory that captures (a certain idea of) efficient polynomial-time preprocessing for hard decision problems. However, exploiting this technique in the context of constraint programming is challenging. Building on recent results for the VertexCover constraint, we introduce novel "loss-less" kernelization variants that are tailored for constraint propagation. We showcase the theoretical interest of our ideas on two constraints, VertexCover and EdgeDominatingSet.
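
As a concrete reference point, here is the classical Buss kernel for VertexCover(G, k), the textbook example of the kind of preprocessing the paper adapts to propagation: any vertex of degree greater than k must belong to every cover of size at most k, and a YES-instance keeps at most k² edges once such vertices are removed.

```python
def buss_kernel(edges, k):
    """Return (forced vertices, kernel edges, remaining budget) or None."""
    forced = set()
    while True:
        deg = {}
        for u, v in edges:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        high = {v for v, d in deg.items() if d > k}
        if not high:
            break
        forced |= high                      # must be in every small cover
        k -= len(high)                      # they use up budget
        edges = [(u, v) for u, v in edges if u not in high and v not in high]
        if k < 0:
            return None                     # no cover of the requested size
    if len(edges) > k * k:
        return None                         # kernel too large: infeasible
    return forced, edges, k

edges = [(1, i) for i in range(2, 8)] + [(8, 9), (9, 10)]
print(buss_kernel(edges, 3))   # vertex 1 (degree 6 > 3) is forced
```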

#8 The DNA Word Design Problem: A New Constraint Model and New Results [PDF] [Copy] [Kimi] [REL]

Authors: Michael Codish ; Michael Frank ; Vitaly Lagoon

A fundamental problem in coding theory concerns the computation of the maximum cardinality of a set S of length-n code words over an alphabet of size q, such that every pair of code words has Hamming distance at least d and a set of additional constraints U on S is satisfied. This problem has applications in several areas, one of which is the design of DNA codes, where q=4 and the alphabet is {A,C,G,T}. We describe a new constraint model for this problem and demonstrate that it improves on previous solutions (computes better lower bounds) for various instances of the problem. Our approach is based on a clustering of DNA words into small sets of words. Solutions are then obtained as the union of such clusters. Our approach is SAT-based: we specify constraints on clusters of DNA words and solve these using a Boolean satisfiability solver.
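
For intuition about the underlying combinatorial problem (this is a naive greedy baseline, not the authors' clustering/SAT model): scan all length-n words over {A,C,G,T} and keep each word that is at Hamming distance at least d from everything kept so far. This produces the kind of lower bound the constraint model improves on.

```python
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def greedy_code(n=4, d=3, alphabet="ACGT"):
    code = []
    for word in product(alphabet, repeat=n):
        # extra constraints U (e.g. a fixed GC-content) could be checked here
        if all(hamming(word, c) >= d for c in code):
            code.append(word)
    return ["".join(w) for w in code]

code = greedy_code()
print(len(code), code[:5])
```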

#9 Learning-Based Abstractions for Nonlinear Constraint Solving [PDF] [Copy] [Kimi] [REL]

Authors: Sumanth Dathathri ; Nikos Arechiga ; Sicun Gao ; Richard M. Murray

We propose a new abstraction refinement procedure based on machine learning to improve the performance of nonlinear constraint solving algorithms on large-scale problems. The proposed approach decomposes the original set of constraints into smaller subsets, and uses learning algorithms to propose sequences of abstractions that take the form of conjunctions of classifiers. The core procedure is a refinement loop that keeps improving the learned results based on counterexamples that are obtained from partial constraints that are easy to solve. Experiments show that the proposed techniques significantly improve the performance of state-of-the-art constraint solvers on many challenging benchmarks. The mechanism is capable of producing intermediate symbolic abstractions that are also important for many applications and for understanding the internal structures of hard constraint solving problems.

#10 The Hard Problems Are Almost Everywhere For Random CNF-XOR Formulas [PDF] [Copy] [Kimi] [REL]

Authors: Jeffrey M. Dudek ; Kuldeep S. Meel ; Moshe Y. Vardi

Recent universal-hashing based approaches to sampling and counting crucially depend on the runtime performance of SAT solvers on formulas expressed as the conjunction of both CNF constraints and variable-width XOR constraints (known as CNF-XOR formulas). In this paper, we present the first study of the runtime behavior of SAT solvers equipped with XOR-reasoning techniques on random CNF-XOR formulas. We empirically demonstrate that a state-of-the-art SAT solver scales exponentially on random CNF-XOR formulas across a wide range of XOR-clause densities, peaking around the empirical phase-transition location. On the theoretical front, we prove that the solution space of a random CNF-XOR formula 'shatters' at all nonzero XOR-clause densities into well-separated components, similar to the behavior seen in random CNF formulas known to be difficult for many SAT algorithms.
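
A sketch of the random CNF-XOR model under study: m random 3-CNF clauses conjoined with k XOR constraints, each XOR including every variable independently with probability p. The specific parameter values below are illustrative.

```python
import random

def random_cnf_xor(n, m, k, p=0.5, seed=None):
    rng = random.Random(seed)
    cnf = [[v if rng.random() < 0.5 else -v
            for v in rng.sample(range(1, n + 1), 3)] for _ in range(m)]
    xors = [([v for v in range(1, n + 1) if rng.random() < p],
             rng.random() < 0.5)            # (variables, parity bit)
            for _ in range(k)]
    return cnf, xors

# Densities m/n and k/n are the knobs; hardness peaks near the
# empirically observed CNF-XOR phase transition.
cnf, xors = random_cnf_xor(n=100, m=300, k=30, seed=0)
```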

#11 Solving Integer Linear Programs with a Small Number of Global Variables and Constraints [PDF] [Copy] [Kimi] [REL]

Authors: Pavel Dvořák ; Eduard Eiben ; Robert Ganian ; Dušan Knop ; Sebastian Ordyniak

Integer Linear Programming (ILP) has a broad range of applications in various areas of artificial intelligence. Yet in spite of recent advances, we still lack a thorough understanding of which structural restrictions make ILP tractable. Here we study ILP instances consisting of a small number of "global" variables and/or constraints such that the remaining part of the instance consists of small and otherwise independent components; this is captured in terms of a structural measure we call fracture backdoors, which generalizes, for instance, the well-studied class of N-fold ILP instances. Our main contributions can be divided into three parts. First, we formally develop fracture backdoors and obtain exact and approximation algorithms for computing them. Second, we exploit these backdoors to develop several new parameterized algorithms for ILP; the performance of these algorithms naturally scales with the number of global variables or constraints in the instance. Finally, we complement the developed algorithms with matching lower bounds. Altogether, our results paint a near-complete complexity landscape of ILP with respect to fracture backdoors.

#12 Personnel Scheduling as Satisfiability Modulo Theories [PDF] [Copy] [Kimi] [REL]

Authors: Christoph Erkinger ; Nysret Musliu

Rotating workforce scheduling (RWS) is an important real-life personnel rostering problem that appears in a large number of different business areas. In this paper, we propose a new exact approach to RWS that exploits recent advances in Satisfiability Modulo Theories (SMT). While solving can be automated by using a number of so-called SMT solvers, the most challenging task is to find an efficient formulation of the problem in first-order logic. We propose two new modeling techniques for RWS that encode the problem using formulas over different background theories. The first encoding provides an elegant approach based on linear integer arithmetic. Furthermore, we developed a new formulation based on bitvectors in order to achieve a more compact representation of the constraints and a reduced number of variables. These two modeling approaches were experimentally evaluated on benchmark instances from the literature using different state-of-the-art SMT solvers. Compared to other exact methods, the results show a substantial improvement in the number of solutions found.
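
To give the flavour of a bitvector-style encoding (this is a toy, not the authors' model, and z3 merely stands in for a generic SMT solver): one bit per day records whether an employee works, and windows extracted from the vector forbid stretches of more than five consecutive working days.

```python
from z3 import BitVec, BitVecVal, Extract, Solver, sat

DAYS, MAX_RUN = 14, 5
work = BitVec("work", DAYS)        # bit i == 1 means "works on day i"
s = Solver()
for i in range(DAYS - MAX_RUN):
    # a window of MAX_RUN+1 consecutive days must not be all ones
    window = Extract(i + MAX_RUN, i, work)
    s.add(window != BitVecVal(2 ** (MAX_RUN + 1) - 1, MAX_RUN + 1))
if s.check() == sat:
    print(format(s.model()[work].as_long(), f"0{DAYS}b"))
```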

#13 Restart and Random Walk in Local Search for Maximum Vertex Weight Cliques with Evaluations in Clustering Aggregation [PDF] [Copy] [Kimi] [REL]

Authors: Yi Fan ; Nan Li ; Chengqian Li ; Zongjie Ma ; Longin Jan Latecki ; Kaile Su

The Maximum Vertex Weight Clique (MVWC) problem is NP-hard and important in real-world applications. In this paper we propose to use restart and random walk strategies to improve local search for MVWC. If a solution is revisited in some particular situation, the search restarts. In addition, when the local search has no other option than dropping vertices, it uses random walk. Experimental results show that our solver outperforms state-of-the-art solvers on the DIMACS benchmarks and finds a new best-known solution. It is also the only solver that is comparable with state-of-the-art methods on both BHOSLIB and large crafted graphs. Furthermore, we evaluated our solver in clustering aggregation. Experimental results on a number of real data sets demonstrate that our solver outperforms the state of the art in solving the derived MVWC problem and helps improve the final clustering results.

#14 Finding Robust Solutions to Stable Marriage [PDF] [Copy] [Kimi] [REL]

Authors: Begum Genc ; Mohamed Siala ; Barry O'Sullivan ; Gilles Simonin

We study the notion of robustness in stable matching problems. We first define robustness by introducing (a,b)-supermatches. An (a,b)-supermatch is a stable matching in which, if any a pairs break up, it is possible to find another stable matching by changing the partners of those a pairs and of at most b other pairs. In this context, we define the most robust stable matching as a (1,b)-supermatch where b is minimum. We show that checking whether a given stable matching is a (1,b)-supermatch can be done in polynomial time. Next, we use this procedure to design a constraint programming model, a local search approach, and a genetic algorithm to find the most robust stable matching. Our empirical evaluation on large instances shows that local search outperforms the other approaches.
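
The polynomial-time check builds on the elementary stability test below: a matching is stable iff no two agents mutually prefer each other to their assigned partners. The preference data is invented; `rank[p][q]` is q's position in p's list (lower is better).

```python
def is_stable(match_m, men_rank, women_rank):
    """match_m: dict man -> woman (a bijection)."""
    match_w = {w: m for m, w in match_m.items()}
    for m, w in match_m.items():
        for w2, r in men_rank[m].items():
            # (m, w2) blocks if m prefers w2 to w and w2 prefers m to her partner
            if r < men_rank[m][w] and \
               women_rank[w2][m] < women_rank[w2][match_w[w2]]:
                return False
    return True

men_rank   = {"a": {"x": 0, "y": 1}, "b": {"x": 1, "y": 0}}
women_rank = {"x": {"a": 0, "b": 1}, "y": {"a": 1, "b": 0}}
print(is_stable({"a": "x", "b": "y"}, men_rank, women_rank))  # True
```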

#15 Locality in Random SAT Instances [PDF] [Copy] [Kimi] [REL]

Authors: Jesús Giráldez-Cru ; Jordi Levy

Despite the success of CDCL SAT solvers in solving industrial problems, there are still many open questions to explain that success. In this context, the generation of random SAT instances with computational properties closer to those of real-world problems becomes crucial. Such generators are possibly the best tool to analyze families of instances and solver behavior on them. In this paper, we present a random SAT instance generator based on the notion of locality. We show that locality is a decisive dimension of attractiveness among the variables of a formula, and how CDCL SAT solvers take advantage of it. To the best of our knowledge, this is the first random SAT model that generates both scale-free structure and community structure at once.
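
One plausible way to inject locality, sketched below under assumptions that are ours rather than the paper's: variables live on a line, each clause picks a random centre, and its variables are drawn near the centre with exponentially decaying spread, yielding community-like structure.

```python
import random

def local_random_sat(n, m, k=3, spread=5, seed=None):
    rng = random.Random(seed)
    clauses = []
    for _ in range(m):
        centre = rng.randrange(1, n + 1)
        vs = set()
        while len(vs) < k:
            offset = int(rng.expovariate(1.0 / spread))   # mostly small
            v = (centre + rng.choice([-1, 1]) * offset - 1) % n + 1
            vs.add(v)
        clauses.append([v if rng.random() < 0.5 else -v for v in vs])
    return clauses

clauses = local_random_sat(n=1000, m=4200, seed=0)
```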

#16 A Core-Guided Approach to Learning Optimal Causal Graphs [PDF] [Copy] [Kimi] [REL]

Authors: Antti Hyttinen ; Paul Saikko ; Matti Järvisalo

Discovery of causal relations is an important part of data analysis. Recent exact Boolean optimization approaches enable tackling very general search spaces of causal graphs with feedback cycles and latent confounders, simultaneously obtaining high accuracy by optimally combining conflicting independence information in sample data. We propose several domain-specific techniques and integrate them into a core-guided maximum satisfiability solver, thereby speeding up the current state of the art in exact search for causal graphs with cycles and latent confounders on simulated and real-world data.
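
The underlying computational task can be sketched with an off-the-shelf core-guided MaxSAT solver (RC2 from the PySAT library); the clauses below are made-up stand-ins for encoded (in)dependence constraints, each weighted by how strongly the data supports it.

```python
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

wcnf = WCNF()
wcnf.append([1, 2])                 # hard structural constraint
wcnf.append([-1], weight=3)         # soft: one independence test, weight 3
wcnf.append([1], weight=5)          # soft: a conflicting test, weight 5
with RC2(wcnf) as solver:
    model = solver.compute()        # optimally resolves the conflict
    print(model, "cost:", solver.cost)   # keeps the higher-weight test
```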

#17 Cardinality Encodings for Graph Optimization Problems [PDF] [Copy] [Kimi] [REL]

Authors: Alexey Ignatiev ; Antonio Morgado ; Joao Marques-Silva

Different optimization problems defined on graphs find application in complex network analysis. Existing propositional encodings render impractical the use of propositional satisfiability (SAT) and maximum satisfiability (MaxSAT) solvers for solving a variety of these problems on large graphs. This paper has two main contributions. First, the paper identifies sources of inefficiency in existing encodings for different optimization problems in graphs. Second, for the concrete case of the maximum clique problem, the paper develops a novel encoding which is shown to be far more compact than existing encodings for large sparse graphs. More importantly, the experimental results show that the proposed encoding enables existing SAT solvers to compute a maximum clique for large sparse networks, often more efficiently than the state of the art.
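
For reference, the folklore MaxSAT encoding of maximum clique whose size is at issue: one soft unit clause per vertex and one hard binary clause per non-edge. On large sparse graphs the non-edges dominate, which is exactly the quadratic blow-up a better encoding must avoid.

```python
from itertools import combinations

def clique_wcnf(n, edges):
    edges = {frozenset(e) for e in edges}
    # hard: two non-adjacent vertices cannot both be in the clique
    hard = [[-u, -v] for u, v in combinations(range(1, n + 1), 2)
            if frozenset((u, v)) not in edges]
    soft = [[v] for v in range(1, n + 1)]     # soft: prefer taking v
    return hard, soft

hard, soft = clique_wcnf(5, [(1, 2), (2, 3), (1, 3), (3, 4)])
print(len(hard), "hard clauses for", 5 * 4 // 2 - 4, "non-edges")
```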

#18 Learning to Run Heuristics in Tree Search [PDF] [Copy] [Kimi] [REL]

Authors: Elias B. Khalil ; Bistra Dilkina ; George L. Nemhauser ; Shabbir Ahmed ; Yufen Shao

"Primal heuristics" are a key contributor to the improved performance of exact branch-and-bound solvers for combinatorial optimization and integer programming. Perhaps the most crucial question concerning primal heuristics is at which nodes they should be run; the typical answer is hard-coded rules or fixed solver parameters tuned offline by trial and error. Alternatively, a heuristic should be run when it is most likely to succeed, based on the problem instance's characteristics, the state of the search, etc. In this work, we study the problem of deciding at which nodes a heuristic should be run, such that the overall (primal) performance of the solver is optimized. To our knowledge, this is the first attempt at formalizing and systematically addressing this problem. Central to our approach is the use of Machine Learning (ML) for predicting whether a heuristic will succeed at a given node. We give a theoretical framework for analyzing this decision-making process in a simplified setting, propose an ML approach for modeling heuristic success likelihood, and design practical rules that leverage the ML models to dynamically decide whether to run a heuristic at each node of the search tree. Experimentally, our approach improves the primal performance of a state-of-the-art Mixed Integer Programming solver by up to 6% on a set of benchmark instances, and by up to 60% on a family of hard Independent Set instances.
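
A bare-bones version of the ML component (the features, training data, and 0.5 threshold are invented for illustration): learn from past runs whether the heuristic succeeded at a node, then pay its cost only when the predicted success probability is high enough.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# toy training data: [depth, relative gap, frac. fractional vars] -> success
X = np.array([[2, 0.9, 0.6], [10, 0.3, 0.2], [4, 0.7, 0.5], [15, 0.1, 0.1]])
y = np.array([1, 0, 1, 0])
clf = LogisticRegression().fit(X, y)

def should_run_heuristic(node_features, threshold=0.5):
    return clf.predict_proba([node_features])[0, 1] >= threshold

print(should_run_heuristic([3, 0.8, 0.55]))   # likely True on this toy data
```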

#19 An Improved Decision-DNNF Compiler [PDF] [Copy] [Kimi] [REL]

Authors: Jean-Marie Lagniez ; Pierre Marquis

We present and evaluate a new compiler, called d4, targeting the Decision-DNNF language. Like the state-of-the-art compilers C2D and Dsharp, which target the same language, d4 is a top-down tree-search algorithm exploring the space of propositional interpretations. d4 is based on the same ingredients as those considered in C2D and Dsharp (mainly, disjoint component analysis, conflict analysis and non-chronological backtracking, and component caching). d4 takes advantage of a dynamic decomposition approach based on hypergraph partitioning, used sparingly. Some simplification rules are also used to minimize the time spent in the partitioning steps and to promote the quality of the decompositions. Experiments show that the compilation times and the sizes of the Decision-DNNF representations computed by d4 are in many cases significantly lower than those obtained by C2D and Dsharp.

#20 A Recursive Shortcut for CEGAR: Application To The Modal Logic K Satisfiability Problem [PDF] [Copy] [Kimi] [REL]

Authors: Jean-Marie Lagniez ; Daniel Le Berre ; Tiago de Lima ; Valentin Montmirail

Counter-Example-Guided Abstraction Refinement (CEGAR) has been very successful in model checking large systems, and has since been applied to many different problems. In particular, it has proven to be a highly successful practical approach for solving the PSPACE-complete QBF problem. In this paper, we propose a new CEGAR-like approach for tackling PSPACE-complete problems that we call RECAR (Recursive Explore and Check Abstraction Refinement). We show that this generic approach is sound and complete. Then we propose a specific implementation of the RECAR approach to solve the modal logic K satisfiability problem. We implemented both a CEGAR and a RECAR approach for the modal logic K satisfiability problem within the solver MoSaiC and compared them experimentally to state-of-the-art solvers for that problem. The RECAR approach outperforms the CEGAR one for that problem and also compares favorably against the state of the art on the benchmarks considered.

#21 Automatic Synthesis of Smart Table Constraints by Abstraction of Table Constraints [PDF] [Copy] [Kimi] [REL]

Authors: Baudouin Le Charlier ; Minh Thanh Khong ; Christophe Lecoutre ; Yves Deville

The recently introduced smart table constraint is a powerful modeling tool. It allows the user to compactly represent many well-known (global) constraints and, more generally, arbitrarily structured constraints, especially when disjunction is at stake. In many problems, some constraints are given under the basic and simple form of tables explicitly listing the allowed combinations of values. In this paper, we propose an algorithm to automatically convert any (ordinary) table into a compact smart table. Its theoretical time complexity is shown to be quadratic in the size of the input table. Experimental results demonstrate its compression efficiency on many constraint cases while showing its reasonable execution time. It is then shown that using filtering algorithms on the resulting smart table is more efficient than using state-of-the-art filtering algorithms on the initial table.
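
A naive first step toward such compression, showing only the flavour of the idea (the paper's abstraction algorithm is far more general): tuples that agree on all columns but one are merged, the differing column becoming a set, or a '*' when it covers the whole domain.

```python
def merge_once(table, col, domain):
    groups = {}
    for t in table:
        # key: the tuple with column `col` removed
        groups.setdefault(t[:col] + t[col + 1:], set()).add(t[col])
    out = []
    for rest, vals in groups.items():
        entry = "*" if vals == set(domain) else vals
        out.append(rest[:col] + (entry,) + rest[col:])
    return out

table = [(0, 0), (0, 1), (0, 2), (1, 2)]
print(merge_once(table, 1, domain=(0, 1, 2)))
# -> [(0, '*'), (1, {2})]
```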

#22 Solving Stochastic Boolean Satisfiability under Random-Exist Quantification [PDF] [Copy] [Kimi] [REL]

Authors: Nian-Ze Lee ; Yen-Shi Wang ; Jie-Hong R. Jiang

Stochastic Boolean Satisfiability (SSAT) is a powerful formalism for representing computational problems with uncertainty, such as belief network inference and propositional probabilistic planning. Solving SSAT formulas lies in the same complexity class (PSPACE-complete) as solving Quantified Boolean Formulas (QBF). While many endeavors have been made to enhance QBF solving, SSAT has drawn relatively little attention in recent years. This paper focuses on random-exist quantified SSAT formulas, and proposes an algorithm combining binary decision diagrams (BDDs), logic synthesis, and modern SAT techniques to improve computational efficiency. Unlike prior exact SSAT algorithms, the proposed method can be easily modified to solve approximate SSAT by deriving upper and lower bounds on the satisfying probability. Experimental results show that our method outperforms the state-of-the-art algorithm on random k-CNF formulas and applies effectively to approximate SSAT on circuit benchmarks.
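
To pin down the semantics the paper's BDD/SAT machinery computes, here is an exponential brute-force reference for random-exist SSAT, assuming uniform probability 1/2 for each random variable (general SSAT allows arbitrary probabilities): the satisfying probability is the fraction of random-variable assignments under which the existential variables can still satisfy the formula.

```python
from itertools import product

def sat_prob(rand_vars, exist_vars, holds):
    """holds(assignment_dict) -> bool for a full assignment."""
    wins = 0
    for r in product([False, True], repeat=len(rand_vars)):
        fixed = dict(zip(rand_vars, r))
        if any(holds({**fixed, **dict(zip(exist_vars, e))})
               for e in product([False, True], repeat=len(exist_vars))):
            wins += 1
    return wins / 2 ** len(rand_vars)

# (x or y) and (not x or z): random x, y; existential z
f = lambda a: (a["x"] or a["y"]) and (not a["x"] or a["z"])
print(sat_prob(["x", "y"], ["z"], f))   # 0.75
```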

#23 Enhancing Campaign Design in Crowdfunding: A Product Supply Optimization Perspective [PDF] [Copy] [Kimi] [REL]

Authors: Qi Liu ; Guifeng Wang ; Hongke Zhao ; Chuanren Liu ; Tong Xu ; Enhong Chen

Crowdfunding is an emerging Internet application in which creators design campaigns (projects) to collect funds from public investors. Usually, the limited budget of the creator is manually divided into several perks (reward options) that should fit varying market demand and bring different monetary contributions to the campaign. It is therefore very challenging for a creator to design an effective campaign. To this end, in this paper we aim to enhance the funding performance of newly proposed campaigns, with a focus on optimizing the product supply of perks. Specifically, given the expected budget and the perks of a campaign, we propose a novel solution to automatically recommend the optimal product supply for every perk, balancing the expected return of the campaign against its risk. Along this line, we formulate this as a constrained portfolio selection problem, where the risk of each campaign is measured by a multi-task learning method. Finally, experimental results on real-world crowdfunding data clearly show that the optimized product supply can significantly improve campaign performance, and, meanwhile, that our multi-task learning method estimates the risk of each campaign more precisely.

#24 An Effective Learnt Clause Minimization Approach for CDCL SAT Solvers [PDF] [Copy] [Kimi] [REL]

Authors: Mao Luo ; Chu-Min Li ; Fan Xiao ; Felip Manyà ; Zhipeng Lü

Learnt clauses in CDCL SAT solvers often contain redundant literals. This may have a negative impact on performance because redundant literals may deteriorate both the effectiveness of Boolean constraint propagation and the quality of subsequent learnt clauses. To overcome this drawback, we define a new inprocessing SAT approach which eliminates redundant literals from learnt clauses by applying Boolean constraint propagation. Learnt clause minimization is activated before the SAT solver triggers some selected restarts, and affects only some learnt clauses during the search process. We conducted an empirical evaluation on instances from the hard combinatorial and application categories of recent SAT competitions. The results show that a remarkable number of additional instances are solved when the approach is incorporated into five of the best performing CDCL SAT solvers (Glucose, TC_Glucose, COMiniSatPS, MapleCOMSPS and MapleCOMSPS_LRB).
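
The core of BCP-based clause minimization (often called vivification) can be sketched with PySAT's propagate(); the formula and clause below are toy assumptions. Assuming the negations of a prefix of the clause and propagating either yields a conflict (the prefix alone is a valid clause) or implies the next literal (the remaining literals can be dropped).

```python
from pysat.solvers import Glucose3

def minimize_clause(solver, clause):
    kept = []
    for lit in clause:
        ok, implied = solver.propagate(assumptions=[-l for l in kept])
        if not ok:
            return kept             # prefix already falsified: clause shrinks
        if lit in implied:
            return kept + [lit]     # lit implied by the prefix: drop the rest
        kept.append(lit)
    return kept

with Glucose3(bootstrap_with=[[1, 3]]) as s:
    print(minimize_clause(s, [1, 3, 2]))   # -> [1, 3]: literal 2 is redundant
```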

#25 A Partitioning Algorithm for Maximum Common Subgraph Problems [PDF] [Copy] [Kimi] [REL]

Authors: Ciaran McCreesh ; Patrick Prosser ; James Trimble

We introduce a new branch-and-bound algorithm for the maximum common subgraph and maximum common connected subgraph problems which is based around vertex labelling and partitioning. Our method in some ways resembles a traditional constraint programming approach, but uses a novel compact domain store and supporting inference algorithms which dramatically reduce the memory and computation requirements during search, and allow better dual viewpoint ordering heuristics to be calculated cheaply. Experiments show a speedup of more than an order of magnitude over the state of the art, and demonstrate that we can operate on much larger graphs without running out of memory.
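
A compact rendering of the partitioning idea (simplified: no connectedness requirement and none of the paper's ordering heuristics): vertex pairs live in label classes that remain interchangeable under the current mapping, and the bound |M| + Σ min(|G side|, |H side|) prunes the search.

```python
def mcs(adjG, adjH):
    """Maximum common induced subgraph of graphs given as adjacency sets."""
    best = []

    def search(classes, mapping):
        nonlocal best
        if len(mapping) > len(best):
            best = list(mapping)
        bound = len(mapping) + sum(min(len(g), len(h)) for g, h in classes)
        if bound <= len(best) or not classes:
            return
        g, h = classes[0]
        v = g[0]
        for w in h:                            # branch: map v -> w
            new = []
            for gs, hs in classes:
                # split every class by adjacency to v (in G) and w (in H)
                for gpart, hpart in (
                        ([x for x in gs if x != v and x in adjG[v]],
                         [y for y in hs if y != w and y in adjH[w]]),
                        ([x for x in gs if x != v and x not in adjG[v]],
                         [y for y in hs if y != w and y not in adjH[w]])):
                    if gpart and hpart:
                        new.append((gpart, hpart))
            search(new, mapping + [(v, w)])
        rest = [x for x in g if x != v]        # branch: leave v unmapped
        search(([(rest, h)] if rest else []) + classes[1:], mapping)

    search([(list(adjG), list(adjH))], [])
    return best

adjG = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}          # triangle
adjH = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}  # path a-b-c
print(mcs(adjG, adjH))   # -> [(1, 'a'), (2, 'b')]: a common edge
```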