AAAI.2017 - Planning and Scheduling

Total: 21

#1 Computational Issues in Time-Inconsistent Planning

Authors: Pingzhong Tang, Yifeng Teng, Zihe Wang, Shenke Xiao, Yichong Xu

Time-inconsistency refers to a paradox in decision making where agents exhibit inconsistent behaviors over time. Examples include procrastination, where agents tend to postpone easy tasks, and abandonment, where agents start a plan and quit in the middle. To capture such behaviors and to quantify the inefficiency they cause, Kleinberg and Oren (2014) propose a graph model with a certain cost structure and initiate the study of several interesting computational problems: 1) cost ratio: the worst ratio between the actual cost of the agent and the optimal cost, over all graph instances; 2) motivating subgraph: how to motivate the agent to reach the goal by deleting nodes and edges; 3) intermediate rewards: how to incentivize agents to reach the goal by placing intermediate rewards. Kleinberg and Oren give partial answers to these questions, but the main problems remained open. In this paper, we answer all three open problems. First, we show a tight upper bound on the cost ratio for graphs, confirming the conjecture by Kleinberg and Oren that Akerlof's structure is indeed the worst case for cost ratio. Second, we prove that finding a motivating subgraph is NP-hard, showing that it is generally inefficient to motivate agents by deleting nodes and edges in the graph. Last but not least, we show that computing a strategy that places a minimum amount of total reward is also NP-hard, and we provide a 2n-approximation algorithm.
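
For background, here is a minimal sketch of the Kleinberg-Oren graph model that these results build on: an agent with present-bias parameter b > 1 walks a task graph and, at each node, picks the edge minimizing b times the immediate edge cost plus the true optimal cost-to-go. The graph, costs, and bias value below are illustrative assumptions, not taken from the paper.

```python
from functools import lru_cache

def biased_walk(graph, source, target, b):
    """Simulate a present-biased agent on a DAG (Kleinberg-Oren style).

    graph maps each node to a list of (successor, edge_cost) pairs.
    At node u the agent picks the edge (u, v) minimizing
    b * c(u, v) + d(v), where d is the true optimal cost-to-go and
    b > 1 inflates the immediate cost. Returns (path, actual_cost).
    """
    @lru_cache(maxsize=None)
    def d(u):  # optimal cost-to-go, by recursion over the DAG
        if u == target:
            return 0.0
        return min(c + d(v) for v, c in graph[u])

    path, cost, u = [source], 0.0, source
    while u != target:
        v, c = min(graph[u], key=lambda vc: b * vc[1] + d(vc[0]))
        path.append(v)
        cost += c
        u = v
    return path, cost

# The optimal plan goes straight to t (cost 2.0), but the biased agent
# (b = 3) defers the expensive step and pays 2.5: a cost ratio of 1.25.
g = {'s': [('t', 2.0), ('a', 0.5)], 'a': [('t', 2.0)], 't': []}
print(biased_walk(g, 's', 't', b=3.0))  # (['s', 'a', 't'], 2.5)
```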


#2 Validating Domains and Plans for Temporal Planning via Encoding into Infinite-State Linear Temporal Logic

Authors: Alessandro Cimatti, Andrea Micheli, Marco Roveri

Temporal planning is an active research area of Artificial Intelligence because of its many applications ranging from robotics to logistics and beyond. Traditionally, authors focused on the automatic synthesis of plans given a formal representation of the domain and of the problem. However, the effectiveness of such techniques is limited by the complexity of the modeling phase: it is hard to produce a correct model for the planning problem at hand. In this paper, we present a technique to simplify the creation of correct models by leveraging formal-verification tools for automatic validation. We start by using the ANML language, a very expressive language for temporal planning problems that has been recently presented. We chose ANML because of its usability and readability. Then, we present a sound-and-complete, formal encoding of the language into Linear Temporal Logic over predicates with infinite-state variables. Thanks to this reduction, we enable the formal verification of several relevant properties over the planning problem, providing useful feedback to the modeler.


#3 Robust Execution of Probabilistic Temporal Plans

Authors: Kyle Lund, Sam Dietrich, Scott Chow, James Boerkoel

A critical challenge in temporal planning is robustly dealing with non-determinism, e.g., the durational uncertainty of a robot's activity due to slippage or other unexpected influences. Recent advances show that robustness is a better measure of solution quality than traditional metrics such as flexibility. This paper introduces the Robust Execution Problem for finding maximally robust dispatch strategies for general probabilistic temporal planning problems. While the problem is generally intractable, we introduce approximate solution techniques: one that can be computed statically prior to the start of execution with robustness guarantees, and one that dynamically adjusts to opportunities and setbacks during execution. We show empirically that our dynamic approach outperforms all known approaches in terms of execution success rate.


#4 On the Disruptive Effectiveness of Automated Planning for LTLf-Based Trace Alignment

Authors: Giuseppe De Giacomo, Fabrizio Maria Maggi, Andrea Marrella, Fabio Patrizi

One major task in business process management is that of aligning real process execution traces to a process model by (minimally) introducing and eliminating steps. Here, we look at declarative process specifications expressed in Linear Temporal Logic on finite traces (LTLf). We provide a sound and complete technique to synthesize the alignment instructions relying on finite automata theoretic manipulations. Such a technique can be effectively implemented by using planning technology. Notably, the resulting planning-based alignment system significantly outperforms all current state-of-the-art ad-hoc alignment systems. We report an in-depth experimental study that supports this claim.


#5 Landmark-Based Heuristics for Goal Recognition

Authors: Ramon Pereira, Nir Oren, Felipe Meneguzzi

Automated planning can be used to efficiently recognize goals and plans from partial or full observed action sequences. In this paper, we propose goal recognition heuristics that rely on information from planning landmarks: facts or actions that must occur if a plan is to achieve a goal when starting from some initial state. We develop two such heuristics: the first estimates goal completion by considering the ratio between the achieved and extracted landmarks of a candidate goal, while the second takes into account how unique each landmark is among the landmarks of all candidate goals. We empirically evaluate these heuristics over both standard goal/plan recognition problems and a set of very large problems. We show that our heuristics can recognize goals more accurately, and run orders of magnitude faster, than the current state-of-the-art.
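
A minimal sketch of the first idea under simplifying assumptions: score each candidate goal by the fraction of its landmarks already observed as achieved. The goal names and landmark sets below are hypothetical, and the paper's actual heuristic is more refined than this ratio.

```python
def goal_completion(achieved, landmarks_per_goal):
    """Rank candidate goals by the fraction of their landmarks observed.

    achieved: set of landmarks inferred from the observations so far.
    landmarks_per_goal: dict mapping each candidate goal to its set of
    landmarks (facts/actions every plan for that goal must achieve).
    """
    scores = {
        goal: len(achieved & lms) / len(lms) if lms else 0.0
        for goal, lms in landmarks_per_goal.items()
    }
    return max(scores, key=scores.get), scores

# Hypothetical example with made-up landmark sets.
achieved = {'at-kitchen', 'holding-cup'}
candidates = {
    'make-coffee': {'at-kitchen', 'holding-cup', 'coffee-brewed'},
    'wash-dishes': {'at-kitchen', 'tap-on', 'dishes-wet', 'soap-used'},
}
print(goal_completion(achieved, candidates))
# -> ('make-coffee', {'make-coffee': 0.666..., 'wash-dishes': 0.25})
```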


#6 Narrowing the Gap Between Saturated and Optimal Cost Partitioning for Classical Planning

Authors: Jendrik Seipp, Thomas Keller, Malte Helmert

In classical planning, cost partitioning is a method for admissibly combining a set of heuristic estimators by distributing operator costs among the heuristics. An optimal cost partitioning is often prohibitively expensive to compute. Saturated cost partitioning is an alternative that is much faster to compute and has been shown to offer high-quality heuristic guidance on Cartesian abstractions. However, its greedy nature makes it highly susceptible to the order in which the heuristics are considered. We show that searching in the space of orders leads to significantly better heuristic estimates than with previously considered orders. Moreover, using multiple orders leads to a heuristic that is significantly better informed than any single-order heuristic. In experiments with Cartesian abstractions, the resulting heuristic approximates the optimal cost partitioning very closely.
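
A schematic sketch of the greedy saturation loop, using toy disjunctive action landmarks as the component heuristics; the `estimate`/`saturated` interface and the numbers are illustrative assumptions, not the paper's Cartesian-abstraction setting. Each heuristic is evaluated under the remaining costs, then only the (saturated) costs it actually needs are deducted before the next heuristic in the order is processed.

```python
class DisjunctiveActionLandmark:
    """Toy heuristic: at least one operator from `ops` occurs in any plan."""
    def __init__(self, ops):
        self.ops = set(ops)

    def estimate(self, costs):
        # Admissible estimate: the cheapest way to satisfy the landmark.
        return min(costs[o] for o in self.ops)

    def saturated(self, costs):
        # Minimal per-operator costs that preserve this estimate.
        h = self.estimate(costs)
        return {o: h for o in self.ops}

def saturated_cost_partitioning(heuristics, costs):
    """Greedy loop: each heuristic keeps only the costs it needs."""
    remaining, total = dict(costs), 0.0
    for h in heuristics:                   # the order is what matters
        total += h.estimate(remaining)
        for op, c in h.saturated(remaining).items():
            remaining[op] -= c             # hand over the saturated costs
    return total

costs = {'a': 4.0, 'b': 3.0}
h1 = DisjunctiveActionLandmark({'a'})
h2 = DisjunctiveActionLandmark({'a', 'b'})
print(saturated_cost_partitioning([h1, h2], costs))  # 4.0 + 0.0 = 4.0
print(saturated_cost_partitioning([h2, h1], costs))  # 3.0 + 1.0 = 4.0
```

The two orders happen to coincide on this tiny example, but in general the greedy hand-over makes the resulting heuristic value order-dependent, which is exactly the sensitivity the paper exploits by searching over orders.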


#7 Higher-Dimensional Potential Heuristics for Optimal Classical Planning

Authors: Florian Pommerening, Malte Helmert, Blai Bonet

Potential heuristics for state-space search are defined as weighted sums over simple state features. Atomic features consider the value of a single state variable in a factored state representation, while binary features consider joint assignments to two state variables. Previous work showed that the set of all admissible and consistent potential heuristics using atomic features can be characterized by a compact set of linear constraints. We generalize this result to binary features and prove a hardness result for features of higher dimension. Furthermore, we prove a tractability result based on the treewidth of a new graphical structure we call the context-dependency graph. Finally, we study the relationship of potential heuristics to transition cost partitioning. Experimental results show that binary potential heuristics are significantly more informative than the previously considered atomic ones.
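
A minimal sketch of how a potential heuristic is evaluated once its weights are known: a weighted sum over the (atomic or binary) features that hold in the state. The feature encoding and weights here are hypothetical; in the paper, weights come from linear programs whose constraints enforce admissibility and consistency.

```python
def potential_heuristic(state, weights):
    """Evaluate a potential heuristic: a weighted sum over features.

    state: dict variable -> value (factored state representation).
    weights: dict feature -> weight, where an atomic feature is a
    (var, val) pair and a binary feature is a frozenset of two pairs.
    """
    total = 0.0
    for feature, w in weights.items():
        pairs = [feature] if isinstance(feature, tuple) else feature
        if all(state.get(var) == val for var, val in pairs):
            total += w
    return total

# Hypothetical weights for a toy logistics-like state.
weights = {
    ('truck', 'depot'): 2.0,                                  # atomic
    frozenset({('truck', 'depot'), ('pkg', 'depot')}): -1.0,  # binary
}
print(potential_heuristic({'truck': 'depot', 'pkg': 'depot'}, weights))  # 1.0
```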


#8 An Efficient Approach to Model-Based Hierarchical Reinforcement Learning

Authors: Zhuoru Li, Akshay Narayan, Tze-Yun Leong

We propose a model-based approach to hierarchical reinforcement learning that exploits shared knowledge and selective execution at different levels of abstraction, to efficiently solve large, complex problems. Our framework adopts a new transition dynamics learning algorithm that identifies the common action-feature combinations of the subtasks, and evaluates the subtask execution choices through simulation. The framework is sample efficient, and tolerates uncertain and incomplete problem characterization of the subtasks. We test the framework on common benchmark problems and complex simulated robotic environments. It compares favorably against the state-of-the-art algorithms, and scales well in very large problems.


#9 Plan Reordering and Parallel Execution — A Parameterized Complexity View

Authors: Meysam Aghighi, Christer Bäckström

Bäckström has previously studied a number of optimization problems for partial-order plans, like finding a minimum deordering (MCD) or reordering (MCR) and finding the minimum parallel execution length (PPL), all of which are NP-complete. We revisit these problems, applying parameterized complexity analysis rather than standard complexity analysis. We consider various parameters, including both the original and desired size of the plan order, as well as its width and height. Our findings include that MCD and MCR are W[2]-hard and in W[P] when parameterized by the desired order size, and that MCD is fixed-parameter tractable (fpt) when parameterized by the original order size. Problem PPL is fpt if parameterized by the size of the non-concurrency relation, but para-NP-hard in most other cases. We also consider this problem when the number k of agents, or processors, is restricted, finding that this number is a crucial parameter: the problem is fixed-parameter tractable with the order size, the parallel execution length, and k as parameters, but para-NP-hard without k as a parameter.
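
For reference, the standard parameterized-complexity notions used above are:

```latex
\text{fpt: solvable in time } f(k)\cdot n^{O(1)} \text{ for a computable } f,
\qquad
\mathrm{FPT} \subseteq W[1] \subseteq W[2] \subseteq \cdots \subseteq W[P],
```

and a parameterized problem is para-NP-hard if it remains NP-hard even when the parameter is fixed to some constant.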


#10 Optimizing Quantiles in Preference-Based Markov Decision Processes

Authors: Hugo Gilbert, Paul Weng, Yan Xu

In the Markov decision process model, policies are usually evaluated by expected cumulative reward. As this decision criterion is not always suitable, we propose in this paper an algorithm for computing a policy that is optimal for the quantile criterion. Both finite and infinite horizons are considered. Finally, we experimentally evaluate our approach on random MDPs and on a data center control problem.
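
One standard way to state the quantile criterion (the notation here is generic, not necessarily the paper's): for a policy π with random cumulative reward R^π, optimize the τ-quantile of the return distribution rather than its mean:

```latex
q_\tau(\pi) \;=\; \inf\{\, x \in \mathbb{R} : \Pr(R^\pi \le x) \ge \tau \,\},
\qquad
\pi^\ast \in \operatorname*{arg\,max}_{\pi} \; q_\tau(\pi).
```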


#11 Best-First Width Search: Exploration and Exploitation in Classical Planning

Authors: Nir Lipovetzky, Hector Geffner

It has been shown recently that the performance of greedy best-first search (GBFS) for computing plans that are not necessarily optimal can be improved by adding forms of exploration when reaching heuristic plateaus: from random walks to local GBFS searches. In this work, we address this problem using structural exploration methods resulting from the ideas of width-based search. Width-based methods seek novel states, are not goal oriented, and their power has been shown recently in the Atari and GVG-AI video games. We show first that width-based exploration in GBFS is more effective than GBFS with local GBFS search (GBFS-LS), and then proceed to formulate a simple and general computational framework where standard goal-oriented search (exploitation) and width-based search (structural exploration) are combined to yield a search scheme, best-first width search, that is better than both and results in classical planning algorithms that outperform state-of-the-art planners.
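
A minimal sketch of the novelty-1 test at the heart of width-based exploration: a state counts as novel iff it makes true at least one atom that no previously generated state did. BFWS computes novelty relative to partitions induced by heuristic values, which this sketch omits.

```python
def make_novelty1_check():
    """Novelty-1 test: a state is novel iff it contains at least one
    atom never seen in any earlier state."""
    seen = set()
    def is_novel(state_atoms):
        new = set(state_atoms) - seen
        seen.update(new)
        return bool(new)
    return is_novel

is_novel = make_novelty1_check()
print(is_novel({'at-A', 'holding'}))  # True: every atom is new
print(is_novel({'at-A'}))             # False: nothing unseen
print(is_novel({'at-B', 'holding'}))  # True: 'at-B' is new
```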


#12 An Analysis of Monte Carlo Tree Search

Authors: Steven James, George Konidaris, Benjamin Rosman

Monte Carlo Tree Search (MCTS) is a family of directed search algorithms that has gained widespread attention in recent years. Despite the vast amount of research into MCTS, the effect of modifications on the algorithm, as well as the manner in which it performs in various domains, is still not fully understood. In particular, the effect of using knowledge-heavy rollouts in MCTS remains poorly understood, with surprising results demonstrating that better-informed rollouts often result in worse-performing agents. We present experimental evidence suggesting that, under certain smoothness conditions, uniformly random simulation policies preserve the ordering over action preferences. This explains the success of MCTS despite its common use of these rollouts to evaluate states. We further analyse non-uniformly random rollout policies and describe conditions under which they offer improved performance.
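
A minimal sketch of the evaluation being analysed: estimating a state's value by averaging uniformly random rollouts. The toy chain domain below is hypothetical; it just illustrates that rollout estimates can preserve the ordering over actions even though the rollout policy itself is uninformed.

```python
import random

def random_rollout_value(state, actions, step, reward, horizon=50, n=100):
    """Monte Carlo state evaluation with a uniformly random rollout policy.

    actions(s) -> list of legal actions (empty iff s is terminal);
    step(s, a) -> successor state; reward(s) -> value of the final state.
    """
    total = 0.0
    for _ in range(n):
        s = state
        for _ in range(horizon):
            acts = actions(s)
            if not acts:
                break
            s = step(s, random.choice(acts))
        total += reward(s)
    return total / n

# Toy chain: states 0..10, absorbing at both ends, reward = final position.
acts = lambda s: ['L', 'R'] if 0 < s < 10 else []
step = lambda s, a: s + (1 if a == 'R' else -1)
# Evaluating the two successors of state 5: 'R' (state 6) scores higher
# than 'L' (state 4), so the ordering over actions is preserved.
print(random_rollout_value(6, acts, step, float, n=500))  # around 6
print(random_rollout_value(4, acts, step, float, n=500))  # around 4
```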


#13 Human-Aware Plan Recognition

Author: Hankz Zhuo

Plan recognition aims to recognize target plans given observed actions, with historical plan libraries or domain models in hand. Despite the success of previous plan recognition approaches, they all neglect the impact of human preferences on plans. For example, a kid in a shopping mall might prefer to execute a plan of playing in a water park, while an adult might prefer to execute a plan of having a cup of coffee. Considering human preferences on plans could thus help improve plan recognition accuracy. We assume we are given historical rating scores on a subset of plans provided by humans, together with action sequences observed from humans. We estimate the unknown rating scores from the scores in hand using an off-the-shelf collaborative filtering approach. We then discover plans that best explain the estimated rating scores and observed actions using a skip-gram based approach. In the experiments, we evaluate our approach in three planning domains to demonstrate its effectiveness.


#14 Schematic Invariants by Reduction to Ground Invariants

Author: Jussi Rintanen

Computation of invariants, which are approximate reachability information for state-space search problems such as AI planning, has been considered to be more scalable when using a schematic representation of actions/events rather than an instantiated/ground representation. A disadvantage of schematic algorithms, however, is their complexity, which also leads to high runtimes when the number of schematic events/actions is high. We propose algorithms that reduce the problem of finding schematic invariants to solving a smaller ground problem.


#15 Logical Filtering and Smoothing: State Estimation in Partially Observable Domains

Authors: Brent Mombourquette, Christian Muise, Sheila McIlraith

State estimation is the task of estimating the state of a partially observable dynamical system given a sequence of executed actions and observations. In logical settings, state estimation can be realized via logical filtering, which is exact but can be intractable. We propose logical smoothing, a form of backwards reasoning that works in concert with approximated logical filtering to refine past beliefs in light of new observations. We characterize the notion of logical smoothing together with an algorithm for backwards-forwards state estimation. We also present an approximation of our smoothing algorithm that is space efficient. We prove properties of our algorithms, and experimentally demonstrate their behaviour, contrasting them with state estimation methods for planning. Smoothing and backwards-forwards reasoning are important techniques for reasoning about partially observable dynamical systems, introducing the logical analogue of effective techniques from control theory and dynamic programming.


#16 Accelerated Vector Pruning for Optimal POMDP Solvers

Authors: Erwin Walraven, Matthijs Spaan

Partially Observable Markov Decision Processes (POMDPs) are powerful models for planning under uncertainty in partially observable domains. However, computing optimal solutions for POMDPs is challenging because of the high computational requirements of POMDP solution algorithms. Several algorithms use a subroutine to prune dominated vectors in value functions, which requires a large number of linear programs (LPs) to be solved and represents a large part of the total running time. In this paper we show how the LPs in POMDP pruning subroutines can be decomposed using a Benders decomposition. The resulting algorithm incrementally adds LP constraints and uses only a small fraction of them. Our algorithm significantly improves the performance of existing pruning methods and the commonly used incremental pruning algorithm. Our new variant of incremental pruning is the fastest optimal pruning-based POMDP algorithm.
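
For context, here is a sketch of the classical witness LP that pruning subroutines solve for each vector: find the belief at which the vector beats all others by the largest margin. The vectors below are made up. The paper's contribution, not implemented here, is to solve such LPs via Benders decomposition, adding the constraint rows incrementally rather than all at once.

```python
import numpy as np
from scipy.optimize import linprog

def find_witness_belief(alpha, others):
    """Find the belief where `alpha` beats every vector in `others` by
    the largest margin. `alpha` is dominated iff the margin is <= 0.
    """
    n = len(alpha)
    # Variables: b_1..b_n, delta. Objective: maximize delta.
    c = np.zeros(n + 1); c[-1] = -1.0
    # For each other vector o: b.(o - alpha) + delta <= 0.
    A_ub = np.array([np.append(o - alpha, 1.0) for o in others])
    b_ub = np.zeros(len(others))
    A_eq = np.array([np.append(np.ones(n), 0.0)])  # belief sums to 1
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n + [(None, None)]     # delta is free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:n], res.x[-1]

alphas = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.4, 0.4])]
belief, margin = find_witness_belief(alphas[2], alphas[:2])
print(round(margin, 3))  # -0.1 -> [0.4, 0.4] is dominated and can be pruned
```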


#17 When to Reset Your Keys: Optimal Timing of Security Updates via Learning

Authors: Zizhan Zheng, Ness Shroff, Prasant Mohapatra

Cybersecurity is increasingly threatened by advanced and persistent attacks. As these attacks are often designed to disable a system (or a critical resource, e.g., a user account) repeatedly, it is crucial for the defender to keep updating its security measures to strike a balance between the risk of being compromised and the cost of security updates. Moreover, these decisions often need to be made with limited and delayed feedback due to the stealthy nature of advanced attacks. Beyond targeted attacks, such an optimal timing policy under incomplete information has broad applications in cybersecurity; examples include key rotation, password change, application of patches, and virtual machine refreshing. However, rigorous studies of optimal timing are rare. Further, existing solutions typically rely on a pre-defined attack model that is known to the defender, which is often not the case in practice. In this work, we make an initial effort towards achieving optimal timing of security updates in the face of unknown stealthy attacks. We consider a variant of the influential FlipIt game model with asymmetric feedback and unknown attack time distribution, which provides a general model of consecutive security updates. The defender's problem is then modeled as a time-associative bandit problem with dependent arms. We derive upper-confidence-bound based learning policies that achieve low regret compared with optimal periodic defense strategies that can only be derived when attack time distributions are known.
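
For context, the classical UCB1 index after n total plays, where arm i has been played n_i times with empirical mean reward x̄_i, is shown below; the paper's time-associative bandit with dependent arms calls for more specialized indices, so this only illustrates the basic flavor of the confidence bounds involved.

```latex
\mathrm{UCB}_i \;=\; \bar{x}_i \;+\; \sqrt{\frac{2 \ln n}{n_i}} .
```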


#18 Incorporating Domain-Independent Planning Heuristics in Hierarchical Planning

Authors: Vikas Shivashankar, Ron Alford, David Aha

Heuristics serve as a powerful tool in modern domain-independent planning (DIP) systems by providing critical guidance during the search for high-quality solutions. However, they have not been broadly used with hierarchical planning techniques, which are more expressive and tend to scale better in complex domains by exploiting additional domain-specific knowledge. Complicating matters, we show that for Hierarchical Goal Network (HGN) planning, a goal-based hierarchical planning formalism that we focus on in this paper, any poly-time heuristic that is derived from a delete-relaxation DIP heuristic has to make some relaxation of the hierarchical semantics. To address this, we present a principled framework for incorporating DIP heuristics into HGN planning using a simple relaxation of the HGN semantics we call Hierarchy-Relaxation. This framework allows for computing heuristic estimates of HGN problems using any DIP heuristic in an admissibility-preserving manner. We demonstrate the feasibility of this approach by using the LMCut heuristic to guide an optimal HGN planner. Our empirical results with three benchmark domains demonstrate that simultaneously leveraging hierarchical knowledge and heuristic guidance substantially improves planning performance.


#19 Multi-Agent Path Finding with Delay Probabilities

Authors: Hang Ma, T. K. Satish Kumar, Sven Koenig

Several recently developed Multi-Agent Path Finding (MAPF) solvers scale to large MAPF instances by searching for MAPF plans on two levels: the high-level search resolves collisions between agents, and the low-level search plans paths for single agents under the constraints imposed by the high-level search. We make the following contributions to solve the MAPF problem with imperfect plan execution while keeping average makespans small: First, we formalize the MAPF Problem with Delay Probabilities (MAPF-DP), define valid MAPF-DP plans, and propose the use of robust plan-execution policies for valid MAPF-DP plans to control how each agent proceeds along its path. Second, we discuss two classes of decentralized robust plan-execution policies (called Fully Synchronized Policies and Minimal Communication Policies) that prevent collisions during plan execution for valid MAPF-DP plans. Third, we present a two-level MAPF-DP solver (called Approximate Minimization in Expectation) that generates valid MAPF-DP plans.


#20 Fast SSP Solvers Using Short-Sighted Labeling

Authors: Luis Pineda, Kyle Wray, Shlomo Zilberstein

State-of-the-art methods for solving stochastic shortest-path problems (SSPs) often work by limiting planning to restricted regions of the state space. The resulting problems can then be solved quickly, and the process is repeated during execution when states outside the restricted region are encountered. Typically, these approaches focus on states that are within some distance measure of the start state (e.g., number of actions or probability of being reached). However, such short-sighted approaches make it difficult to propagate information from states that are closer to a goal than to the start state, thus missing opportunities to improve planning. We present an alternative approach in which short-sightedness is used only to decide whether a state should be labeled as solved, while the set of states that can be accounted for during planning remains unrestricted. Based on this idea, we propose the FLARES algorithm and show that it performs consistently well on a wide range of benchmark problems.


#21 Bounding the Probability of Resource Constraint Violations in Multi-Agent MDPs

Authors: Frits de Nijs, Erwin Walraven, Mathijs de Weerdt, Matthijs Spaan

Multi-agent planning problems with constraints on global resource consumption occur in several domains. Existing algorithms for solving Multi-agent Markov Decision Processes can compute policies that meet a resource constraint in expectation, but these policies provide no guarantees on the probability that a resource constraint violation will occur. We derive a method to bound constraint violation probabilities using Hoeffding's inequality. This method is applied to two existing approaches for computing policies satisfying constraints: the Constrained MDP framework and a Column Generation approach. We also introduce an algorithm to adaptively relax the bound up to a given maximum violation tolerance. Experiments on a hard toy problem show that the resulting policies outperform static optimal resource allocations to an arbitrary level. By testing the algorithms on more realistic planning domains from the literature, we demonstrate that the adaptive bound is able to efficiently trade off violation probability with expected value, outperforming state-of-the-art planners.
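
The inequality in question, stated for independent per-agent consumptions X_i ∈ [a_i, b_i] with expected total μ and a resource limit L > μ (how the paper instantiates it per time step and per resource may differ):

```latex
\Pr\!\Big(\sum_{i=1}^{n} X_i \ge L\Big)
\;\le\;
\exp\!\left(\frac{-2\,(L-\mu)^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right),
\qquad
\mu = \mathbb{E}\Big[\sum_{i=1}^{n} X_i\Big].
```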