We study best-effort synthesis under environment assumptions specified in LTL, and show that this problem has exactly the same computational complexity as standard LTL synthesis: it is 2EXPTIME-complete. We provide optimal algorithms for computing best-effort strategies, both for LTL over infinite traces and for LTL over finite traces (i.e., LTLf). The latter are particularly well suited for implementation.
When designing or analyzing multi-agent systems, a fundamental problem is responsibility ascription: specifying which agents are responsible for the joint outcome of their behaviors and to what extent. We model strategic multi-agent interaction as an extensive-form game of imperfect information and define notions of forward (prospective) and backward (retrospective) responsibility. Forward responsibility identifies the responsibility of a group of agents for an outcome along all possible plays, whereas backward responsibility identifies the responsibility along a given play. We further distinguish between strategic and causal backward responsibility, where the former captures the epistemic knowledge of players along a play, while the latter formalizes which players – possibly unknowingly – caused the outcome. A formal connection between the forward and backward notions is established in the case of perfect recall. We further ascribe quantitative responsibility through cooperative game theory. We show through a number of examples that our approach encompasses several prior formal accounts of responsibility attribution.
Abstract argumentation as defined by Dung in his seminal 1995 paper is by now a major research area in knowledge representation and reasoning. The central issues of this paper are the dynamics of abstract argumentation frameworks (AFs) and the syntactical consequences of semantical facts about them. The first main part is devoted to a systematic study of the influence of attackers and supporters on the acceptability status of whole sets and/or single arguments. In particular, we investigate the impact of adding or removing arguments, a line of research that has been pursued for more than a decade. Apart from entirely new results, we revisit, generalize and sum up similar results from the literature. To gain a comprehensive formal and intuitive understanding of the behavior of AFs, we put special effort into comparing different kinds of semantics. We concentrate on classical admissibility-based semantics and also give pointers to semantics based on naivety and on weak admissibility, a recently introduced mediating approach. In the second main part we show how to infer syntactical information from semantical information. For instance, it is well known that if a finite AF possesses no stable extension, then it must contain an odd cycle. In this paper, we even present a characterization of this phenomenon. Moreover, we show that the change in the number of extensions when adding or removing an argument allows one to conclude the existence of certain even or odd cycles in the considered AF without any further information.
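To make the odd-cycle fact concrete, here is a minimal brute-force sketch (ours, not from the paper) that enumerates stable extensions; the attack relation is the hypothetical three-cycle a→b→c→a, the smallest finite AF without a stable extension:

from itertools import combinations

# Toy AF: the odd cycle a -> b -> c -> a.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c"), ("c", "a")}

def is_stable(ext):
    ext = set(ext)
    # conflict-free: no attack between two members of the extension
    if any((x, y) in attacks for x in ext for y in ext):
        return False
    # every argument outside the extension is attacked from inside it
    return all(any((x, y) in attacks for x in ext) for y in args - ext)

stable = [set(e) for k in range(len(args) + 1)
          for e in combinations(sorted(args), k) if is_stable(e)]
print(stable)  # [] : no stable extension, and indeed the AF is an odd cycle

Running the same enumeration on the even cycle a→b→c→d→a instead yields the two stable extensions {a, c} and {b, d}.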
We study the semantics of knowledge in strategic reasoning. Most existing works either implicitly assume that agents do not know one another’s strategies, or that all strategies are known to all; and some works present inconsistent mixes of both features. We put forward a novel semantics for Strategy Logic with Knowledge that cleanly models whose strategies each agent knows. We study how adopting this semantics impacts agents’ knowledge and strategic ability, as well as the complexity of the model-checking problem.
Qualitative Choice Logic (QCL) and Conjunctive Choice Logic (CCL) are formalisms for preference handling, with QCL in particular being well established in the field of AI. So far, analyses of these logics have had to be done on a case-by-case basis, even though the logics share several common features. This calls for a more general choice logic framework, with QCL and CCL as well as some of their derivatives being particular instantiations. We provide such a framework, which allows us, on the one hand, to easily define new choice logics and, on the other hand, to examine properties of different choice logics in a uniform setting. In particular, we investigate strong equivalence, a core concept in non-classical logics for understanding formula simplification, as well as computational complexity. Our analysis also yields new results for QCL and CCL. For example, we show that the main reasoning task regarding preferred models is ϴ₂P-complete for QCL and CCL, while being Δ₂P-complete for a newly introduced choice logic.
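As a toy illustration of the kind of semantics involved (a simplification of QCL's ordered disjunction, not the paper's general framework), the sketch below assigns each ordered disjunction the degree of the first option a model satisfies and prefers models with better degrees; aggregating degrees by sum is our own simplifying assumption, since QCL itself compares degree vectors:

from itertools import product

atoms = ["a", "b", "c"]
# Each formula is an ordered disjunction over atoms, best option first,
# e.g. ("a", "b") stands for "a, or failing that b".
formulas = [("a", "b"), ("b", "c")]

def degree(model, options):
    # degree = position of the first satisfied option, None if unsatisfied
    for i, atom in enumerate(options, start=1):
        if model[atom]:
            return i
    return None

models = [dict(zip(atoms, bits)) for bits in product([True, False], repeat=len(atoms))]
scored = [(m, [degree(m, f) for f in formulas]) for m in models]
scored = [(m, ds) for m, ds in scored if None not in ds]
best = min(sum(ds) for _, ds in scored)
print([m for m, ds in scored if sum(ds) == best])  # the models making both a and b true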
Ontology-mediated query answering (OMQA) employs structured knowledge and automated reasoning in order to facilitate access to incomplete and possibly heterogeneous data. While most research on OMQA adopts (unions of) conjunctive queries as the query language, there has been recent interest in handling queries that involve counting. In this paper, we advance this line of research by investigating cardinality queries (which correspond to Boolean atomic counting queries) coupled with DL-Lite ontologies. Despite its apparent simplicity, we show that such an OMQA setting gives rise to rich and complex behaviour. While we prove that cardinality query answering is tractable (TC0) in data complexity when the ontology is formulated in DL-Lite-core, the problem becomes coNP-hard as soon as role inclusions are allowed. For DL-Lite-pos-H (which allows only positive axioms), we establish a P-coNP dichotomy and pinpoint the TC0 cases; for DL-Lite-core-H (allowing also negative axioms), we identify new sources of coNP complexity and also exhibit L-complete cases. Interestingly, and in contrast to related tractability results, we observe that the canonical model may not give the optimal count value in the tractable cases, which led us to develop an entirely new approach based upon exploring a space of strategies to determine the minimum possible number of query matches.
Discounting future costs and rewards is a common practice in accounting, game theory, and machine learning. In spite of this, existing logics for reasoning about strategies with cost and resource constraints do not account for discounting. The paper proposes a sound and complete logical system for reasoning about budget-constrained strategic abilities that incorporates discounting into its semantics.
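For concreteness, the textbook form of discounting (standard background, not necessarily the paper's exact semantics) aggregates a stream of step costs c_0, c_1, c_2, ... with a discount factor γ:

\[
  \mathrm{cost}_\gamma(c_0 c_1 c_2 \cdots) \;=\; \sum_{t=0}^{\infty} \gamma^{t} c_t, \qquad 0 < \gamma < 1,
\]

which converges for any bounded cost stream because \(\sum_{t} \gamma^{t} = 1/(1-\gamma)\); it is this geometric damping of the future that the proposed logic builds into its semantics.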
Abductive Learning is a framework that combines machine learning with first-order logical reasoning. It allows machine learning models to exploit complex symbolic domain knowledge represented by first-order logic rules. However, in many applications it is challenging to obtain or express the ground-truth domain knowledge explicitly as first-order logic rules, and the only accessible knowledge base is implicitly represented by groundings, i.e., propositions or atomic formulas without variables. This paper proposes Grounded Abductive Learning (GABL) to enhance machine learning models with abductive reasoning over a ground domain knowledge base, which offers inexact supervision through a set of logic propositions. We apply GABL to two weakly supervised learning problems and find that the model's initial accuracy plays a crucial role in learning. The results on a real-world OCR task show that GABL can significantly reduce the data-labeling effort compared to the other methods evaluated.
The use of virtual collections of data is essential in many data and knowledge management tasks. In the literature, the standard way to define virtual data collections is via views, i.e., virtual relations defined using queries. In data and knowledge bases, the notion of views is a staple of data access, data integration and exchange, query optimization, and data privacy. In this work, we study views in Ontology-Based Data Access (OBDA) systems. OBDA is a powerful paradigm for accessing data through an ontology, i.e., a conceptual specification of the domain of interest written using logical axioms. Intuitively, users of an OBDA system interact with the data only through the ontology's conceptual lens. We present a novel framework to express natural and sophisticated forms of views in OBDA systems and introduce fundamental reasoning tasks for these views. We study the computational complexity of these tasks and present classes of views for which these tasks are tractable or at least decidable.
An important issue in ML is to develop approaches that exploit background knowledge T to improve the accuracy and robustness of learned classifiers C. Delegating the classification task to a Boolean circuit Σ exhibiting the same input-output behaviour as C, the problem of exploiting T within C can be viewed as a belief change scenario. However, the usual change operations are not suited to the task of modifying the classifier encoding Σ in a minimal way so as to make it comply with T. To fill this gap, we present a new belief change operation, called rectification. We characterize the family of rectification operators from an axiomatic perspective and exhibit operators from this family. We identify the standard belief change postulates that every rectification operator satisfies and those it does not. We also focus on some computational aspects of rectification and compliance.
Generalized planning aims at finding a general solution for a set of similar planning problems. Abstractions are widely used to solve such problems. However, the connections among these abstraction works remain vague. Thus, to facilitate a deep understanding and further exploration of abstraction approaches for generalized planning, it is important to develop a uniform abstraction framework. Recently, Banihashemi et al. proposed an agent abstraction framework based on the situation calculus. However, the expressiveness of their abstraction framework is limited. In this paper, by extending their framework, we propose a uniform abstraction framework for generalized planning. We formalize a generalized planning problem as a triple of a basic action theory, a trajectory constraint, and a goal. Then we define the concept of a sound abstraction of a generalized planning problem. We show that solutions to a generalized planning problem are closely related to those of its sound abstractions. We also define and analyze the dual notion of complete abstractions. Finally, we review some important abstraction works for generalized planning and show that they can be formalized in our framework.
For many reasoning-heavy tasks with raw inputs, it is challenging to design an appropriate end-to-end pipeline to formulate the problem-solving process. Some modern AI systems, e.g., Neuro-Symbolic Learning, divide the pipeline into sub-symbolic perception and symbolic reasoning, trying to utilise data-driven machine learning and knowledge-driven problem-solving simultaneously. However, these systems suffer from the exponential computational complexity caused by the interface between the two components, where the sub-symbolic learning model lacks direct supervision, and the symbolic model lacks accurate input facts. Hence, they usually focus on learning the sub-symbolic model with a complete symbolic knowledge base while avoiding a crucial problem: where does the knowledge come from? In this paper, we present Abductive Meta-Interpretive Learning (MetaAbd) that unites abduction and induction to learn neural networks and logic theories jointly from raw data. Experimental results demonstrate that MetaAbd not only outperforms the compared systems in predictive accuracy and data efficiency but also induces logic programs that can be re-used as background knowledge in subsequent learning tasks. To the best of our knowledge, MetaAbd is the first system that can jointly learn neural networks from scratch and induce recursive first-order logic theories with predicate invention.
Linear Temporal Logic (LTL) synthesis aims at automatically synthesizing a program that complies with desired properties expressed in LTL. Unfortunately, full LTL synthesis has proved to be computationally too difficult to perform in general. There have been two success stories with LTL synthesis, both having to do with the form of the specification. The first is the GR(1) approach: use safety conditions to determine the possible transitions in a game between the environment and the agent, plus one powerful notion of fairness, Generalized Reactivity(1), or GR(1). The second, inspired by AI planning, focuses on finite-trace temporal synthesis, with LTLf (LTL on finite traces) as the specification language. In this paper we take these two lines of work and bring them together. We first study the case in which we have an LTLf agent goal and a GR(1) assumption. We then add to the framework safety conditions for both the environment and the agent, obtaining a highly expressive yet still scalable form of LTL synthesis.
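For background, the fairness condition in the GR(1) approach has the well-known implication shape (the standard formulation; the paper's precise setup adds the LTLf goal and the safety conditions on top):

\[
  \Bigl(\bigwedge_{i=1}^{m} \mathsf{G}\mathsf{F}\, a_i\Bigr) \;\rightarrow\; \Bigl(\bigwedge_{j=1}^{n} \mathsf{G}\mathsf{F}\, g_j\Bigr),
\]

read as: if every environment assumption a_i holds infinitely often, then every agent guarantee g_j must hold infinitely often. This restricted shape is what allows GR(1) games to be solved efficiently (polynomially in the size of the game arena), in contrast to the 2EXPTIME-completeness of full LTL synthesis.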
Temporal logics over finite traces, such as LTLf and its extension LDLf, have been adopted in several areas, including Business Process Management (BPM), to check properties of processes whose executions have an unbounded, but finite, length. These logics express properties of single traces in isolation; however, especially in BPM, it is also of interest to express properties over the entire log, i.e., properties that relate multiple traces of the log at once. In the case of infinite traces, HyperLTL has been proposed to express such ``hyper'' properties. In this paper, motivated by BPM, we introduce HyperLDLf, a logic that extends LDLf with the hyper features of HyperLTL. We provide a sound, complete and computationally optimal technique, based on DFA manipulation, for the model checking problem in the relevant case where the set of traces (i.e., the log) is a regular language. We illustrate how this form of model checking can be used for verifying logs of business processes and for advanced forms of process mining.
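A standard example of such a hyper property (ours, for illustration, in HyperLTL syntax over infinite traces) quantifies over several traces at once, e.g. observational determinism:

\[
  \forall \pi.\, \forall \pi'.\; \mathsf{G}\,(\mathit{in}_\pi \leftrightarrow \mathit{in}_{\pi'}) \;\rightarrow\; \mathsf{G}\,(\mathit{out}_\pi \leftrightarrow \mathit{out}_{\pi'}),
\]

stating that any two traces that always agree on the input also always agree on the output; in a BPM reading, any two process instances with the same request data must lead to the same decision. HyperLDLf expresses properties of this kind over finite traces.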
Consider a set of agents with initial beliefs and a formal operator for incorporating new information. Now suppose that, for each agent, we have a formula that we would like them to believe. Does there exist a single announcement that will lead all agents to believe the corresponding formula? This paper studies the problem of the existence of such an announcement in the context of model-preference definable revision operators. First, we provide two characterisation theorems for the existence of announcements: one in the general case, the other for total partial orderings. Second, we exploit the characterisation theorems to provide upper bound complexity results. Finally, we also provide matching optimal lower bounds for the Dalal and Ginsberg operators.
The constraint satisfaction problem (CSP) is an important framework in artificial intelligence used to model, e.g., qualitative reasoning problems such as Allen's interval algebra A. There is a strong practical incentive to solve CSPs as efficiently as possible, and the classical complexity of temporal CSPs, including A, is well understood. However, the situation is more dire with respect to running time bounds of the form O(f(n)) (where n is the number of variables), where existing results give a best theoretical upper bound of 2^O(n * log n), leaving a significant gap to the best (conditional) lower bound of 2^o(n). In this paper we narrow this gap by presenting two novel algorithms for temporal CSPs based on dynamic programming. The first algorithm solves temporal CSPs limited to constraints of arity three in O(3^n) time, and we use this algorithm to solve A in O((1.5922n)^n) time. The second algorithm tackles A directly and solves it in O((1.0615n)^n) time, a remarkable improvement over existing methods since no previously published algorithm belongs to O((cn)^n) for any c. We also extend the latter algorithm to higher-dimensional box algebras, where we obtain the first explicit upper bound.
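To see why bounds of the form O((cn)^n) are meaningful here, note that the generic upper bound unfolds as

\[
  2^{O(n \log n)} \;=\; \bigl(2^{\log n}\bigr)^{O(n)} \;=\; n^{O(n)},
\]

so all such algorithms live in the same coarse class n^{O(n)}; pinning down an explicit small constant c in O((cn)^n), as the two algorithms above do, is a natural way to narrow the gap towards the (conditional) 2^o(n) lower bound.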
Argumentation is a widely applied framework for modeling and evaluating arguments and the reasoning about them, with various applications. Popular frameworks are abstract argumentation (Dung's framework) and logic-based argumentation (Besnard-Hunter's framework). Their computational complexity has been studied quite in-depth. Incorporating treewidth into the complexity analysis is particularly interesting, as argumentation systems oftentimes employ SAT-based solvers, which can solve instances of low treewidth fast. In this paper, we address whether one can design reductions from argumentation problems to SAT problems while linearly preserving the treewidth, which results in decomposition-guided (DG) reductions. It turns out that the linear treewidth overhead caused by our DG reductions cannot be significantly improved under reasonable assumptions. Finally, we consider logic-based argumentation and establish new upper bounds using DG reductions, as well as lower bounds.
We consider the problem of learning a concept or a query in the presence of an ontology formulated in the description logic ELr, in Angluin's framework of active learning, which allows the learning algorithm to interactively query an oracle (such as a domain expert). We show that the following can be learned in polynomial time: (1) EL-concepts, (2) symmetry-free ELI-concepts, and (3) conjunctive queries (CQs) that are chordal, symmetry-free, and of bounded arity. In all cases, the learner can pose to the oracle membership queries based on ABoxes and equivalence queries that ask whether a given concept/query from the considered class is equivalent to the target. The restriction to bounded arity in (3) can be removed when we admit unrestricted CQs in equivalence queries. We also show that EL-concepts are not polynomial-query learnable in the presence of ELI-ontologies.
Given a specification φ(X, Y) over inputs X and output Y, defined over a background theory T, the problem of program synthesis is to design a program f such that Y = f(X) satisfies the specification φ. Over the past decade, syntax-guided synthesis (SyGuS) has emerged as a dominant approach to program synthesis, where in addition to the specification φ, the end-user also specifies a grammar L to aid the underlying synthesis engine. This paper investigates the feasibility of synthesis techniques without a grammar, a sub-class we call T-constrained synthesis. We show that T-constrained synthesis can be reduced to DQF(T), i.e., to the problem of finding a witness of a dependency quantified formula modulo theory. When the underlying theory is the theory of bitvectors, the corresponding DQF problem can be further reduced to Dependency Quantified Boolean Formulas (DQBF). We rely on the progress in DQBF solving to design DQBF-based synthesizers that outperform domain-specific program synthesis techniques, thereby positioning DQBF as a core representation language for program synthesis. Our empirical analysis shows that T-constrained synthesis can achieve significantly better performance than syntax-guided approaches. Furthermore, general-purpose DQBF solvers perform on par with domain-specific synthesis techniques.
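As a classic illustration (not an example from the paper), consider synthesizing the maximum of two values: the specification

\[
  \varphi(x, y, f) \;\equiv\; f(x,y) \ge x \;\wedge\; f(x,y) \ge y \;\wedge\; \bigl(f(x,y) = x \,\vee\, f(x,y) = y\bigr)
\]

admits f(x,y) = ite(x ≥ y, x, y) as a solution. In SyGuS the user would additionally supply a grammar L of candidate terms (say, ite-expressions over comparisons); T-constrained synthesis drops L and keeps only the background theory T, which is what enables the reduction to DQF(T).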
In this note, we introduce the local version of the operator for belief promotion proposed by Schwind et al. We propose a set of postulates and provide a representation theorem that characterizes the proposal. This family of operators is related to belief promotion in the same way that updating is related to revision, and we provide several results that allow us to show this relationship formally. Furthermore, we also show the relationship of the proposed operator with features of credibility-limited revision theory.
Plan execution on a mobile robot is inherently error-prone, as the robot needs to act in a physical world that it can never completely control. If an error occurs during execution, the true world state is unknown, as a failure may have unobservable consequences. One approach to dealing with such failures is diagnosis, where the true world state is determined by identifying a set of faults based on sensed observations. In this paper, we present a novel approach to explanatory diagnosis, based on the assumption that most failures occur due to some robot hardware failure. We model the robot platform components with state machines and formulate action variants for the robots' actions, modelling different fault modes. We cast diagnosis as planning, using a top-k planning approach to determine possible diagnosis candidates, and then use active diagnosis to find out which of those candidates is the true diagnosis. Finally, based on the platform model, we recover from the failure that occurred so that the robot can continue to operate. We evaluate our approach in a logistics robots scenario by comparing it to having no diagnosis and to diagnosis without platform models, showing a significant improvement over both alternatives.
In recent years, temporal knowledge graph (TKG) reasoning has received significant attention. Most existing methods assume that all timestamps and corresponding graphs are available during training, which makes it difficult to predict future events. To address this issue, recent works learn to infer future events based on historical information. However, these methods do not comprehensively consider the latent patterns behind temporal changes that are needed to pass historical information selectively, update representations appropriately, and predict events accurately. In this paper, we propose the Historical Information Passing (HIP) network to predict future events. The HIP network passes information from temporal, structural and repetitive perspectives, which are used to model the temporal evolution of events, the interactions of events at the same time step, and the known events, respectively. In particular, our method considers the updating of relation representations and adopts three scoring functions corresponding to the above dimensions. Experimental results on five benchmark datasets show the superiority of the HIP network, and the significant improvements on Hits@1 show that our method can more accurately predict what is going to happen.
We introduce a multi-agent, dynamic extension of abstract argumentation frameworks (AFs), strongly inspired by epistemic logic, where agents have only partial information about the conflicts between arguments. These frameworks can be used to model a variety of situations, for instance those in which agents have bounded logical resources and therefore fail to spot some of the actual attacks, or those where some arguments are not explicitly and fully stated (enthymematic argumentation). Moreover, we include second-order knowledge and common knowledge of the attack relation in our structures (where the latter accounts for the state of the debate), so as to reason about different kinds of persuasion and about strategic features. These multi-agent AFs, as well as their updates with public announcements of attacks (more concretely, the effects of these updates on the acceptability of an argument), can be described using S5-PAL, a well-known dynamic-epistemic logic. We also discuss how to extend our proposal to capture arbitrary higher-order attitudes and uncertainty.
Given a knowledge base and an observation as a set of facts, ABox abduction aims at computing a hypothesis that, when added to the knowledge base, is sufficient to entail the observation. In signature-based ABox abduction, the hypothesis is further required to use only names from a given set. This form of abduction has applications such as diagnosis, KB repair, or explaining missing entailments. It is possible that hypotheses for a given observation only exist if we admit the use of fresh individuals and/or complex concepts built from the given signature, which most approaches for ABox abduction so far do not allow, or allow only with restrictions. In this paper, we investigate the computational complexity of this form of abduction---allowing either fresh individuals, complex concepts, or both---for various description logics, and give size bounds on the hypotheses when they exist.
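As a small illustration (a hypothetical example of ours, not taken from the paper) of what these relaxations look like, consider a knowledge base, observation, and signature

\[
  \mathcal{K} = \{\, \exists \mathsf{worksFor}.\top \sqsubseteq \mathsf{Employed} \,\}, \qquad
  \mathcal{O} = \{\, \mathsf{Employed}(a) \,\}, \qquad
  \Sigma = \{ \mathsf{worksFor} \}.
\]

A hypothesis with a fresh individual b, namely H = { worksFor(a, b) }, entails the observation, as does a hypothesis using a complex concept over Σ, namely H = { (∃worksFor.⊤)(a) }; which of these relaxations is admitted is exactly the design choice whose complexity the paper maps out.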
Recently, novel ILP systems under the answer set semantics have been proposed, some of which are robust to noise and scalable over large hypothesis spaces. One such system is FastLAS, which is significantly faster than other state-of-the-art ASP-based ILP systems. FastLAS is, however, only capable of Observational Predicate Learning (OPL), where the learned hypothesis defines predicates that are directly observed in the examples. It cannot learn knowledge that is only indirectly observable, such as the causes of observed events. This class of problems, referred to as non-OPL, is known to be difficult to handle in the context of non-monotonic semantics. Solving non-OPL learning tasks whilst preserving scalability is a challenging open problem. We address this problem with a new abductive method for translating the examples of a non-OPL task into a set of examples, called possibilities, such that the original example is covered iff at least one of the possibilities is covered. This new method allows an ILP system capable of performing OPL tasks to be "upgraded" to solve non-OPL tasks. In particular, we present our new FastNonOPL system, which upgrades FastLAS with the new possibility-generation method. We compare it to other state-of-the-art ASP-based ILP systems capable of solving non-OPL tasks, showing that FastNonOPL is significantly faster, and in many cases more accurate, than these other systems.