IJCAI.2018 - Knowledge Representation and Reasoning

| Total: 44

#1 Query Answering in Propositional Circumscription [PDF] [Copy] [Kimi] [REL]

Author: Mario Alviano

Propositional circumscription defines a preference relation over the models of a propositional theory, so that models that are subset-minimal on the interpretation of a set of objective atoms are preferred. Due to this preference relation, the complexity of several computational tasks increases by one level in the polynomial hierarchy; among them is query answering, which amounts to deciding whether there is an optimal model satisfying the query. A complete algorithm for query answering is obtained by searching for a model, not necessarily an optimal one, that satisfies the query and such that no model falsifying the query is more preferred. If the query or its complement is among the objective atoms, the algorithm has a simpler behavior, which is also described in the paper. Moreover, an incomplete algorithm is obtained by searching for a model satisfying both the query and an objective atom that is unit-implied by the theory extended with the complement of the query. A prototypical implementation is tested on instances from the 2nd International Competition on Computational Models of Argumentation (ICCMA'17).
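
The preference relation and the query answering task can be made concrete with a small brute-force sketch (not the counterexample-driven algorithm of the paper): it assumes a toy CNF encoding of the theory, enumerates all models, keeps those that are subset-minimal on the objective atoms, and checks whether some preferred model satisfies the query.

```python
from itertools import product

def models(clauses, atoms):
    """Enumerate assignments (dicts) over `atoms` satisfying all CNF clauses.
    A clause is a set of literals; a literal is (atom, polarity)."""
    for bits in product([False, True], repeat=len(atoms)):
        m = dict(zip(atoms, bits))
        if all(any(m[a] == pol for a, pol in clause) for clause in clauses):
            yield m

def preferred_models(clauses, atoms, objective):
    """Models that are subset-minimal on the objective atoms."""
    ms = list(models(clauses, atoms))
    def obj(m):
        return {a for a in objective if m[a]}
    return [m for m in ms if not any(obj(n) < obj(m) for n in ms)]

def credulous_query(clauses, atoms, objective, query):
    """Is there a preferred (optimal) model in which the query atom is true?"""
    return any(m[query] for m in preferred_models(clauses, atoms, objective))

# Example: theory {a or b}, objective atoms {a, b}.
# The preferred models are {a} and {b}; the query "a" is credulously entailed.
atoms = ["a", "b"]
clauses = [{("a", True), ("b", True)}]
print(credulous_query(clauses, atoms, ["a", "b"], "a"))  # True
```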

#2 Enhancing Existential Rules by Closed-World Variables [PDF] [Copy] [Kimi] [REL]

Authors: Giovanni Amendola ; Nicola Leone ; Marco Manna ; Pierfrancesco Veltri

Existential rules generalize Datalog with existential quantification in the head. Natively, Datalog is interpreted under a closed-world semantics, while existential rules typically employ the open-world assumption. The interpretation domain in the latter case is enlarged by infinitely many "anonymous" individuals. Then, in any rule, each variable ranges over all individuals, even if not needed or required. In this paper, we enhance existential rules by closed-world variables to consciously reason on the properties of "known" (non-anonymous) and arbitrary individuals in different ways. Accordingly, we uniformly generalize the basic classes of existential rules that ensure decidability of ontology-based query answering. For them, after observing that decidability is preserved, we prove that a strict increase in expressiveness is gained, and in most cases the computational complexity is not altered.

#3 Explainable Certain Answers [PDF] [Copy] [Kimi] [REL]

Authors: Giovanni Amendola ; Leonid Libkin

When a dataset is not fully specified and can represent many possible worlds, one commonly answers queries by computing certain answers to them. A natural way of defining certainty is to say that an answer is certain if it is consistent with the query answers in all possible worlds and is furthermore the most informative answer with this property. However, the existence and complexity of such answers is not yet well understood, even for relational databases. Thus in applications one tends to use different notions, essentially the intersection of query answers in possible worlds. However, the justification of such notions has long been questioned. This leads to two problems: are certain answers based on informativeness feasible in applications, and can a clean justification be provided for intersection-based notions? Our goal is to answer both. For the former, we show that such answers may not exist, or may be very large, even in simple cases of querying incomplete data. For the latter, we add the concept of explanations to the notion of informativeness: an explanation not only shows that one object is more informative than another, but also says why this is so. This leads to a modified notion of certainty: explainable certain answers. We present a general framework for reasoning about them, and show that for open and closed world relational databases, they are precisely the common intersection-based notions of certainty.
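
The intersection-based notion discussed above is easy to illustrate; the sketch below assumes possible worlds given explicitly as sets of tuples and queries given as plain Python functions, which is of course far simpler than the relational setting studied in the paper.

```python
def certain_answers(worlds, query):
    """Intersection-based certain answers: tuples returned by `query`
    in every possible world (each world is a set of tuples)."""
    answer_sets = [set(query(w)) for w in worlds]
    result = answer_sets[0]
    for s in answer_sets[1:]:
        result &= s
    return result

# Incomplete database: one fact has an unknown city, modelled by two worlds.
world1 = {("alice", "paris"), ("bob", "rome")}
world2 = {("alice", "paris"), ("bob", "madrid")}

# Query: people with a known location (project on the first column).
people = lambda w: {(name,) for name, _ in w}
print(certain_answers([world1, world2], people))   # {('alice',), ('bob',)}

# Query: (person, city) pairs; only Alice's tuple is certain.
pairs = lambda w: w
print(certain_answers([world1, world2], pairs))    # {('alice', 'paris')}
```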

#4 Compiling Model Representations for Querying Large ABoxes in Expressive DLs [PDF] [Copy] [Kimi] [REL]

Authors: Labinot Bajraktari ; Magdalena Ortiz ; Mantas Simkus

Answering ontology-mediated queries (OMQs) has received much attention in the last decade, but the big gap between practicable algorithms for lightweight ontologies, which are supported by implemented reasoners, and purely theoretical algorithms for expressive ontologies, which are not amenable to implementation, has only increased. Towards narrowing the gap, we propose an algorithm to compile a representation of sets of models for ALCHI ontologies, which is sufficient for answering any monotone OMQ. Rather than reasoning for specific ABoxes, or being fully data-independent, we use generic descriptions of families of ABoxes, given by what we call profiles. Our model compilation algorithm runs on TBoxes and sets of profiles, and supports the incremental addition of new profiles. To illustrate the potential of our approach for OMQ answering, we implement a rewriting into an extension of Datalog for OMQs comprising reachability queries, and provide some promising evaluation results.

#5 Abstraction of Agents Executing Online and their Abilities in the Situation Calculus [PDF] [Copy] [Kimi] [REL]

Authors: Bita Banihashemi ; Giuseppe De Giacomo ; Yves Lespérance

We develop a general framework for abstracting online behavior of an agent that may acquire new knowledge during execution (e.g., by sensing), in the situation calculus and ConGolog. We assume that we have both a high-level action theory and a low-level one that represent the agent's behavior at different levels of detail. In this setting, we define ability to perform a task/achieve a goal, and then show that under some reasonable assumptions, if the agent has a strategy by which she is able to achieve a goal at the high level, then we can refine it into a low-level strategy to do so.

#6 First-Order Rewritability of Frontier-Guarded Ontology-Mediated Queries [PDF] [Copy] [Kimi] [REL]

Authors: Pablo Barceló ; Gerald Berger ; Carsten Lutz ; Andreas Pieris

We focus on ontology-mediated queries (OMQs) based on (frontier-)guarded existential rules and (unions of) conjunctive queries, and we investigate the problem of FO-rewritability, i.e., whether an OMQ can be rewritten as a first-order query. We adopt two different approaches. The first approach employs standard two-way alternating parity tree automata. Although it does not lead to a tight complexity bound, it provides a transparent solution based on widely known tools. The second approach relies on a sophisticated automata model, known as cost automata. This allows us to show that our problem is 2EXPTIME-complete. In both approaches, we provide semantic characterizations of FO-rewritability that are of independent interest.

#7 Single-Shot Epistemic Logic Program Solving [PDF] [Copy] [Kimi] [REL]

Authors: Manuel Bichler ; Michael Morak ; Stefan Woltran

Epistemic Logic Programs (ELPs) are an extension of Answer Set Programming (ASP) with epistemic operators that allow for a form of meta-reasoning, that is, reasoning over multiple possible worlds. Existing ELP solving approaches generally rely on making multiple calls to an ASP solver in order to evaluate the ELP. However, in this paper, we show that there also exists a direct translation from ELPs into non-ground ASP with bounded arity. The resulting ASP program can thus be solved in a single shot. We then implement this encoding, using recently proposed techniques for handling large non-ground ASP rules, in a prototype ELP solving system. This solver exhibits competitive performance on a set of ELP benchmark instances.

#8 Inconsistency-Tolerant Ontology-Based Data Access Revisited: Taking Mappings into Account [PDF] [Copy] [Kimi] [REL]

Author: Meghyn Bienvenu

Inconsistency-tolerant query answering in the presence of ontologies has received considerable attention in recent years. However, existing work assumes that the data is expressed using the vocabulary of the ontology and is therefore not directly applicable to ontology-based data access (OBDA), where relational data is connected to the ontology via mappings. This motivates us to revisit existing results in the wider context of OBDA with mappings. After formalizing the problem, we perform a detailed analysis of the data complexity of inconsistency-tolerant OBDA for ontologies formulated in DL-Lite and other data-tractable description logics, considering three different semantics (AR, IAR, and brave), two notions of repairs (subset and symmetric difference), and two classes of global-as-view (GAV) mappings. We show that adding plain GAV mappings does not affect data complexity, but there is a jump in complexity if mappings with negated atoms are considered.
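
The three semantics can be illustrated with a brute-force toy that ignores the ontology and the mappings entirely: repairs are taken to be the maximal subsets of a small ABox consistent with a hypothetical class-disjointness constraint, and each semantics is evaluated by plain membership.

```python
from itertools import combinations

def consistent(abox, disjoint):
    """`abox` is a set of (class, individual) assertions; `disjoint` lists
    pairs of classes that no individual may belong to simultaneously."""
    return not any((c1, x) in abox and (c2, x) in abox
                   for c1, c2 in disjoint for _, x in abox)

def repairs(abox, disjoint):
    """All maximal consistent subsets of the ABox (brute force)."""
    found = []
    for k in range(len(abox), -1, -1):
        for sub in combinations(abox, k):
            s = set(sub)
            if consistent(s, disjoint) and not any(s < t for t in found):
                found.append(s)
    return found

abox = {("Student", "ann"), ("Professor", "ann"), ("Student", "bob")}
reps = repairs(abox, [("Student", "Professor")])

for q in [("Student", "bob"), ("Student", "ann")]:
    ar    = all(q in r for r in reps)          # AR: holds in every repair
    iar   = q in set.intersection(*reps)       # IAR: holds in the intersection
    brave = any(q in r for r in reps)          # brave: holds in some repair
    print(q, ar, iar, brave)
# ('Student', 'bob') True True True
# ('Student', 'ann') False False True
```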

#9 Actual Causality in a Logical Setting [PDF] [Copy] [Kimi] [REL]

Author: Alexander Bochman

We provide a definition of actual causation in the logical framework of the causal calculus, which is based on a causal version of the well-known NESS (or INUS) condition. We compare our definition with other, mainly counterfactual, approaches on standard examples. On the way, we explore general capabilities of the logical representation for structural equation models of causation and beyond.

#10 Exploiting Justifications for Lazy Grounding of Answer Set Programs [PDF] [Copy] [Kimi] [REL]

Authors: Bart Bogaerts ; Antonius Weinzierl

Answer set programming (ASP) is an established knowledge representation formalism. Lazy grounding avoids the so-called grounding bottleneck of ASP by interleaving grounding and solving; this technique was recently extended to work with conflict-driven clause learning. Unfortunately, it often happens that such a lazy grounding ASP system, at the fixpoint of the evaluation, arrives at an assignment that contains literals that are true but unjustified. The system then is unable to determine the actual causes of the situation and falls back to chronological backtracking, potentially wasting an exponential amount of time. In this paper, we show how top-down query mechanisms can be used to analyze the situation, learn a new clause or nogood, and backjump further in the search tree. Contributions include a rephrasing of lazy grounding in terms of justifications and algorithms to construct relevant justifications without grounding. Initial experiments indicate that the newly developed techniques indeed allow for an exponential speed-up.

#11 Fast Compliance Checking in an OWL2 Fragment [PDF] [Copy] [Kimi] [REL]

Author: Piero A. Bonatti

We illustrate a formalization of data usage policies in a fragment of OWL2. It can be used to encode (i) a company's data protection policy, (ii) data subjects' consent to data processing, and (iii) part of the GDPR (the forthcoming European General Data Protection Regulation). Then a company's policy can be checked for compliance with data subjects' consent and with part of the GDPR by means of subsumption queries. We provide a complete and tractable structural subsumption algorithm for compliance checking and prove the intractability of a natural generalization of the policy language.

#12 Relevance in Structured Argumentation [PDF] [Copy] [Kimi] [REL]

Authors: AnneMarie Borg ; Christian Straßer

We study properties related to relevance in non-monotonic consequence relations obtained by systems of structured argumentation. Relevance desiderata concern the robustness of a consequence relation under the addition of irrelevant information. For an account of what (ir)relevance amounts to we use syntactic and semantic considerations. Syntactic criteria have been proposed in the domain of relevance logic and were recently used in argumentation theory under the names of non-interference and crash-resistance. The basic idea is that the conclusions of a given argumentative theory should be robust under adding information that shares no propositional variables with the original database. Some semantic relevance criteria are known from non-monotonic logic. For instance, cautious monotony states that if we obtain certain conclusions from an argumentation theory, we may expect to still obtain the same conclusions if we add some of them to the given database. In this paper we investigate properties of structured argumentation systems that warrant relevance desiderata.

#13 Learning Conceptual Space Representations of Interrelated Concepts [PDF] [Copy] [Kimi] [REL]

Authors: Zied Bouraoui ; Steven Schockaert

Several recently proposed methods aim to learn conceptual space representations from large text collections. These learned representations associate each object from a given domain of interest with a point in a high-dimensional Euclidean space, but they do not model the concepts from this domain, and can thus not directly be used for categorization and related cognitive tasks. A natural solution is to represent concepts as Gaussians, learned from the representations of their instances, but this can only be reliably done if sufficiently many instances are given, which is often not the case. In this paper, we introduce a Bayesian model which addresses this problem by constructing informative priors from background knowledge about how the concepts of interest are interrelated with each other. We show that this leads to substantially better predictions in a knowledge base completion task.
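
The core idea, that priors built from related concepts compensate for scarce instances, can be sketched with a conjugate normal prior on the concept mean; this is a simplification of the paper's Bayesian model, and the embeddings and the prior_strength parameter below are purely hypothetical.

```python
import numpy as np

def concept_mean(instances, prior_mean, prior_strength=5.0):
    """MAP estimate of a concept's Gaussian mean with a conjugate normal
    prior centred on `prior_mean` (e.g. built from the means of related
    concepts). With few instances the prior dominates; with many, the
    sample mean takes over."""
    instances = np.asarray(instances, dtype=float)
    n = len(instances)
    sample_mean = instances.mean(axis=0) if n else np.zeros_like(prior_mean)
    return (prior_strength * prior_mean + n * sample_mean) / (prior_strength + n)

# Hypothetical 2-D embeddings: many instances of "dog", few of "wolf".
dog = np.array([[1.0, 0.2], [0.9, 0.1], [1.1, 0.3], [1.0, 0.0]])
wolf = np.array([[1.6, 0.9]])            # a single observed instance

prior = dog.mean(axis=0)                 # prior built from a related concept
print(concept_mean(wolf, prior))         # pulled towards the "dog" mean
print(concept_mean(dog, prior))          # essentially the sample mean of "dog"
```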

#14 Embracing Change by Abstraction Materialization Maintenance for Large ABoxes [PDF] [Copy] [Kimi] [REL]

Authors: Markus Brenner ; Birte Glimm

Abstraction Refinement is a recently introduced technique that reduces the materialization of an ontology with a large ABox to the materialization of a smaller (compressed) "abstraction" of this ontology. In this paper, we show how Abstraction Refinement can be adapted for incremental ABox materialization by combining it with the well-known DRed algorithm for materialization maintenance. Such a combination is non-trivial, and preserving soundness and completeness requires more complex abstractions already for Horn ALCHI. Nevertheless, we show that significant benefits can be obtained for synthetic and real-world ontologies.
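
DRed itself is easy to sketch for ground Datalog-style rules; the toy below shows the over-delete/re-derive phases on explicit fact sets and contains none of the abstraction machinery of the paper.

```python
def materialize(facts, rules):
    """Fixpoint of ground rules; a rule is (frozenset_of_body_atoms, head_atom)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

def dred_delete(base, rules, removed):
    """DRed maintenance after deleting `removed` base facts: over-delete
    everything that might depend on them, then re-derive facts that still
    have an alternative derivation."""
    old = materialize(base, rules)
    new_base = set(base) - set(removed)

    deleted = set(removed) & old
    changed = True
    while changed:                       # over-deletion
        changed = False
        for body, head in rules:
            if body <= old and head in old and body & deleted and head not in deleted:
                deleted.add(head)
                changed = True

    remaining = old - deleted
    changed = True
    while changed:                       # re-derivation
        changed = False
        for f in list(deleted):
            if f in new_base or any(head == f and body <= remaining
                                    for body, head in rules):
                deleted.remove(f)
                remaining.add(f)
                changed = True
    return remaining

rules = [(frozenset({"a"}), "c"), (frozenset({"b"}), "c"), (frozenset({"c"}), "d")]
print(dred_delete({"a", "b"}, rules, {"a"}))   # {'b', 'c', 'd'}
print(materialize({"b"}, rules))               # same result, recomputed from scratch
```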

#15 The Complexity of Limited Belief Reasoning—The Quantifier-Free Case [PDF] [Copy] [Kimi] [REL]

Authors: Yijia Chen ; Abdallah Saffidine ; Christoph Schwering

The classical view of epistemic logic is that an agent knows all the logical consequences of their knowledge base. This assumption of logical omniscience is often unrealistic and makes reasoning computationally intractable. One approach to avoid logical omniscience is to limit reasoning to a certain belief level, which intuitively measures the reasoning "depth". This paper investigates the computational complexity of reasoning with belief levels. First we show that while reasoning remains tractable if the level is constant, the complexity jumps to PSPACE-complete, that is, beyond classical reasoning, when the belief level is part of the input. Then we further refine the picture using parameterized complexity theory to investigate how the belief level and the number of non-logical symbols affect the complexity.

#16 Belief Update in the Horn Fragment [PDF] [Copy] [Kimi] [REL]

Authors: Nadia Creignou ; Adrian Haret ; Odile Papini ; Stefan Woltran

In line with recent work on belief change in fragments of propositional logic, we study belief update in the Horn fragment. We start from the standard KM postulates used to axiomatize belief update operators; these postulates lend themselves to semantic characterizations in terms of partial (resp. total) preorders on possible worlds. Since the Horn fragment is not closed under disjunction, the standard postulates have to be adapted for the Horn fragment. Moreover, a restriction on the preorders (i.e., Horn compliance) and additional postulates are needed to obtain sensible characterizations for the Horn fragment, and this leads to our main contribution: a representation result which shows that the class of update operators captured by Horn compliant partial (resp. total) preorders over possible worlds is precisely that given by the adapted and augmented Horn update postulates. With these results at hand, we provide concrete Horn update operators and are able to shed light on Horn revision operators based on partial preorders.
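
A classic KM update operator is easy to prototype by brute force over possible worlds; the sketch below implements a Winslett-style minimal-change update with symmetric-difference minimality and does not model the Horn fragment or Horn compliance.

```python
from itertools import product

ATOMS = ["p", "q", "r"]

def worlds(formula):
    """All assignments (as frozensets of true atoms) satisfying `formula`,
    given as a Python predicate over a dict of truth values."""
    out = []
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if formula(v):
            out.append(frozenset(a for a in ATOMS if v[a]))
    return out

def update(kb, mu):
    """Possible-models (PMA) update: for each world of the KB, keep the
    worlds of `mu` whose symmetric difference with it is subset-minimal."""
    result = set()
    for w in worlds(kb):
        diffs = {m: w ^ m for m in worlds(mu)}
        minimal = [m for m in diffs
                   if not any(diffs[n] < diffs[m] for n in diffs)]
        result.update(minimal)
    return result

# KB: p and q hold, r is unknown.  Update: q is no longer true.
kb = lambda v: v["p"] and v["q"]
mu = lambda v: not v["q"]
print(update(kb, mu))   # worlds with p, without q, and r as in the original world
```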

#17 A Study of Argumentative Characterisations of Preferred Subtheories [PDF] [Copy] [Kimi] [REL]

Authors: Marcello D'Agostino ; Sanjay Modgil

Classical logic argumentation (Cl-Arg) under the stable semantics yields argumentative characterisations of non-monotonic inference in Preferred Subtheories. This paper studies these characterisations under both the standard approach to Cl-Arg, and a recent dialectical approach that is provably rational under resource bounds. Two key contributions are made. Firstly, the preferred extensions are shown to coincide with the stable extensions. This means that algorithms and proof theories for the admissible semantics can now be used to decide credulous inference in Preferred Subtheories. Secondly, we show that as compared with the standard approach, the grounded semantics applied to the dialectical approach more closely approximates sceptical inference in Preferred Subtheories.
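
For background, stable and preferred extensions do not coincide on arbitrary argumentation frameworks, which is what makes the coincidence result notable; the brute-force sketch below computes both on small abstract frameworks and is not specific to Cl-Arg.

```python
from itertools import combinations

def conflict_free(s, attacks):
    return not any((a, b) in attacks for a in s for b in s)

def acceptable(a, s, attacks, args):
    """a is defended by s: every attacker of a is attacked by some member of s."""
    return all(any((d, b) in attacks for d in s)
               for b in args if (b, a) in attacks)

def admissible(s, attacks, args):
    return conflict_free(s, attacks) and all(acceptable(a, s, attacks, args) for a in s)

def extensions(args, attacks):
    adm = [set(s) for k in range(len(args) + 1)
           for s in combinations(args, k) if admissible(set(s), attacks, args)]
    preferred = [s for s in adm if not any(s < t for t in adm)]
    stable = [s for s in adm
              if all(a in s or any((b, a) in attacks for b in s) for a in args)]
    return preferred, stable

# Even cycle: preferred and stable extensions coincide ({a} and {b}).
print(extensions({"a", "b"}, {("a", "b"), ("b", "a")}))
# Self-attacking argument: the only preferred extension is empty, no stable one.
print(extensions({"a"}, {("a", "a")}))
```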

#18 Game Description Language and Dynamic Epistemic Logic Compared [PDF] [Copy] [Kimi] [REL]

Authors: Thorsten Engesser ; Robert Mattmüller ; Bernhard Nebel ; Michael Thielscher

Several different frameworks have been proposed to model and reason about knowledge in dynamic multi-agent settings, among them the logic-programming-based game description language GDL-III, and dynamic epistemic logic (DEL), based on possible-worlds semantics. GDL-III and DEL have complementary strengths and weaknesses in terms of ease of modeling and simplicity of semantics. In this paper, we formally study the expressiveness of GDL-III vs. DEL. We clarify the commonalities and differences between those languages, demonstrate how to bridge the differences where possible, and identify large fragments of GDL-III and DEL that are equivalent in the sense that they can be used to encode games or planning tasks that admit the same legal action sequences. We prove the latter by providing compilations between those fragments of GDL-III and DEL.

#19 Probabilistic Bipolar Abstract Argumentation Frameworks: Complexity Results [PDF] [Copy] [Kimi] [REL]

Authors: Bettina Fazzinga ; Sergio Flesca ; Filippo Furfaro

Probabilistic Bipolar Abstract Argumentation Frameworks (prBAFs), combining the possibility of specifying supports between arguments with a probabilistic modeling of the uncertainty, are considered, and the complexity of the fundamental problem of computing extensions' probabilities is addressed. The most popular semantics of supports and extensions are considered, as well as different paradigms for defining the probabilistic encoding of the uncertainty. Interestingly, the presence of supports, which does not alter the complexity of verifying extensions in the deterministic case, is shown to introduce a new source of complexity in some probabilistic settings, for which tractable cases are also identified.
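
The exhaustive baseline for computing extension probabilities can be sketched for plain probabilistic frameworks without supports, assuming independent argument probabilities and certain attacks: sum the probabilities of the induced sub-frameworks in which a candidate set is a stable extension. The paper's bipolar frameworks add supports, which this toy omits.

```python
from itertools import combinations

def is_stable(ext, present, attacks):
    """`ext` is stable in the sub-framework induced by `present`:
    conflict-free and attacking every present argument outside it."""
    if not ext <= present:
        return False
    if any((a, b) in attacks for a in ext for b in ext):
        return False
    return all(any((a, b) in attacks for a in ext) for b in present - ext)

def prob_stable(ext, arg_probs, attacks):
    """Probability that `ext` is a stable extension, with each argument
    included independently with its probability (attacks kept as certain)."""
    args = list(arg_probs)
    total = 0.0
    for r in range(len(args) + 1):
        for present in map(set, combinations(args, r)):
            p = 1.0
            for a in args:
                p *= arg_probs[a] if a in present else 1 - arg_probs[a]
            if is_stable(set(ext), present, attacks):
                total += p
    return total

arg_probs = {"a": 0.9, "b": 0.5}
attacks = {("a", "b")}
print(prob_stable({"a"}, arg_probs, attacks))  # 0.9: whenever "a" is present
```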

#20 From Conjunctive Queries to Instance Queries in Ontology-Mediated Querying [PDF] [Copy] [Kimi] [REL]

Authors: Cristina Feier ; Carsten Lutz ; Frank Wolter

We consider ontology-mediated queries (OMQs) based on expressive description logics of the ALC family and (unions) of conjunctive queries, studying the rewritability into OMQs based on instance queries (IQs). Our results include exact characterizations of when such a rewriting is possible and tight complexity bounds for deciding rewritability. We also give a tight complexity bound for the related problem of deciding whether a given MMSNP sentence (in other words: the complement of a monadic disjunctive Datalog program) is equivalent to a constraint satisfaction problem.

#21 An Empirical Study of Knowledge Tradeoffs in Case-Based Reasoning [PDF] [Copy] [Kimi] [REL]

Authors: Devi Ganesan ; Sutanu Chakraborti

Case-Based Reasoning provides a framework for integrating domain knowledge with data in the form of four knowledge containers, namely Case Base, Vocabulary, Similarity and Adaptation. It is well known in the Case-Based Reasoning community that knowledge can be interchanged between the containers. However, the explicit interplay between them, and how this interchange is affected by the knowledge richness of the underlying domain, is not yet fully understood. We attempt to bridge this gap by proposing footprint size reduction as a measure for quantifying knowledge tradeoffs between containers. The proposed measure is empirically evaluated on synthetic as well as real-world datasets. From a practical standpoint, footprint size reduction provides a unified way of estimating the impact of a given piece of knowledge in any knowledge container, and can also suggest ways of characterizing the nature of domains, ranging from ill-defined to well-defined ones. Our study also makes evident the need for maintenance approaches that go beyond the case base and competence to include other containers and performance objectives.
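
Footprint-style competence measures can be sketched with a greedy set-cover heuristic over a hypothetical solvability relation; footprint size reduction would then compare such footprints before and after knowledge is added to a container. The solvability test and the toy 1-D case base below are assumptions for illustration only.

```python
def coverage(case, case_base, solves):
    """Cases in `case_base` that `case` can solve, given a domain-specific
    (hypothetical) solvability test `solves(solver, target)`."""
    return {c for c in case_base if solves(case, c)}

def footprint(case_base, solves):
    """Greedy footprint: a small subset of cases that together cover the
    whole case base (a standard greedy set-cover heuristic)."""
    uncovered = set(case_base)
    fp = set()
    while uncovered:
        best = max(case_base, key=lambda c: len(coverage(c, uncovered, solves)))
        fp.add(best)
        uncovered -= coverage(best, uncovered, solves)
    return fp

# Toy 1-D domain: a case solves a target if it lies within an adaptation radius.
cases = [1.0, 1.2, 1.4, 5.0, 5.1, 9.0]
solves = lambda solver, target: abs(solver - target) <= 0.5
print(sorted(footprint(cases, solves)))
# [1.0, 5.0, 9.0] -- a smaller footprint suggests that the other containers
# (similarity/adaptation knowledge) are doing more of the work.
```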

#22 Possibilistic ASP Base Revision by Certain Input [PDF] [Copy] [Kimi] [REL]

Authors: Laurent Garcia ; Claire Lefèvre ; Odile Papini ; Igor Stéphan ; Eric Würbel

Belief base revision has been studied within the answer set programming framework. We go a step further by introducing uncertainty and studying belief base revision when beliefs are represented by possibilistic logic programs under possibilistic answer set semantics and revised by certain input. The paper proposes two approaches to rule-based revision operators and presents their semantic characterization in terms of possibility distributions. This semantic characterization allows for equivalently considering the evolution of syntactic logic programs and the evolution of their semantic content. It then studies the logical properties of the proposed operators and gives complexity results.

#23 Finite Model Reasoning in Hybrid Classes of Existential Rules [PDF] [Copy] [Kimi] [REL]

Authors: Georg Gottlob ; Marco Manna ; Andreas Pieris

Two paradigmatic restrictions that have been studied for ensuring the decidability of query answering under existential rules are guardedness and stickiness. With the aim of consolidating these restrictions, a flexible condition, called tameness, has been proposed a few years ago, which relies on hybrid reasoning, i.e., a combination of forward and backward procedures. The complexity of query answering under this hybrid class of existential rules is by now well-understood. However, the complexity of finite query answering, i.e., query answering under finite models, has remained an open problem. Closing this problem is the main goal of this work.

#24 Computing Approximate Query Answers over Inconsistent Knowledge Bases [PDF] [Copy] [Kimi] [REL]

Authors: Sergio Greco ; Cristian Molinaro ; Irina Trubitsyna

Consistent query answering is a principled approach for querying inconsistent knowledge bases. It relies on the notion of a "repair", that is, a maximal consistent subset of the facts in the knowledge base. One drawback of this approach is that entire facts are deleted to resolve inconsistency, even if they may still contain useful "reliable" information. To overcome this limitation, we propose a new notion of repair allowing values within facts to be updated for restoring consistency. This more fine-grained repair primitive allows us to preserve more information in the knowledge base. We also introduce the notion of a "universal repair", which is a compact representation of all repairs. Then, we show that consistent query answering in our framework is intractable (coNP-complete). In light of this result, we develop a polynomial time approximation algorithm for computing a sound (but possibly incomplete) set of consistent query answers.
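
The contrast between fact-level and value-level repairs can be shown on a deliberately tiny example with a single hypothetical domain constraint (the paper's framework, with universal repairs and an approximation algorithm, is far richer): deleting a whole violating fact loses the individual, while updating the offending value preserves it.

```python
ALLOWED_DEPTS = {"cs", "math"}          # hypothetical domain constraint on dept

facts = [{"name": "ann", "dept": "physics"},   # violates the constraint
         {"name": "bob", "dept": "cs"}]

def ok(fact):
    return fact["dept"] in ALLOWED_DEPTS or fact["dept"] is None

# Fact-level repair: drop every violating fact entirely.
fact_repair = [f for f in facts if ok(f)]

# Value-level repair: keep the fact, replace the offending value by an
# unknown (None) that stands for "some allowed value".
value_repair = [f if ok(f) else {**f, "dept": None} for f in facts]

names = lambda db: {f["name"] for f in db}
print(names(fact_repair))    # {'bob'}          -- ann is lost with the fact
print(names(value_repair))   # {'ann', 'bob'}   -- ann's membership is preserved
```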

#25 Reverse Engineering Queries in Ontology-Enriched Systems: The Case of Expressive Horn Description Logic Ontologies [PDF] [Copy] [Kimi] [REL]

Authors: Víctor Gutiérrez-Basulto ; Jean Christoph Jung ; Leif Sabellek

We introduce the query-by-example (QBE) paradigm for query answering in the presence of ontologies. Intuitively, QBE permits non-expert users to explore the data by providing examples of the information they (do not) want, which the system then generalizes into a query. Formally, we study the following question: given a knowledge base and sets of positive and negative examples, is there a query that returns all positive but none of the negative examples? We focus on description logic knowledge bases with ontologies formulated in Horn-ALCI and (unions of) conjunctive queries. Our main contributions are characterizations, algorithms and tight complexity bounds for QBE.
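
The QBE question can be illustrated with a toy search over a very restricted query language, conjunctions of class names over explicitly given (entailed) class memberships; the paper of course handles Horn-ALCI ontologies and full (unions of) conjunctive queries.

```python
from itertools import combinations

def qbe(facts, positives, negatives):
    """Search for a conjunction of class names satisfied by every positive
    example and by no negative example.  `facts` maps individuals to the
    set of classes they (are entailed to) belong to."""
    classes = sorted(set.union(*facts.values()))
    for size in range(len(classes) + 1):
        for query in combinations(classes, size):
            q = set(query)
            if all(q <= facts[p] for p in positives) and \
               not any(q <= facts[n] for n in negatives):
                return q
    return None   # no separating query of this simple shape exists

facts = {"ann": {"Student", "Employee"},
         "bob": {"Student"},
         "eve": {"Employee"}}
print(qbe(facts, positives=["ann"], negatives=["bob", "eve"]))
# Student AND Employee: only the conjunction separates ann from both others.
print(qbe(facts, positives=["ann", "bob"], negatives=["eve"]))
# {'Student'}
```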