IJCAI.2023 - Knowledge Representation and Reasoning

| Total: 33

#1 Preferences and Constraints in Abstract Argumentation [PDF] [Copy] [Kimi] [REL]

Authors: Gianvincenzo Alfano ; Sergio Greco ; Francesco Parisi ; Irina Trubitsyna

In recent years there has been increasing interest in extending Dung's framework to facilitate the knowledge representation and reasoning process. In this paper, we present an extension of the Abstract Argumentation Framework (AF) that allows for the representation of preferences over arguments' truth values (3-valued preferences). For instance, we can express a preference stating that extensions where argument a is false (i.e. defeated) are preferred to extensions where argument b is false. Interestingly, such a framework generalizes the well-known Preference-based AF with no additional cost in terms of computational complexity for most of the classical argumentation semantics. Then, we further extend AF by considering both (3-valued) preferences and 3-valued constraints, that is, constraints of the form \varphi \Rightarrow v or v \Rightarrow \varphi, where \varphi is a logical formula and v is a 3-valued truth value. After investigating the complexity of the resulting framework, as both constraints and preferences may represent subjective knowledge of agents, we extend our framework by considering multiple agents and study the complexity of deciding acceptance of arguments in this context.
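
The idea of preferences over 3-valued labellings can be illustrated with a small sketch (the encoding below is ours, not the paper's): each labelling assigns an argument one of three truth values, and a preference between truth-value conditions filters out dominated labellings.

```python
# Hypothetical sketch: selecting preferred 3-valued labellings.
# A labelling maps each argument to one of the truth values
# "true" (accepted), "false" (defeated), or "undec" (undecided).

def dominates(l1, l2, preferences):
    """l1 dominates l2 if some preference (c1, c2) holds with
    l1 satisfying c1 and l2 satisfying c2, and not vice versa."""
    fwd = any(c1(l1) and c2(l2) for c1, c2 in preferences)
    bwd = any(c1(l2) and c2(l1) for c1, c2 in preferences)
    return fwd and not bwd

def preferred(labellings, preferences):
    """Keep the labellings not dominated by any other one."""
    return [l for l in labellings
            if not any(dominates(m, l, preferences) for m in labellings)]

# Two candidate labellings over arguments a and b.
L1 = {"a": "false", "b": "true"}
L2 = {"a": "true", "b": "false"}

# The abstract's example preference: labellings where a is false
# are preferred to labellings where b is false.
prefs = [(lambda l: l["a"] == "false", lambda l: l["b"] == "false")]

print(preferred([L1, L2], prefs))  # only L1 survives
```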

#2 Leveraging Argumentation for Generating Robust Sample-based Explanations [PDF] [Copy] [Kimi] [REL]

Authors: Leila Amgoud ; Philippe Muller ; Henri Trenquier

Explaining predictions made by inductive classifiers has become crucial with the rise of complex models acting more and more as black-boxes. Abductive explanations are one of the most popular types of explanations provided for this purpose. They highlight feature-values that are sufficient for making predictions. In the literature, they are generated by exploring the whole feature space, which is unreasonable in practice. This paper solves the problem by introducing explanation functions that generate abductive explanations from a sample of instances. It shows that such functions should be defined with great care since they cannot satisfy two desirable properties at the same time, namely existence of explanations for every individual decision (success) and correctness of explanations (coherence). The paper provides a parameterized family of argumentation-based explanation functions, each of which satisfies one of the two properties. It studies their formal properties and their experimental behaviour on different datasets.
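
A minimal sketch of the sample-based setting (our own illustration; the paper's actual explanation functions are argumentation-based): a subset-minimal set of features of an instance that is sufficient, on the given sample, to fix the classifier's prediction.

```python
# Hedged sketch of a sample-based abductive explanation:
# a subset-minimal set of features of x that suffices, *on the
# sample*, to determine the black-box prediction for x.

def explains(features, x, sample, predict):
    """Every sampled instance agreeing with x on `features`
    gets the same prediction as x."""
    target = predict(x)
    return all(predict(z) == target
               for z in sample
               if all(z[i] == x[i] for i in features))

def abductive_explanation(x, sample, predict):
    """Greedily drop features while sufficiency on the sample holds."""
    expl = list(range(len(x)))
    for i in list(expl):
        candidate = [j for j in expl if j != i]
        if explains(candidate, x, sample, predict):
            expl = candidate
    return expl

# Toy black-box: predicts 1 iff the first feature is 1.
predict = lambda z: int(z[0] == 1)
sample = [(1, 0), (1, 1), (0, 0), (0, 1)]
x = (1, 1)
print(abductive_explanation(x, sample, predict))  # [0]
```

Because sufficiency is only checked on the sample, such an explanation may fail on unseen instances, which is exactly the success/coherence tension the abstract describes.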

#3 Abstraction of Nondeterministic Situation Calculus Action Theories [PDF] [Copy] [Kimi] [REL]

Authors: Bita Banihashemi ; Giuseppe De Giacomo ; Yves Lesperance

We develop a general framework for abstracting the behavior of an agent that operates in a nondeterministic domain, i.e., where the agent does not control the outcome of the nondeterministic actions, based on the nondeterministic situation calculus and the ConGolog programming language. We assume that we have both an abstract and a concrete nondeterministic basic action theory, and a refinement mapping which specifies how abstract actions, decomposed into agent actions and environment reactions, are implemented by concrete ConGolog programs. This new setting supports strategic reasoning and strategy synthesis, by allowing us to quantify separately over agent actions and environment reactions. We show that if the agent has a (strong FOND) plan/strategy to achieve a goal/complete a task at the abstract level, and it can always execute the nondeterministic abstract actions to completion at the concrete level, then there exists a refinement of it that is a (strong FOND) plan/strategy to achieve the refinement of the goal/task at the concrete level.

#4 Bipolar Abstract Dialectical Frameworks Are Covered by Kleene’s Three-valued Logic [PDF] [Copy] [Kimi] [REL]

Authors: Ringo Baumann ; Maximilian Heinrich

Abstract dialectical frameworks (ADFs) are one of the most powerful generalizations of classical Dung-style argumentation frameworks (AFs). The additional expressive power comes with an increase in computational complexity, namely one level up in the polynomial hierarchy in comparison to their AF counterparts. However, there is one important subclass, so-called bipolar ADFs (BADFs), which are as complex as classical AFs while offering strictly more modeling capacities. This property makes BADFs very attractive from a knowledge representation point of view and is the main reason why this class has received much attention recently. The semantics of ADFs rely on the Gamma-operator which takes as an input a three-valued interpretation and returns a new one. However, in order to obtain the output the original definition requires considering any two-valued completion of a given three-valued interpretation. In this paper we formally prove that in the case of BADFs we may bypass the computationally intensive procedure by applying Kleene's three-valued logic K. We therefore introduce the so-called bipolar disjunctive normal form, which is simply a disjunctive normal form where any used atom possesses either a positive or a negative polarity. We then show that, first, this normal form is expressive enough to represent any BADF and, second, the computation can be done via Kleene's K instead of dealing with two-valued completions. Inspired by the main correspondence result we present some first experiments showing the computational benefit of using Kleene's logic.
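
Kleene's three-valued logic K has a compact implementation; the sketch below (our encoding, with None standing for the undefined value) evaluates a DNF under a three-valued interpretation, the kind of shortcut the paper establishes for bipolar DNFs.

```python
# Minimal sketch of Kleene's three-valued logic K, used to evaluate
# a disjunctive normal form under a three-valued interpretation.
# Truth values: True, False, and None for "undefined".

def kneg(v):
    return None if v is None else (not v)

def kand(values):
    values = list(values)
    if any(v is False for v in values):
        return False
    if all(v is True for v in values):
        return True
    return None

def kor(values):
    values = list(values)
    if any(v is True for v in values):
        return True
    if all(v is False for v in values):
        return False
    return None

def eval_dnf(dnf, interp):
    """dnf: list of terms; a term is a list of (atom, polarity) literals.
    In a *bipolar* DNF each atom occurs with only one polarity overall."""
    lit = lambda a, pos: interp[a] if pos else kneg(interp[a])
    return kor(kand(lit(a, pos) for a, pos in term) for term in dnf)

# (a and b) or (not c), evaluated with b undefined:
dnf = [[("a", True), ("b", True)], [("c", False)]]
print(eval_dnf(dnf, {"a": True, "b": None, "c": False}))  # True
```

Note how the second term already decides the formula even though b is undefined; with c set to True instead, the result would be undefined (None).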

#5 REPLACE: A Logical Framework for Combining Collective Entity Resolution and Repairing [PDF] [Copy] [Kimi] [REL]

Authors: Meghyn Bienvenu ; Gianluca Cima ; Víctor Gutiérrez-Basulto

This paper considers the problem of querying dirty databases, which may contain both erroneous facts and multiple names for the same entity. While both of these data quality issues have been widely studied in isolation, our contribution is a holistic framework for jointly deduplicating and repairing data. Our REPLACE framework follows a declarative approach, utilizing logical rules to specify under which conditions a pair of entity references can or must be merged and logical constraints to specify consistency requirements. The semantics defines a space of solutions, each consisting of a set of merges to perform and a set of facts to delete, which can be further refined by applying optimality criteria. As there may be multiple optimal solutions, we use classical notions of possible and certain query answers to reason over the alternative solutions, and introduce a novel notion of most informative answer to obtain a more compact presentation of query results. We perform a detailed analysis of the data complexity of the central reasoning tasks of recognizing optimal solutions and (most informative) possible and certain answers, for each of the three notions of optimal solution and for both general and restricted specifications.

#6 Augmenting Automated Spectrum Based Fault Localization for Multiple Faults [PDF] [Copy] [Kimi] [REL]

Authors: Prantik Chatterjee ; Jose Campos ; Rui Abreu ; Subhajit Roy

Spectrum-based Fault Localization (SBFL) uses the coverage of test cases and their outcome (pass/fail) to predict the "suspiciousness" of program components, e.g., lines of code. SBFL is, perhaps, the most successful fault localization technique due to its simplicity and scalability. However, SBFL heuristics do not perform well in scenarios where a program may have multiple faulty components. In this work, we propose a new algorithm that "augments" previously proposed SBFL heuristics to produce a ranked list where faulty components ranked low by base SBFL metrics are ranked significantly higher. We implement our ideas in a tool, ARTEMIS, that attempts to "bubble up" faulty components which are ranked lower by base SBFL metrics. We compare our technique to the most popular SBFL metrics and demonstrate statistically significant improvement in the developer effort for fault localization with respect to the basic strategies.
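
The base suspiciousness metrics that such an algorithm augments can be illustrated directly; below is a sketch of two classic SBFL formulas, Tarantula and Ochiai, computed from a coverage spectrum (variable names are ours, and ARTEMIS itself goes beyond these base metrics).

```python
# Illustrative sketch of two classic SBFL suspiciousness metrics
# (Tarantula and Ochiai), computed from a coverage spectrum.
# ef/ep: failing/passing tests covering the component;
# nf/np: failing/passing tests not covering it.
import math

def tarantula(ef, ep, nf, np):
    fail = ef / (ef + nf) if ef + nf else 0.0
    ok = ep / (ep + np) if ep + np else 0.0
    return fail / (fail + ok) if fail + ok else 0.0

def ochiai(ef, ep, nf, np):
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

# A line covered by the only failing test and by 1 of 3 passing tests:
print(tarantula(ef=1, ep=1, nf=0, np=2))  # 0.75
print(ochiai(ef=1, ep=1, nf=0, np=2))     # ~0.707
```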

#7 Automatic Verification for Soundness of Bounded QNP Abstractions for Generalized Planning [PDF] [Copy] [Kimi] [REL]

Authors: Zhenhe Cui ; Weidu Kuang ; Yongmei Liu

Generalized planning (GP) studies the computation of general solutions for a set of planning problems. Computing general solutions with correctness guarantees has long been a key issue in GP. Abstractions are widely used to solve GP problems. For example, a popular abstraction model for GP is qualitative numeric planning (QNP), which extends classical planning with non-negative real variables that can be increased or decreased by some arbitrary amount. The refinements of correct solutions of sound abstractions are solutions with correctness guarantees for GP problems. Recent literature has proposed a uniform abstraction framework for GP and gave model-theoretic definitions of sound and complete abstractions for GP problems. In this paper, based on the previous work, we explore automatic verification of sound abstractions for GP. Firstly, we present a proof-theoretic characterization for sound abstractions. Secondly, based on the characterization, we give a sufficient condition for sound abstractions with deterministic actions. Then we study how to verify the sufficient condition when the abstraction models are bounded QNPs where integer variables can be incremented or decremented by one. To this end, we develop methods to handle counting and transitive closure, which are often used to define numerical variables. Finally, we implement a sound bounded QNP abstraction verification system and report experimental results on several domains.

#8 On Translations between ML Models for XAI Purposes [PDF] [Copy] [Kimi] [REL]

Authors: Alexis de Colnet ; Pierre Marquis

In this paper, the succinctness of various ML models is studied. To be more precise, the existence of polynomial-time and polynomial-space translations between representation languages for classifiers is investigated. The languages that are considered include decision trees, random forests, several types of boosted trees, binary neural networks, Boolean multilayer perceptrons, and various logical representations of binary classifiers. We provide a complete map indicating for every pair of languages C, C' whether or not a polynomial-time / polynomial-space translation exists from C to C'. We also explain how to take advantage of the resulting map for XAI purposes.
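
One of the simplest translations in this landscape is from decision trees to DNF formulas: each path reaching a 1-leaf becomes a term, a polynomial-time and polynomial-space translation. A hedged sketch over binary features (the encoding is our own):

```python
# Hedged example of a polynomial-time translation between classifier
# representations: a decision tree over Boolean features translated
# into a DNF formula (one term per path reaching a 1-leaf).

def tree_to_dnf(node, path=()):
    """node: either a leaf 0/1, or (feature, low_child, high_child)."""
    if node == 1:
        return [list(path)]
    if node == 0:
        return []
    feat, low, high = node
    return (tree_to_dnf(low, path + ((feat, False),)) +
            tree_to_dnf(high, path + ((feat, True),)))

def eval_dnf(dnf, assignment):
    return any(all(assignment[f] == val for f, val in term) for term in dnf)

# Tree: if x0 then 1 else (if x1 then 1 else 0) -- i.e. x0 or x1.
tree = ("x0", ("x1", 0, 1), 1)
dnf = tree_to_dnf(tree)
print(dnf)  # [[('x0', False), ('x1', True)], [('x0', True)]]

# Sanity check: the DNF computes the same function as the tree.
for b0 in (False, True):
    for b1 in (False, True):
        assert eval_dnf(dnf, {"x0": b0, "x1": b1}) == (b0 or b1)
```

The reverse direction (e.g. from a random forest or a neural network back to a small decision tree) is where the succinctness gaps studied in the paper appear.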

#9 Description Logics with Pointwise Circumscription [PDF] [Copy] [Kimi] [REL]

Authors: Federica Di Stefano ; Magdalena Ortiz ; Mantas Šimkus

Circumscription is one of the most powerful ways to extend Description Logics (DLs) with non-monotonic reasoning features, albeit with huge computational costs and undecidability in many cases. In this paper, we introduce pointwise circumscription for DLs, which is not only intuitive in terms of knowledge representation, but also provides a sound approximation of classic circumscription and has reduced computational complexity. Our main idea is to replace the second-order quantification step of classic circumscription with a series of (pointwise) local checks on all domain elements and their immediate neighbourhood. Our main positive results are for ontologies in DLs ALCIO and ALCI: we prove that for TBoxes of modal depth 1 (i.e. without nesting of existential or universal quantifiers) standard reasoning problems under pointwise circumscription are (co)NExpTime-complete and ExpTime-complete, respectively. The restriction of modal depth still yields a large class of ontologies useful in practice, and it is further justified by a strong undecidability result for pointwise circumscription with general TBoxes in ALCIO.

#10 Parametrized Gradual Semantics Dealing with Varied Degrees of Compensation [PDF] [Copy] [Kimi] [REL]

Authors: Dragan Doder ; Leila Amgoud ; Srdjan Vesic

Compensation is a strategy that a semantics may follow when it faces dilemmas between quality and quantity of attackers. It allows several weak attacks to compensate one strong attack. It is based on a compensation degree, which is a tuple that indicates (i) to what extent an attack is weak and (ii) the number of weak attacks needed to compensate a strong one. Existing principles on compensation do not specify the parameters; thus it is unclear whether semantics satisfying them compensate at only one degree or several degrees, and which ones. This paper proposes a parameterised family of gradual semantics, which unifies multiple semantics that share some principles but differ in their strategy regarding solving dilemmas. Indeed, we show that the two semantics taking the extreme values of the parameter favour respectively quantity and quality, while all the remaining ones compensate at some degree. We define three classes of compensation degrees and show that the novel family is able to compensate at all of them while none of the existing gradual semantics does.
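
For intuition about gradual semantics in general, the sketch below implements the classic h-categorizer semantics (Besnard and Hunter), not the paper's new parameterised family: each argument's acceptability degree is the fixed point of 1 / (1 + sum of its attackers' degrees).

```python
# Illustration of a gradual semantics (the classic h-categorizer,
# not the parameterised family introduced in the paper): degrees
# are computed by fixed-point iteration.

def h_categorizer(arguments, attacks, iters=200):
    score = {a: 1.0 for a in arguments}
    for _ in range(iters):
        score = {a: 1.0 / (1.0 + sum(score[b] for b, c in attacks if c == a))
                 for a in arguments}
    return score

# b is attacked by the unattacked argument a; c is attacked by b.
s = h_categorizer(["a", "b", "c"], [("a", "b"), ("b", "c")])
print({k: round(v, 3) for k, v in s.items()})  # a: 1.0, b: 0.5, c: 0.667
```

Under such a semantics, many weak attackers can drag a degree down as much as one strong attacker, which is precisely the compensation phenomenon the paper parameterises.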

#11 Learning Small Decision Trees with Large Domain [PDF] [Copy] [Kimi] [REL]

Authors: Eduard Eiben ; Sebastian Ordyniak ; Giacomo Paesani ; Stefan Szeider

One favors decision trees (DTs) of the smallest size or depth to facilitate explainability and interpretability. However, learning such an optimal DT from data is well-known to be NP-hard. To overcome this complexity barrier, Ordyniak and Szeider (AAAI 21) initiated the study of optimal DT learning under the parameterized complexity perspective. They showed that solution size (i.e., number of nodes or depth of the DT) is insufficient to obtain fixed-parameter tractability (FPT). Therefore, they proposed an FPT algorithm that utilizes two auxiliary parameters: the maximum difference (as a structural property of the data set) and maximum domain size. They left open the question of whether bounding the maximum domain size is necessary. The main result of this paper answers this question. We present FPT algorithms for learning a smallest or lowest-depth DT from data, with solution size and maximum difference as the only parameters. Thus, our algorithm is significantly more potent than the one by Szeider and Ordyniak as it can handle problem inputs with features that range over unbounded domains. We also close several gaps concerning the quality of approximation one obtains by only considering DTs based on minimum support sets.

#12 Explaining Answer-Set Programs with Abstract Constraint Atoms [PDF] [Copy] [Kimi] [REL]

Authors: Thomas Eiter ; Tobias Geibinger

Answer-Set Programming (ASP) is a popular declarative reasoning and problem-solving formalism. Due to the increasing interest in explainability, several explanation approaches have been developed for ASP. However, support for commonly used advanced language features of ASP, such as aggregates or choice rules, is still mostly lacking. We deal with explaining ASP programs containing Abstract Constraint Atoms, which encompass the above features and others. We provide justifications for the presence, or absence, of an atom in a given answer-set. To this end, we introduce several formal notions of justification in this setting based, on the one hand, on a semantic characterisation utilising minimal partial models and, on the other hand, on a more rule-guided approach. We provide complexity results for checking and computing such justifications, and discuss how the semantic and syntactic approaches relate and can be jointly used to offer more insight. Our results contribute to a basis for explaining commonly used language features and thus increase accessibility and usability of ASP as an AI tool.

#13 Treewidth-Aware Complexity for Evaluating Epistemic Logic Programs [PDF] [Copy] [Kimi] [REL]

Authors: Jorge Fandinno ; Markus Hecher

Logic programs are a popular formalism for encoding many problems relevant to knowledge representation and reasoning as well as artificial intelligence. However, for modeling rational behavior it is often necessary to represent the concepts of knowledge and possibility. Epistemic logic programs (ELPs) are such an extension that enables both concepts, which correspond to being true in all or some possible worlds or stable models. For these programs, the parameter treewidth has recently regained popularity. We present complexity results for the evaluation of key ELP fragments for treewidth, which are exponentially better than known results for full ELPs. Unfortunately, we prove that the obtained runtimes cannot be significantly improved, assuming the exponential time hypothesis. Our approach defines treewidth-aware reductions between quantified Boolean formulas and ELPs. We also establish that the completion of a program, as used in modern solvers, can be turned treewidth-aware, thereby linearly preserving treewidth.

#14 Quantitative Reasoning and Structural Complexity for Claim-Centric Argumentation [PDF] [Copy] [Kimi] [REL]

Authors: Johannes K. Fichte ; Markus Hecher ; Yasir Mahmood ; Arne Meier

Argumentation is a well-established formalism for nonmonotonic reasoning and a vibrant area of research in AI. Claim-augmented argumentation frameworks (CAFs) have been introduced to deploy a conclusion-oriented perspective. CAFs expand argumentation frameworks by an additional step which involves retaining claims for an accepted set of arguments. We introduce a novel concept of a justification status for claims, a quantitative measure of extensions supporting a particular claim. The well-studied problems of credulous and skeptical reasoning can then be seen as simply the two endpoints of the spectrum when considered as a justification level of a claim. Furthermore, we explore the parameterized complexity of various reasoning problems for CAFs, including the quantitative reasoning for claim assertions. We begin by presenting a suitable graph representation that includes arguments and their associated claims. Our analysis includes the parameter treewidth, and we present decomposition-guided reductions between reasoning problems in CAF and the validity problem for QBF.

#15 An Ensemble Approach for Automated Theorem Proving Based on Efficient Name Invariant Graph Neural Representations [PDF] [Copy] [Kimi] [REL]

Authors: Achille Fokoue ; Ibrahim Abdelaziz ; Maxwell Crouse ; Shajith Ikbal ; Akihiro Kishimoto ; Guilherme Lima ; Ndivhuwo Makondo ; Radu Marinescu

Using reinforcement learning for automated theorem proving has recently received much attention. Current approaches use representations of logical statements that often rely on the names used in these statements and, as a result, the models are generally not transferable from one domain to another. The size of these representations and whether to include the whole theory or part of it are other important decisions that affect the performance of these approaches as well as their runtime efficiency. In this paper, we present NIAGRA, an ensemble Name InvAriant Graph RepresentAtion. NIAGRA addresses these problems by using (1) improved Graph Neural Networks for learning name-invariant formula representations tailored to their unique characteristics and (2) an efficient ensemble approach for automated theorem proving. Our experimental evaluation shows state-of-the-art performance on multiple datasets from different domains with improvements up to 10% compared to the best learning-based approaches. Furthermore, transfer learning experiments show that our approach significantly outperforms other learning-based approaches by up to 28%.

#16 Reverse Engineering of Temporal Queries Mediated by LTL Ontologies [PDF] [Copy] [Kimi] [REL]

Authors: Marie Fortin ; Boris Konev ; Vladislav Ryzhikov ; Yury Savateev ; Frank Wolter ; Michael Zakharyaschev

In reverse engineering of database queries, we aim to construct a query from a given set of answers and non-answers; it can then be used to explore the data further or as an explanation of the answers and non-answers. We investigate this query-by-example problem for queries formulated in positive fragments of linear temporal logic LTL over timestamped data, focusing on the design of suitable query languages and the combined and data complexity of deciding whether there exists a query in the given language that separates the given answers from non-answers. We consider both plain LTL queries and those mediated by LTL ontologies.
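
The separation question at the core of this query-by-example problem can be sketched for a small positive LTL fragment over finite traces (our simplified encoding; the paper works with timestamped data and ontology-mediated queries):

```python
# Hedged sketch of the query-by-example separation check for a
# positive LTL fragment over finite traces. Supported connectives:
# atoms, "and", "or", "X" (next), "F" (eventually).

def holds(phi, trace, i=0):
    if i >= len(trace):
        return False
    op = phi[0]
    if op == "atom":
        return phi[1] in trace[i]
    if op == "and":
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "or":
        return holds(phi[1], trace, i) or holds(phi[2], trace, i)
    if op == "X":
        return holds(phi[1], trace, i + 1)
    if op == "F":
        return any(holds(phi[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(op)

def separates(phi, positives, negatives):
    """phi is a solution to the query-by-example instance iff it
    holds on all answers and on no non-answers."""
    return (all(holds(phi, t) for t in positives) and
            not any(holds(phi, t) for t in negatives))

# Each trace is a list of sets of atoms, one per timestamp.
pos = [[{"p"}, {"q"}], [{"q"}]]
neg = [[{"p"}, {"p"}]]
print(separates(("F", ("atom", "q")), pos, neg))  # True
```

Deciding whether *any* query in a given language separates the examples is the complexity question the paper studies.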

#17 Disentanglement of Latent Representations via Causal Interventions [PDF] [Copy] [Kimi] [REL]

Authors: Gaël Gendron ; Michael Witbrock ; Gillian Dobbie

The process of generating data such as images is controlled by independent and unknown factors of variation. The retrieval of these variables has been studied extensively in the disentanglement, causal representation learning, and independent component analysis fields. Recently, approaches merging these domains together have shown great success. Instead of directly representing the factors of variation, the problem of disentanglement can be seen as finding the interventions on one image that yield a change to a single factor. Following this assumption, we introduce a new method for disentanglement inspired by causal dynamics that combines causality theory with vector-quantized variational autoencoders. Our model considers the quantized vectors as causal variables and links them in a causal graph. It performs causal interventions on the graph and generates atomic transitions affecting a unique factor of variation in the image. We also introduce a new task of action retrieval that consists of finding the action responsible for the transition between two images. We test our method on standard synthetic and real-world disentanglement datasets. We show that it can effectively disentangle the factors of variation and perform precise interventions on high-level semantic attributes of an image without affecting its quality, even with imbalanced data distributions.

#18 Safety Verification and Universal Invariants for Relational Action Bases [PDF] [Copy] [Kimi] [REL]

Authors: Silvio Ghilardi ; Alessandro Gianola ; Marco Montali ; Andrey Rivkin

Modeling and verification of dynamic systems operating over a relational representation of states are increasingly investigated problems in AI, Business Process Management and Database Theory. To make these systems amenable to verification, the amount of information stored in each state needs to be bounded, or restrictions are imposed on the preconditions and effects of actions. We lift these restrictions by introducing the framework of Relational Action Bases (RABs), which generalizes existing frameworks and in which unbounded relational states are evolved through actions that can (1) quantify both existentially and universally over the data, and (2) use arithmetic constraints. We then study parameterized safety of RABs via (approximated) SMT-based backward search, singling out essential meta-properties of the resulting procedure, and showing how it can be realized by an off-the-shelf combination of existing verification modules of the state-of-the-art MCMT model checker. We demonstrate the effectiveness of this approach on a benchmark of data-aware business processes. Finally, we show how universal invariants can be exploited to make this procedure fully correct.

#19 Tractable Diversity: Scalable Multiperspective Ontology Management via Standpoint EL [PDF] [Copy] [Kimi] [REL]

Authors: Lucía Gómez Álvarez ; Sebastian Rudolph ; Hannes Strass

The tractability of the lightweight description logic EL has allowed for the construction of large and widely used ontologies that support semantic interoperability. However, comprehensive domains with a broad user base are often at odds with strong axiomatisations otherwise useful for inferencing, since these are usually context dependent and subject to diverging perspectives. In this paper we introduce Standpoint EL, a multi-modal extension of EL that allows for the integrated representation of domain knowledge relative to diverse, possibly conflicting standpoints (or contexts), which can be hierarchically organised and put in relation to each other. We establish that Standpoint EL still exhibits EL's favourable PTime standard reasoning, whereas introducing additional features like empty standpoints, rigid roles, and nominals makes standard reasoning tasks intractable.

#20 Ranking-based Argumentation Semantics Applied to Logical Argumentation [PDF] [Copy] [Kimi] [REL]

Authors: Jesse Heyninck ; Badran Raddaoui ; Christian Straßer

In formal argumentation, a distinction can be made between extension-based semantics, where sets of arguments are either (jointly) accepted or not, and ranking-based semantics, where grades of acceptability are assigned to arguments. Another important distinction is that between abstract approaches, that abstract away from the content of arguments, and structured approaches, that specify a method of constructing argument graphs on the basis of a knowledge base. While ranking-based semantics have been extensively applied to abstract argumentation, little work has been done on ranking-based semantics for structured argumentation. In this paper, we make a systematic investigation into the behaviour of ranking-based semantics applied to existing formalisms for structured argumentation. We show that a wide class of ranking-based semantics gives rise to so-called culpability measures, and are relatively robust to specific choices in argument construction methods.

#21 Temporal Datalog with Existential Quantification [PDF] [Copy] [Kimi] [REL]

Authors: Matthias Lanzinger ; Markus Nissl ; Emanuel Sallinger ; Przemysław A. Wałęga

Existential rules, also known as tuple-generating dependencies (TGDs) or Datalog+/- rules, are heavily studied in the communities of Knowledge Representation and Reasoning, Semantic Web, and Databases, due to their rich modelling capabilities. In this paper we consider TGDs in the temporal setting, by introducing and studying DatalogMTLE---an extension of metric temporal Datalog (DatalogMTL) obtained by allowing for existential rules in programs. We show that DatalogMTLE is undecidable even in the restricted cases of guarded and weakly-acyclic programs. To address this issue we introduce a uniform semantics which, on the one hand, is well-suited for modelling temporal knowledge as it prevents unintended value invention and, on the other hand, provides decidability of reasoning; in particular, reasoning becomes 2-EXPSPACE-complete for weakly-acyclic programs but remains undecidable for guarded programs. We provide an implementation for the decidable case and demonstrate its practical feasibility. Thus we obtain an expressive, yet decidable, rule-language and a system which is suitable for complex temporal reasoning with existential rules.

#22 A Rule-Based Modal View of Causal Reasoning [PDF] [Copy] [Kimi] [REL]

Author: Emiliano Lorini

We present a novel rule-based semantics for causal reasoning as well as a number of modal languages interpreted over it. They enable us to represent some fundamental concepts in the theory of causality including causal necessity and possibility, interventionist conditionals and Lewisian conditionals. We provide complexity results for the satisfiability checking and model checking problem for these modal languages. Moreover, we study the relationship between our rule-based semantics and the structural equation modeling (SEM) approach to causal reasoning, as well as between our rule-based semantics for causal conditionals and the standard semantics for belief base change.

#23 Probabilistic Temporal Logic for Reasoning about Bounded Policies [PDF] [Copy] [Kimi] [REL]

Authors: Nima Motamed ; Natasha Alechina ; Mehdi Dastani ; Dragan Doder ; Brian Logan

To build a theory of intention revision for agents operating in stochastic environments, we need a logic in which we can explicitly reason about their decision-making policies and those policies' uncertain outcomes. Towards this end, we propose PLBP, a novel probabilistic temporal logic for Markov Decision Processes that allows us to reason about policies of bounded size. The logic is designed so that its expressive power is sufficient for the intended applications, whilst at the same time possessing strong computational properties. We prove that the satisfiability problem for our logic is decidable, and that its model checking problem is PSPACE-complete. This allows us to e.g. algorithmically verify whether an agent's intentions are coherent, or whether a specific policy satisfies safety and/or liveness properties.
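
The flavour of reasoning about bounded policies can be sketched on a toy MDP (our illustration, not the logic PLBP itself): enumerate deterministic memoryless policies and compute each one's probability of reaching a goal within a bounded horizon, the kind of property a model checker would verify.

```python
# Illustrative sketch: enumerate deterministic memoryless policies
# of a tiny MDP and compute, for each, the probability of reaching
# a goal state within k steps.
from itertools import product

# trans[(state, action)] = list of (probability, next_state)
trans = {
    ("s0", "safe"):  [(1.0, "s0")],
    ("s0", "risky"): [(0.8, "goal"), (0.2, "s0")],
}

def reach_prob(policy, state, k, goal="goal"):
    if state == goal:
        return 1.0
    if k == 0:
        return 0.0
    return sum(p * reach_prob(policy, nxt, k - 1, goal)
               for p, nxt in trans[(state, policy[state])])

states, actions = ["s0"], ["safe", "risky"]
for choice in product(actions, repeat=len(states)):
    policy = dict(zip(states, choice))
    print(policy, round(reach_prob(policy, "s0", k=2), 3))
```

Here only the "risky" policy reaches the goal with probability 0.8 + 0.2 * 0.8 = 0.96 within two steps; checking such threshold properties for all policies of bounded size mirrors the model checking problem for the logic.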

#24 Shhh! The Logic of Clandestine Operations [PDF] [Copy] [Kimi] [REL]

Authors: Pavel Naumov ; Oliver Orejola

An operation is called covert if it conceals the identity of the actor; it is called clandestine if the very fact that the operation is conducted is concealed. The paper proposes a formal semantics of clandestine operations and introduces a sound and complete logical system that describes the interplay between the distributed knowledge modality and a modality capturing coalition power to conduct clandestine operations.

#25 The Parameterized Complexity of Finding Concise Local Explanations [PDF] [Copy] [Kimi] [REL]

Authors: Sebastian Ordyniak ; Giacomo Paesani ; Stefan Szeider

We consider the computational problem of finding a smallest local explanation (anchor) for classifying a given feature vector (example) by a black-box model. After showing that the problem is NP-hard in general, we study various natural restrictions of the problem in terms of problem parameters to see whether these restrictions make the problem fixed-parameter tractable or not. We draw a detailed and systematic complexity landscape for combinations of parameters, including the size of the anchor, the size of the anchor's coverage, and parameters that capture structural aspects of the problem instance, including rank-width, twin-width, and maximum difference.