IJCAI.2024 - Knowledge Representation and Reasoning

Total: 48

#1 General Epistemic Abstract Argumentation Framework: Semantics and Complexity

Authors: Gianvincenzo Alfano ; Sergio Greco ; Francesco Parisi ; Irina Trubitsyna

Epistemic Abstract Argumentation Framework (EAAF) extends Dung's framework (AAF)---a central formalism in AI for modeling disputes among agents---by allowing the representation of epistemic knowledge. In particular, EAAF augments AAF with weak and strong epistemic attacks whose intuitive meaning is that an argument a defeats an argument b by means of a weak (resp. strong) epistemic attack if a is true in every (resp. at least one) extension. So far, the semantics of EAAF has been defined only for a restricted class of frameworks, namely acyclic EAAF, where epistemic attacks do not occur in any cycle. In this paper, we provide an intuitive semantics for (general) EAAF that naturally extends that for AAF as well as that for acyclic EAAF. After providing some fundamental properties and giving an algorithm that enables the computation of EAAF semantics, by relying on state-of-the-art AAF-solvers, we investigate the complexity of canonical argumentation problems.
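
As a concrete reference point for the Dung-style semantics that EAAF builds on, here is a minimal brute-force sketch of stable-extension enumeration for a plain AAF. It is our illustration only: the paper's epistemic attacks are not modeled, and all function and variable names are ours.

```python
# Brute-force stable-extension enumeration for a plain Dung AAF
# (illustration only; EAAF's weak/strong epistemic attacks are not modeled).
from itertools import combinations

def stable_extensions(args, attacks):
    """A set S is stable iff it is conflict-free and attacks every
    argument outside S."""
    exts = []
    for r in range(len(args) + 1):
        for S in map(set, combinations(args, r)):
            conflict_free = not any((a, b) in attacks for a in S for b in S)
            attacks_rest = all(any((a, b) in attacks for a in S)
                               for b in args - S)
            if conflict_free and attacks_rest:
                exts.append(S)
    return exts

# Mutual attack a <-> b yields two stable extensions, {a} and {b}; under
# EAAF, neither argument is true in *every* extension, so neither could
# defeat anything via a weak epistemic attack in the abstract's sense.
print(stable_extensions({"a", "b"}, {("a", "b"), ("b", "a")}))
```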

#2 AMO-aware Aggregates in Answer Set Programming

Authors: Mario Alviano ; Carmine Dodaro ; Salvatore Fiorentino ; Marco Maratea

Aggregates such as sum and count are among the most frequently used linguistic extensions of Answer Set Programming (ASP). At-most-one (AMO) constraints are a specific form of aggregates that excludes the simultaneous truth of multiple elements in a set. This article unleashes a powerful propagation strategy for the case where groups of elements in an aggregate are also involved in AMO constraints. Indeed, the combined knowledge given by aggregates and AMO constraints significantly increases the effectiveness of search-space pruning, resulting in noticeable performance gains.

#3 Higher-Order Argumentation Frameworks: Principles and Gradual Semantics

Authors: Leila Amgoud ; Dragan Doder ; Marie-Christine Lagasquie-Schiex

The paper investigates how to evaluate elements in complex argumentation frameworks, where both arguments and attacks are weighted and might be attacked by arguments. We propose the first gradual semantics that assign a numerical value to every argument and attack. The value represents the acceptance (seriousness) degree of an argument (attack). We start by highlighting various technical challenges facing semantics in such complex settings, including how to deal with attacks vs arguments, and how to combine their values. We present principles that describe different strategies offered to semantics to meet such challenges. Then, we introduce various semantics per strategy. For instance, some semantics evaluate attacks and arguments in the same way while others, called hybrid, treat them differently. Finally, the principles are used to compare the plethora of novel semantics. The final result is a catalogue of semantics with different formal guarantees and behaviours.

#4 Effective Approach to LTLf Best-Effort Synthesis in Multi-Tier Environments

Authors: Benjamin Aminof ; Giuseppe De Giacomo ; Gianmarco Parretti ; Sasha Rubin

We consider an agent acting in a complex environment modeled through a multi-tiered specification, in which each tier adds nondeterminism to the environment's response to the agent's actions. In this setting, we devise an effective approach to best-effort synthesis, i.e., synthesizing agent strategies that win against a maximal set of possible environment responses in each tier. We do this in a setting where both the multi-tier environment and the agent goal are specified in linear temporal logic on finite traces (LTLf). While theoretical solution techniques based on automata on infinite trees have been developed previously, we completely side-step them here and focus on a DFA-based game-theoretic technique, which can be effectively implemented symbolically. Specifically, we present a provably correct algorithm that solves DFA-based games separately for each tier and then combines the obtained solutions on-the-fly. This algorithm is linear, rather than exponential, in the number of tiers, and thus it can gracefully handle multi-tier environments formed of many tiers.

#5 Data Complexity in Expressive Description Logics with Path Expressions

Author: Bartosz Bednarczyk

We investigate the data complexity of the satisfiability problem for the very expressive description logic ZOIQ (a.k.a. ALCHbSelfregOIQ) over quasi-forests and establish its NP-completeness. This completes the data complexity landscape for decidable fragments of ZOIQ, and reproves known results on decidable fragments of OWL2 (SR family). Using the same technique, we establish coNEXPTIME-completeness (w.r.t. the combined complexity) of the entailment problem of rooted queries in ZIQ.

#6 Bypassing the ASP Bottleneck: Hybrid Grounding by Splitting and Rewriting

Authors: Alexander Beiser ; Markus Hecher ; Kaan Unalan ; Stefan Woltran

Answer Set Programming (ASP) is a key paradigm for problems in artificial intelligence and industrial contexts. In ASP, problems are modeled via a set of rules. Over time, the paradigm has grown into a rich language, enabling complex rule types like aggregate expressions. Most practical ASP systems follow a ground-and-solve pattern, where rule schemes are grounded and the resulting ground rules are solved. There, the so-called grounding bottleneck may prevent solving altogether, due to sheer grounding sizes. Recently, body-decoupled grounding (BDG) demonstrated how to reduce grounding sizes by delegating effort to solving. However, BDG provides limited interoperability with traditional grounders and only covers simple rule types. In this work, we establish hybrid grounding, based on a novel splitting theorem that allows us to freely combine BDG with traditional grounders. To mitigate huge groundings in practice, we define rewriting procedures for efficiently deferring grounding effort of aggregates to solving. Our experimental results indicate that this approach is competitive, especially for instances where traditional grounding fails.
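
To make the grounding bottleneck concrete, the following sketch (ours, not the paper's) shows why naive grounding explodes: a rule scheme with k variables is instantiated once per combination of domain constants, i.e., |domain|^k times.

```python
# Why grounding blows up: naive grounding instantiates a rule scheme
# once per combination of domain constants, so a rule with k variables
# yields |domain|^k ground instances. (Illustration only; BDG and
# hybrid grounding themselves are not shown here.)
from itertools import product

def ground(rule_vars, domain):
    """Yield one variable substitution per combination of constants."""
    for combo in product(domain, repeat=len(rule_vars)):
        yield dict(zip(rule_vars, combo))

# A 3-variable rule over 100 constants already has a million instances.
print(sum(1 for _ in ground(["X", "Y", "Z"], range(100))))  # 1000000
```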

#7 Primal Grammars Driven Automated Induction

Authors: Adel Bouhoula ; Miki Hermann

Automated induction is a powerful method for the validation of critical systems. However, the inductive proof process faces major challenges: it is undecidable and diverges even with small examples. Previous methods have proposed ad-hoc heuristics to speculate on additional lemmas that hopefully stop the divergence. Although these methods have succeeded in proving interesting theorems, they have significant limitations: in particular, they often fail to find appropriate lemmas, and the lemmas they provide may not be valid. We present a new method that allows us to perform inductive proofs in conditional theories. This method automatically detects divergence in proof traces and derives primal grammars as well as new lemmas that schematize the divergent sequence. This new construction allows us to break the divergence and complete the proof. Our method is presented as a set of inference rules whose soundness and refutational completeness have been formally proved. Unlike previous methods, our method is fully automated and has no risk of over-generalization. Moreover, our technique for capturing and schematizing divergence represents the most general decidable schematization, with respect to description power, among all known schematizations. Our method has been implemented in C++ and successfully proved over fifty complex examples that fail with well-known theorem provers (e.g., ACL2, Isabelle, PVS, SPIKE) and related methods for handling divergence in proofs by induction. Our method represents a significant contribution to the field of automated reasoning as it can be integrated with existing automated and interactive inductive proof systems to enhance their performance. Moreover, it has the potential to substantially reduce the time needed for the verification of critical systems.

#8 Learning What to Monitor: Using Machine Learning to Improve past STL Monitoring

Authors: Andrea Brunello ; Luca Geatti ; Angelo Montanari ; Nicola Saccomanno

Monitoring is a runtime verification technique that can be used to check whether an execution of a system (a trace) satisfies or violates a given set of properties. Compared to other formal verification techniques, e.g., model checking, one still needs to specify the properties to be monitored, but a complete model of the system is no longer necessary. First, we introduce the pure-past fragment of Signal Temporal Logic (ppSTL), and we use it to define the monitorable safety (G(ppSTL)) and cosafety (F(ppSTL)) fragments of STL, which properly extend the commonly used bounded-future fragment. Then, we devise a multi-objective genetic programming algorithm to automatically extend the set of properties to monitor on the basis of the history of failure traces collected over time. The framework resulting from the integration of the monitor and the learning algorithm is then experimentally validated on various public datasets. The experimental outcomes confirm the effectiveness of the proposed solution.
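
The appeal of pure-past operators is that a monitor needs only the samples seen so far. The sketch below is our own toy setup, not the paper's ppSTL machinery; the operator names, window semantics, and numeric trace are all assumptions made for illustration.

```python
# Online evaluation of two bounded pure-past operators over a sampled
# signal (toy illustration, not the paper's ppSTL implementation):
#   hist: the predicate held at every sample in the last `window` steps
#   once: the predicate held at some sample in the last `window` steps
from collections import deque

def monitor(trace, pred, window, mode="hist"):
    recent = deque(maxlen=window)       # only past samples are stored
    for sample in trace:
        recent.append(pred(sample))
        yield all(recent) if mode == "hist" else any(recent)

# Flag time points where the signal stayed positive over the last 3 samples.
trace = [0.5, 1.2, 0.9, -0.1, 0.3, 0.8, 1.1]
print(list(monitor(trace, lambda x: x > 0, window=3)))
# [True, True, True, False, False, False, True]
```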

#9 Justifying Argument Acceptance with Collective Attacks: Discussions and Disputes

Authors: Giovanni Buraglio ; Wolfgang Dvorak ; Matthias König ; Markus Ulbricht

In formal argumentation one aims for intuitive and concise justifications for the acceptance of arguments. Discussion games and dispute trees are established methods to obtain such a justification. However, so far these techniques are based on instantiating the knowledge base into graph-based Dung-style abstract argumentation frameworks (AFs). These instantiations are known to produce frameworks with a large number of arguments and thus also yield long discussion games and large dispute trees. To obtain more concise justifications for argument acceptance, we propose to instantiate the knowledge base as an argumentation framework with collective attacks (SETAF). Remarkably, this approach yields smaller frameworks compared to traditional AF instantiation, while exhibiting increased expressive power. We then introduce discussion games and dispute trees tailored to SETAFs, show that they correspond to credulous acceptance w.r.t. the well-known preferred semantics, analyze and tune them w.r.t. size, and compare the two notions. Finally, we illustrate how our findings apply to assumption-based argumentation.

#10 Capturing Knowledge Graphs and Rules with Octagon Embeddings

Authors: Victor Charpenay ; Steven Schockaert

Region-based knowledge graph embeddings represent relations as geometric regions. This has the advantage that the rules captured by the model are made explicit, making it straightforward to incorporate prior knowledge and to inspect learned models. Unfortunately, existing approaches are severely restricted in their ability to model relational composition, and hence also in their ability to model rules, thus failing to deliver on the main promise of region-based models. With the aim of addressing these limitations, we investigate regions composed of axis-aligned octagons. Such octagons are particularly easy to work with, as intersections and compositions can be straightforwardly computed, while they are still sufficiently expressive to model arbitrary knowledge graphs. Among other results, we show that our octagon embeddings can properly capture a non-trivial class of rule bases. Finally, we show that our model achieves competitive experimental results.
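
To give an intuition for why axis-aligned octagons are easy to work with, here is a simplified two-dimensional sketch (our own; the paper's embedding spaces and composition operation are not reproduced): an octagon is cut out by interval bounds on x, y, x+y, and x-y, so intersecting two octagons is just per-constraint interval intersection.

```python
# Simplified 2-D axis-aligned octagons (illustration only): regions cut
# out by interval bounds on x, y, x+y and x-y. Intersection reduces to
# per-constraint interval intersection.
import numpy as np

class Octagon:
    def __init__(self, x, y, s, d):             # bounds on x, y, x+y, x-y
        self.b = np.array([x, y, s, d], float)  # shape (4, 2): [lo, hi]

    def contains(self, p):
        x, y = p
        vals = np.array([x, y, x + y, x - y])
        return bool(np.all(self.b[:, 0] <= vals) and np.all(vals <= self.b[:, 1]))

    def intersect(self, other):
        lo = np.maximum(self.b[:, 0], other.b[:, 0])
        hi = np.minimum(self.b[:, 1], other.b[:, 1])
        return Octagon(*zip(lo, hi))

# Intersect two octagons and test a point against all eight bounds.
a = Octagon(x=(0, 4), y=(0, 4), s=(1, 7), d=(-3, 3))
b = Octagon(x=(2, 6), y=(1, 5), s=(3, 9), d=(-2, 2))
print(a.intersect(b).contains((3, 2)))   # True
```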

#11 Temporal Knowledge Graph Extrapolation via Causal Subhistory Identification

Authors: Kai Chen ; Ye Wang ; Xin Song ; Siwei Chen ; Han Yu ; Aiping Li

Temporal knowledge graph extrapolation has become a prominent area of research interest in recent years. Numerous extrapolation methods have been put forth, mining query-relevant information from history to generate forecasts. However, existing approaches normally do not discriminate between causal and non-causal effects in reasoning; instead, they focus on the statistical correlation between the future events to be predicted and the given historical data, which may be deceptive and hinder the model's capacity to learn the real causal information that actually affects reasoning conclusions. To tackle this, we propose a novel approach called Causal Subhistory Identification (CSI), which focuses on extracting the causal subhistory for reasoning purposes from a large amount of historical data. CSI can improve the clarity and transparency of the reasoning process and more effectively convey the logic behind conclusions by giving priority to the causal subhistory and eliminating non-causal correlations. Extensive experiments demonstrate the remarkable potential of our CSI in the following respects: superiority, improvement, explainability, and robustness.

#12 Enhancing Controlled Query Evaluation through Epistemic Policies

Authors: Gianluca Cima ; Domenico Lembo ; Lorenzo Marconi ; Riccardo Rosati ; Domenico Fabio Savo

In this paper, we propose the use of epistemic dependencies to express data protection policies in Controlled Query Evaluation (CQE), which is a form of confidentiality-preserving query answering over ontologies and databases. The resulting policy language goes significantly beyond those proposed in the literature on CQE so far, allowing for very rich and practically interesting forms of data protection rules. We show the expressive abilities of our framework and study the data complexity of CQE for (unions of) conjunctive queries when ontologies are specified in the Description Logic DL-LiteR. Interestingly, while we show that the problem is in general intractable, we prove tractability for the case of acyclic epistemic dependencies by providing a suitable query rewriting algorithm. The latter result paves the way towards the implementation and practical application of this new approach to CQE.

#13 Compilation and Fast Model Counting beyond CNF

Authors: Alexis de Colnet ; Stefan Szeider ; Tianwei Zhang

Circuits in deterministic decomposable negation normal form (d-DNNF) are representations of Boolean functions that enable linear-time model counting. This paper strengthens our theoretical knowledge of what classes of functions can be efficiently transformed, or compiled, into d-DNNF. Our main contribution is the fixed-parameter tractable (FPT) compilation of conjunctions of specific constraints parameterized by incidence treewidth. This subsumes the known result for CNF. The constraints in question are all functions representable by constant-width ordered binary decision diagrams (OBDDs) for all variable orderings. For instance, this includes parity constraints and cardinality constraints with constant threshold. The running time of the FPT compilation is singly exponential in the incidence treewidth but hides large constants in the exponent. To balance that, we give a more efficient FPT algorithm for model counting that applies to a sub-family of the constraints and does not require compilation.
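
For intuition on the linear-time model counting that d-DNNF enables, here is a minimal bottom-up counter (our sketch; the tuple-tree circuit format is hypothetical, and the circuit is assumed smooth in addition to deterministic and decomposable).

```python
# One bottom-up pass counts models of a d-DNNF circuit (illustration
# only; assumes a smooth, deterministic, decomposable circuit given as
# a hypothetical tuple-tree format).

def count(node, nvars):
    def go(n):
        kind = n[0]
        if kind == "lit":                 # ("lit", var, polarity)
            return 1, {n[1]}
        counts, scopes = zip(*map(go, n[1:]))
        if kind == "and":                 # decomposable: disjoint scopes
            c, s = 1, set()
            for ci, si in zip(counts, scopes):
                assert not (s & si), "AND children must not share variables"
                c, s = c * ci, s | si
            return c, s
        if kind == "or":                  # deterministic + smooth: same scope
            assert all(si == scopes[0] for si in scopes)
            return sum(counts), scopes[0]
        raise ValueError(kind)
    c, scope = go(node)
    return c * 2 ** (nvars - len(scope))  # unmentioned variables are free

# (x1 AND x2) OR (NOT x1 AND x2)  ==  x2, over 2 variables.
circuit = ("or",
           ("and", ("lit", 1, True),  ("lit", 2, True)),
           ("and", ("lit", 1, False), ("lit", 2, True)))
print(count(circuit, nvars=2))   # 2 models: x2 must hold
```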

#14 Model Checking Causality

Authors: Tiago de Lima ; Emiliano Lorini

We present a novel modal language for causal reasoning and interpret it by means of a semantics in which causal information is represented using causal bases in propositional form. The language includes modal operators of conditional causal necessity where the condition is a causal change operation. We provide a succinct formulation of model checking for our language and a model checking procedure based on a polysize reduction to QBF. We illustrate the expressiveness of our language through some examples and show that it allows us to represent and to formally verify a variety of concepts studied in the field of explainable AI including abductive explanation, intervention and actual cause.

#15 Epistemic Logic Programs: Non-Ground and Counting Complexity

Authors: Thomas Eiter ; Johannes K. Fichte ; Markus Hecher ; Stefan Woltran

Answer Set Programming (ASP) is a prominent problem-modeling and solving framework, whose solutions are called answer sets. Epistemic logic programs (ELP) extend ASP to reason about all or some answer sets. Solutions to an ELP can be seen as consequences over multiple collections of answer sets, known as world views. While the complexity of propositional programs is well studied, the non-ground case remains open. This paper establishes the complexity of non-ground ELPs. We provide a comprehensive picture for well-known program fragments, which turn out to be complete for the class NEXPTIME with oracle access up to Σ^p_2. In the quantitative setting, we establish counting complexity results beyond #EXP. To mitigate the high complexity, we establish results for the case of bounded predicate arity, reaching up to the fourth level of the polynomial hierarchy. Finally, we provide ETH-tight runtime results for the parameter treewidth, which has applications in quantitative reasoning, where we reason on (marginal) probabilities of epistemic literals.

#16 Computational Aspects of Progression for Temporal Equilibrium Logic

Authors: Thomas Eiter ; Davide Soldà

Temporal logic plays a crucial role in specifying and reasoning about dynamic systems, where temporal constraints and properties to be monitored are essential. Traditional approaches like LTL monitoring assume monotonicity, which limits their applicability to scenarios involving non-monotonic temporal properties. We delve into complexity aspects of monitoring temporal specifications using non-monotonic Temporal Equilibrium Logic (TEL), a temporal extension of Answer Set Programming defined over Temporal Here-and-There Logic (THT) with a minimality criterion enforcing stable models. Notably, we study the complexity gap between monitoring properties under THT and TEL semantics, and the complexity of monitoring approximations based on progression, which is widely used in verification and in AI. In doing so, we pay particular attention to the fragment of temporal logic programs.
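
For readers unfamiliar with progression: it rewrites a temporal formula against the current state and hands the remaining obligation to the next step. The sketch below shows classical future-LTL progression for a tiny fragment; it is our own simplification, and TEL's non-monotonic semantics are not modeled.

```python
# Classical formula progression for a tiny future-LTL fragment
# (illustration of the technique only; TEL semantics are not modeled).
# Formulas are nested tuples such as ("eventually", ("atom", "p")).

def progress(f, state):
    op = f[0]
    if op == "atom":
        return ("true",) if f[1] in state else ("false",)
    if op in ("true", "false"):
        return f
    if op in ("and", "or"):
        l, r = progress(f[1], state), progress(f[2], state)
        absorb = ("false",) if op == "and" else ("true",)
        neutral = ("true",) if op == "and" else ("false",)
        if absorb in (l, r):
            return absorb
        return r if l == neutral else l if r == neutral else (op, l, r)
    if op == "next":                    # X phi: obligation moves one step
        return f[1]
    if op == "eventually":              # F phi  =  phi  or  X(F phi)
        p = progress(f[1], state)
        return ("true",) if p == ("true",) else (f if p == ("false",) else ("or", p, f))
    raise ValueError(op)

f = ("eventually", ("atom", "p"))
f = progress(f, set())
print(f)                                # obligation unchanged: F p still pending
f = progress(f, {"p"})
print(f)                                # ("true",): the property is satisfied
```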

#17 On the Logic of Theory Change: Iteration of KM-Update, Revised

Authors: Liangda Fang ; Tong Zhu ; Quanlong Guan ; Junming Qiu ; Zhao-Rong Lai ; Weiqi Luo ; Hai Wan

Belief revision and update, two significant types of belief change, both focus on how an agent modifies her beliefs in the presence of new information. The most striking difference between them is that the former studies the change of beliefs in a static world while the latter concentrates on a dynamically-changing world. The famous AGM and KM postulates were proposed to capture rational belief revision and update, respectively. However, both are too permissive to exclude some unreasonable changes under iteration. In response to this weakness, the DP postulates and their extensions were presented for iterated belief revision. Furthermore, Ferme and Goncalves integrated these postulates into belief update. Unfortunately, their definitions of belief states and of the faithful assignments used for the semantic characterizations include some redundant components. Moreover, their approach does not meet the desired property of iterated belief update, and they do not discuss the rationale of any DP postulate within the update context. This paper is intended to fix these deficiencies of Ferme and Goncalves's approach. Firstly, we present a modification of the original KM postulates based on belief states, and propose the notion of faithful collective assignments of belief states to partial preorders. Subsequently, we migrate several well-known postulates for iterated belief revision to iterated belief update. Moreover, we provide exact semantic characterizations based on partial preorders for each of the proposed postulates. Finally, we analyze the compatibility between the above iterated postulates and the KM postulates for belief update.

#18 Quantitative Reasoning over Incomplete Abstract Argumentation Frameworks

Authors: Bettina Fazzinga ; Sergio Flesca ; Filippo Furfaro ; Giuseppina Monterosso

We introduce PERCVER and PERCACC, the problems asking for the percentage of completions of an incomplete Abstract Argumentation Framework (iAAF) in which a set of arguments S is an extension and in which an argument a is accepted, respectively. These problems give insights into the status of S and a that are more precise than the “traditional” verification and acceptance tests under the possible and necessary perspectives, which decide whether S is an extension and a is accepted in at least one or in every completion, respectively. As a first contribution, we investigate the relationship between the proposed framework and probabilistic AAFs (prAAFs) under the constellations approach (which, at first sight, seems suitable for straightforwardly encoding the quantitative reasoning underlying PERCVER and PERCACC). In this regard, we show that translating an iAAF into an equivalent prAAF incurs a heavy computational cost: this backs the study of PERCVER and PERCACC as new distinguished problems. Then, we investigate the complexity of PERCVER and PERCACC, and we identify interesting islands of tractability.

#19 Improved Encodings of Acyclicity for Translating Answer Set Programming into Integer Programming

Authors: Masood Feyzbakhsh Rankooh ; Tomi Janhunen

In this work, we introduce novel translations of Answer Set Programming (ASP) into Integer Programming (IP). While building upon a previously introduced IP translation, we revisit the translation of the acyclicity constraints essential for capturing answer sets precisely. By leveraging vertex elimination graphs, we demonstrate that a new translation of acyclicity can yield integer programs with a more restrictive linear relaxation than previous methods, enabling IP solvers to prune the search space more efficiently. Furthermore, we show how acyclicity can be expressed more concisely in IP given any feedback vertex set of the underlying dependency graph. Experimental results underscore the improved efficiency of our methods over the previously implemented translation. The new vertex-elimination-based translation with Gurobi as the back-end solver turns out to be competitive with Clingo, a state-of-the-art native ASP solver, on a number of non-tight Answer Set Optimization (ASO) benchmarks.
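
For context, one standard way to forbid cycles among selected arcs in IP uses integer level variables with big-M constraints, as in the sketch below. This is our generic illustration only; the paper's vertex-elimination and feedback-vertex-set encodings are different and not reproduced here.

```python
# A classical level-variable (big-M) acyclicity encoding: with binary
# arc variables x[u,v] and integer levels l[v] in [0, n-1], any feasible
# selection of arcs must be cycle-free. (Generic illustration only, not
# the paper's vertex-elimination encoding.)

def acyclicity_constraints(arcs, n):
    """Emit: l[u] + 1 <= l[v] + n*(1 - x[u,v]) for every arc (u, v).
    If x[u,v] = 1, levels must strictly increase along the arc;
    if x[u,v] = 0, the constraint is trivially satisfied."""
    for (u, v) in arcs:
        yield f"l[{u}] + 1 <= l[{v}] + {n}*(1 - x[{u},{v}])"

for c in acyclicity_constraints([(1, 2), (2, 3), (3, 1)], n=3):
    print(c)
# Selecting all three arcs would force l[1] < l[2] < l[3] < l[1],
# which is impossible, so any feasible solution drops at least one arc.
```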

#20 Revisiting Causal Discovery from a Complexity-Theoretic Perspective

Authors: Robert Ganian ; Viktoriia Korchemna ; Stefan Szeider

Causal discovery seeks to unveil causal relationships (represented as a so-called causal graph) from observational data. This paper investigates the complex relationship between the graph structure and the efficiency of constraint-based causal discovery algorithms. Our main contributions include (i) a near-tight characterization of which causal graphs admit a small d-separating set for each pair of vertices and thus can potentially be efficiently recovered by a constraint-based causal discovery algorithm, (ii) the explicit construction of a sequence of causal graphs on which the influential PC algorithm might need exponential time, although there is a small d-separating set between every pair of variables, and (iii) the formulation of a new causal discovery algorithm which achieves fixed-parameter running time by considering the maximum number of edge-disjoint paths between variables in the (undirected) super-structure as the parameter. A distinguishing feature of our investigation is that it is carried out within a more fine-grained model which more faithfully captures the infeasibility of performing accurate independence tests for large sets of conditioning variables.

#21 Finite Groundings for ASP with Functions: A Journey through Consistency

Authors: Lukas Gerlach ; David Carral ; Markus Hecher

Answer set programming (ASP) is a logic programming formalism used in various areas of artificial intelligence, such as combinatorial problem solving and knowledge representation and reasoning. It is known that enhancing ASP with function symbols makes basic reasoning problems highly undecidable. However, even in simple cases, state-of-the-art reasoners, specifically those relying on a ground-and-solve approach, fail to produce a result. Therefore, we reconsider consistency as a basic reasoning problem for ASP. We show reductions that give an intuition for the high level of undecidability. These insights allow for a more fine-grained analysis in which we characterize ASP programs as "frugal" and "non-proliferous". For such programs, we are not only able to semi-decide consistency, but, using the concept of "forbidden" facts, we also propose a grounding procedure that yields finite groundings for more ASP programs.

#22 Learning Conditional Preference Networks: An Approach Based on the Minimum Description Length Principle

Authors: Pierre-François Gimenez ; Jérôme Mengin

CP-nets are a very expressive graphical model for the representation of preferences over combinatorial spaces. They are particularly well suited for settings where an important task is to compute the optimal completion of some partially specified alternative; this is for instance the case in interactive configurators, where preferences can be used, at every step of the interaction, to guide the decision maker towards a satisfactory configuration. Learning CP-nets turns out to be challenging when the input data takes the form of pairwise comparisons between alternatives. Furthermore, this type of preference data is not commonly stored: it can be elicited, but this puts an additional burden on the decision maker. In this article, we propose a new method for learning CP-nets from sales histories, a kind of data readily available in many e-commerce applications. The approach is based on the minimum description length (MDL) principle. We show some theoretical properties of this learning task, namely its sample complexity and its NP-completeness, and we evaluate the learning algorithm in a recommendation setting with a real sales history from a car maker.
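
The MDL principle behind the approach can be illustrated with a generic two-part score: total codelength = bits to describe the model + bits to encode the data given the model. The numbers below are made-up stand-ins; the paper's actual encodings of CP-nets and sales histories are not reproduced.

```python
# Generic two-part MDL score: model bits plus data bits under the model.
# (Illustration only; model_bits and the likelihoods are hypothetical.)
import math

def mdl_score(model_bits, likelihoods):
    data_bits = -sum(math.log2(p) for p in likelihoods)  # Shannon code length
    return model_bits + data_bits

# A richer model fits the 40 observations better (higher likelihood per
# item) but costs more bits to describe; MDL trades the two off.
simple = mdl_score(model_bits=10, likelihoods=[0.25] * 40)  # 10 + 80 = 90
rich = mdl_score(model_bits=60, likelihoods=[0.50] * 40)    # 60 + 40 = 100
print(min([("simple", simple), ("rich", rich)], key=lambda t: t[1]))  # simple wins
```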

#23 Quantitative Claim-Centric Reasoning in Logic-Based Argumentation

Authors: Markus Hecher ; Yasir Mahmood ; Arne Meier ; Johannes Schmidt

Argumentation is a well-established formalism for nonmonotonic reasoning, with popular frameworks being Dung’s abstract argumentation (AFs) or logic-based argumentation (Besnard-Hunter’s framework). Structurally, a set of formulas forms support for a claim if it is consistent, subset-minimal, and implies the claim. Then, an argument comprises support and a claim. We observe that the computational task (ARG) of asking for support of a claim in a knowledge base is “brave”, since many claims with a single support are accepted. As a result, ARG falls short when it comes to the question of confidence in a claim, or claim strength. In this paper, we propose a concept for measuring the (acceptance) strength of claims, based on counting supports for a claim. Further, we settle classical and structural complexity of counting arguments favoring a given claim in propositional knowledge bases (KBs). We introduce quantitative reasoning to measure the strength of claims in a KB and to determine the relevance strength of a formula for a claim.
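
To make the counting task concrete, the following brute-force sketch (ours; formulas are encoded as Python predicates purely for illustration) counts the supports of a claim in a toy propositional KB: consistent, subset-minimal subsets of formulas that entail the claim.

```python
# Brute-force support counting in a toy propositional KB (illustration
# only): a support is a consistent, subset-minimal set of KB formulas
# entailing the claim. Formulas are Python predicates over assignments.
from itertools import combinations, product

def models(formulas, varnames):
    """All assignments (as dicts) satisfying every formula."""
    out = []
    for bits in product([False, True], repeat=len(varnames)):
        m = dict(zip(varnames, bits))
        if all(f(m) for f in formulas):
            out.append(m)
    return out

def supports(kb, claim, varnames):
    found = []
    for r in range(1, len(kb) + 1):                 # smallest subsets first
        for S in combinations(range(len(kb)), r):
            if any(set(T) < set(S) for T in found):
                continue                            # not subset-minimal
            ms = models([kb[i] for i in S], varnames)
            if ms and all(claim(m) for m in ms):    # consistent and entailing
                found.append(S)
    return found

# KB = {p, p -> q, r}; the claim q has exactly one support: {p, p -> q},
# so its strength, measured by counting supports, is 1.
kb = [lambda m: m["p"],
      lambda m: (not m["p"]) or m["q"],
      lambda m: m["r"]]
print(len(supports(kb, lambda m: m["q"], ["p", "q", "r"])))  # 1
```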

#24 Semantics for Non-Flat Assumption-Based Argumentation, Revisited

Authors: Jesse Heyninck ; Ofer Arieli

Assumption-based argumentation (ABA) is an argumentative formalism that allows for reasoning on the basis of defeasible assumptions and strict rules. Standard semantics for this formalism sometimes give rise to problematic behaviour in the presence of rules with assumptions in their heads. In this paper, we introduce a six-valued labelling semantics that overcomes these shortcomings while preserving all the usual properties of the standard Dung-style three-valued semantics for ABA frameworks, including existence of the complete semantics, uniqueness of the grounded semantics and preservation of the computational complexity of all main reasoning processes.

#25 Learning Logic Programs by Discovering Higher-Order Abstractions

Authors: Céline Hocquette ; Sebastijan Dumancic ; Andrew Cropper

We introduce the higher-order refactoring problem, where the goal is to compress a logic program by discovering higher-order abstractions, such as map, filter, and fold. We implement our approach in Stevie, which formulates the refactoring problem as a constraint optimisation problem. Our experiments on multiple domains, including program synthesis and visual reasoning, show that refactoring can improve the learning performance of an inductive logic programming system, specifically improving predictive accuracies by 27% and reducing learning times by 47%. We also show that Stevie can discover abstractions that transfer to multiple domains.
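
The flavor of higher-order refactoring carries over to any language with higher-order functions. As a loose Python analogue (Stevie itself operates on logic programs, which this does not show), two structurally identical recursive programs compress into a single fold abstraction plus the two parts that actually differ.

```python
# A loose functional analogue of higher-order refactoring (Stevie works
# on logic programs; this only conveys the compression idea).
from functools import reduce

# Before: two structurally identical recursive definitions.
def sum_list(xs):
    return 0 if not xs else xs[0] + sum_list(xs[1:])

def product_list(xs):
    return 1 if not xs else xs[0] * product_list(xs[1:])

# After: the shared recursion scheme is abstracted as fold, and each
# program shrinks to the operator and unit that distinguish it.
def fold(op, unit, xs):
    return reduce(op, xs, unit)

sum_list2 = lambda xs: fold(lambda a, b: a + b, 0, xs)
product_list2 = lambda xs: fold(lambda a, b: a * b, 1, xs)

assert sum_list([1, 2, 3]) == sum_list2([1, 2, 3]) == 6
assert product_list([1, 2, 4]) == product_list2([1, 2, 4]) == 8
```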