Reinforcement learning is a widely used approach for training an agent to maximize rewards in a given environment. Action policies learned with this technique see a broad range of applications in practical areas like games, healthcare, robotics, or autonomous driving. However, enforcing ethical behavior or norms based on deontic constraints that the agent should adhere to during policy execution remains a complex challenge. In particular, constraints that emerge after training can necessitate redoing policy learning, which is costly and, more critically, time-intensive. To mitigate this problem, we present a framework for policy fixing in case of a norm violation, which allows the agent to stay operational. Based on answer set programming (ASP), emergency plans are generated that exclude norm violations by future actions within a horizon of interest, or minimize their cost. By combining and developing optimization techniques, efficient policy fixing under real-time constraints can be achieved.
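As a minimal sketch of the idea (not the paper's encoding; the predicates, actions, and costs are invented for illustration), the following example uses clingo's Python API to generate an emergency plan in which weak constraints minimize the total cost of norm violations over a bounded horizon:

```python
# Hedged sketch: emergency-plan generation with ASP via clingo's Python API.
# The norm, actions, and costs below are illustrative only.
from clingo import Control

PROGRAM = """
#const horizon = 3.
step(1..horizon).
action(wait; detour; proceed).

% choose exactly one action per step
1 { do(A,T) : action(A) } 1 :- step(T).

% illustrative norm: 'proceed' violates a deontic constraint at cost 5
violation(T, 5) :- do(proceed, T).

% prefer plans minimizing total violation cost
:~ violation(T, C). [C@1, T]
"""

def main():
    ctl = Control(["--opt-mode=optN"])  # enumerate optimal models
    ctl.add("base", [], PROGRAM)
    ctl.ground([("base", [])])
    plan = []
    def on_model(model):                # keeps the last (optimal) model
        plan[:] = [str(s) for s in model.symbols(shown=True)]
    ctl.solve(on_model=on_model)
    print("emergency plan:", sorted(a for a in plan if a.startswith("do")))

if __name__ == "__main__":
    main()
```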
We extend belief revision theory from propositional logic to the modal logic S5. Our first contribution takes the form of three new postulates (M1-M3) that go beyond the AGM ones and capture the idea of minimal change in the presence of modalities. Concerning the construction of modal revision operations, we work with set pseudo-distances, i.e., distances between sets of points that may violate the triangle inequality. Our second contribution is the identification of three axioms (A3-A5) that go beyond the standard axioms of metrics. Loosely speaking, our main result states the following: if a pseudo-distance satisfies certain axioms, then the induced revision operation satisfies (M1-M3). We investigate three pseudo-distances from the literature (Dhaus, Dinj, Dsum), and the three induced revision operations (*Haus, *Inj, *Sum). Using our main result, we show that only *Sum satisfies all of (M1-M3). As a last contribution, we revisit a major criticism of AGM operations, namely that the revisions of (p ∧ q) and (p ∧ (p → q)) are identical. We show that the problem disappears if instead of material implication we use the modal operator of strict implication that can be defined in S5.
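To make the last point concrete, the standard definition of strict implication in S5 and the resulting non-equivalence are shown below (notation ours; the paper's symbols may differ):

```latex
\newcommand{\strictif}{\mathbin{\Rightarrow_{\Box}}} % illustrative notation
% Strict implication, definable in S5:
\[ p \strictif q \;:=\; \Box(p \to q) \]
% Classically, p \wedge q and p \wedge (p \to q) are equivalent, so AGM
% revision cannot distinguish them; with strict implication the inputs differ:
\[ p \wedge q \;\not\equiv\; p \wedge (p \strictif q) \]
```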
In this paper, we introduce a new family of argument-ranking semantics which can be seen as a refinement of the classification of arguments into skeptically accepted, credulously accepted and rejected. To this end, we use so-called social ranking functions, which have been developed recently to rank individuals based on their performance in groups. We provide necessary and sufficient conditions for a social ranking function to give rise to an argument-ranking semantics satisfying the desired refinement property.
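As a hedged illustration of the refinement in question (the extensions and the ranking rule are hypothetical, not the paper's construction), the toy sketch below ranks arguments by how many extensions contain them, which refines the skeptical/credulous/rejected classification:

```python
# Toy sketch: a social-ranking-style refinement of argument acceptance.
# Arguments in all extensions (skeptical) rank above those in some (credulous),
# which rank above those in none (rejected); the counts refine further.
def rank(arguments, extensions):
    score = {a: sum(a in e for e in extensions) for a in arguments}
    return sorted(arguments, key=lambda a: -score[a])

extensions = [{"a", "b"}, {"a", "c"}]          # hypothetical extensions
print(rank(["a", "b", "c", "d"], extensions))  # ['a', 'b', 'c', 'd']
```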
This paper presents fast and exact methods for computing the probability of an argument's acceptance under Dung's semantics in the Constellation paradigm of Abstract Argumentation. For (directed) Singly-Connected Graphs (SCGs), the problem can now be solved in linearithmic time instead of being exponential in the number of attacks, as reported in the literature. Moreover, in the more general case of Directed Acyclic Graphs (DAGs), we provide an algorithm whose time complexity is linearithmic in the product of the out-degrees of dependent arguments, i.e., arguments reaching the argument considered for acceptance through multiple paths in the graph. We theoretically show that this complexity is lower than the lower bound of the (exact) Constellation method, which is also supported by empirical results. Our approach for DAGs is also compared with the (approximate) Monte-Carlo method, which is stopped when exact results are obtained. Within this time budget, Monte-Carlo still produces significant errors, underscoring the fast computation of our approach.
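For intuition, the sketch below (our simplification, not the paper's algorithm) computes grounded-semantics acceptance probabilities on a tree-shaped SCG. In acyclic graphs the grounded labelling decides every argument, and in SCGs attackers share no ancestors, so their statuses are independent and the probability factorizes into a product:

```python
# Hedged sketch: P(argument accepted) on a tree-shaped constellation SCG where
# each attack (b -> a) exists independently with probability p[(b, a)].
def p_accepted(a, attackers, p):
    """a is accepted iff no attacker is both present and accepted."""
    prob = 1.0
    for b in attackers.get(a, []):
        prob *= 1.0 - p[(b, a)] * p_accepted(b, attackers, p)
    return prob

# toy SCG: c -> b -> a and d -> a
attackers = {"a": ["b", "d"], "b": ["c"]}
p = {("b", "a"): 0.5, ("d", "a"): 0.2, ("c", "b"): 0.9}
print(p_accepted("a", attackers, p))  # 0.76
```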
Temporal knowledge graph (TKG) reasoning that infers future missing facts is an essential and challenging task. Predicting future events typically relies on closely related historical facts, yielding more accurate results for repetitive or periodic events. However, for future events with sparse historical interactions, the effectiveness of this method, which focuses on leveraging high-frequency historical information, diminishes. Recently, the capabilities of diffusion models in image generation have opened new opportunities for TKG reasoning. Therefore, we propose a graph node diffusion model with dual-domain periodic contrastive learning (DPCL-Diff). The graph node diffusion model (GNDiff) introduces noise into sparsely related events to simulate new events, generating high-quality data that better conforms to the actual distribution. This generative mechanism significantly enhances the model's ability to reason about new events. Additionally, dual-domain periodic contrastive learning (DPCL) maps periodic and non-periodic event entities to Poincaré and Euclidean spaces, respectively, leveraging their characteristics to distinguish similar periodic events effectively. Experimental results on four public datasets show that DPCL-Diff significantly outperforms state-of-the-art TKG models in event prediction, confirming our approach's effectiveness. This study also investigates the combined effectiveness of GNDiff and DPCL in TKG tasks.
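For reference, the hyperbolic side of the dual-domain mapping presumably uses the standard Poincaré-ball distance below (the paper's exact formulation may differ):

```latex
% Standard distance on the Poincare ball B^n = { x : ||x|| < 1 }:
\[
  d_{\mathbb{B}}(u, v) \;=\; \operatorname{arcosh}\!\left(
    1 + \frac{2\,\lVert u - v \rVert^{2}}
             {\left(1 - \lVert u \rVert^{2}\right)\left(1 - \lVert v \rVert^{2}\right)}
  \right).
\]
```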
Reasoning about future, unknown facts on temporal knowledge graphs (TKGs) is a challenging task of significant academic and practical value for various fields. Existing studies exploring explainable reasoning concentrate on modeling comprehensible temporal paths relevant to the query. Yet, these path-based methods primarily focus on local temporal paths appearing in recent times, failing to capture the complex temporal paths in the TKG and resulting in the loss of longer historical relations related to the query. Motivated by the Dual Process Theory in cognitive science, we propose a Cognitive Temporal Knowledge Extrapolation framework (CognTKE), which introduces a novel temporal cognitive relation directed graph (TCR-Digraph) and performs interpretable global shallow reasoning and local deep reasoning over the TCR-Digraph. Specifically, the proposed TCR-Digraph is constructed by retrieving significant local and global historical temporal relation paths associated with the query. In addition, CognTKE presents a global shallow reasoner and a local deep reasoner to perform global one-hop temporal relation reasoning (System 1) and local complex multi-hop path reasoning (System 2) over the TCR-Digraph, respectively. Experimental results on four benchmark datasets demonstrate that CognTKE achieves significant accuracy improvements over state-of-the-art baselines and delivers excellent zero-shot reasoning ability.
Expressing negative conditions is a crucial feature of query languages for knowledge bases (KBs). Answering such queries over ontological KBs, however, is a very challenging task that becomes undecidable even for lightweight Description Logic (DL) ontologies. Such negative results hold even for Conjunctive Queries (CQs) equipped with basic forms of negative conditions such as the so-called safe negation or inequality atoms. One ontology language that is seemingly unaffected by these results is (the DL counterpart of) RDFS, even if equipped with disjointness axioms. Answering CQs with inequalities over such ontologies is known to be Pi^p_2-complete if the number of inequality atoms is unbounded, and NP-complete if we limit this number to one. Notably, these results leave open the cases of CQs with a fixed number of inequality atoms greater than one. Additionally, such a thorough analysis is missing for CQs with safe negation. In this paper, we embark on a refined analysis of the combined complexity of answering CQs with inequality atoms and safe negation over RDFS ontologies augmented with disjointness axioms. Firstly, we provide a unified Pi^p_2 query answering algorithm for the general problem. Secondly, we confirm the generally held conjecture according to which answering CQs with two inequality atoms over such ontologies is already Pi^p_2-hard. This result closes an important gap in the current literature and has an impact on the widely influential problem of query containment. Lastly, for CQs with safe negation, we prove a behavior similar to that of CQs with inequality atoms. Specifically, we show that answering CQs with at most one negated atom can be done in NP, while allowing at most two negated atoms is sufficient to obtain Pi^p_2-hardness.
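For illustration, a Boolean CQ with two inequality atoms, the case settled by our hardness result, has the following shape (hypothetical predicates):

```latex
% Illustrative Boolean CQ with two inequality atoms:
\[
  q() \leftarrow \mathit{Teaches}(x, y) \wedge \mathit{Teaches}(x, z)
        \wedge \mathit{Teaches}(w, y) \wedge y \neq z \wedge x \neq w .
\]
```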
In this paper, we focus on estimating the causal effect of an intervention over time on a dynamical system. To that end, we formally define causal interventions and their effects over time on discrete-time stochastic processes (DSPs). Then, we show under which conditions the equilibrium states of a DSP, both before and after a causal intervention, can be captured by a structural causal model (SCM). With such an equivalence at hand, we provide an explicit mapping from vector autoregressive models (VARs), broadly applied in econometrics, to linear SCMs that are potentially cyclic and/or affected by unmeasured confounders. The resulting causal VAR framework allows us to perform causal inference over time from observational time series data. Our experiments on synthetic and real-world datasets show that the proposed framework achieves strong performance in terms of observational forecasting while enabling accurate estimation of the causal effect of interventions on dynamical systems. Through a case study, we illustrate the practical questions that can be addressed using the proposed causal VAR framework.
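As a hedged illustration (notation ours; the paper's mapping to SCMs is more general), a linear VAR(1) process and a hard intervention on its i-th component from time t_0 onward can be written as:

```latex
\[
  X_t = A X_{t-1} + \varepsilon_t, \qquad
  \mathrm{do}\!\left(X^{(i)}_t = x\right) \text{ for } t \ge t_0 :
  \text{ clamp } X^{(i)}_t \text{ to } x \text{ and sever the } i\text{-th row of } A .
\]
```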
Inconsistency is a common problem in knowledge bases, and so there is a need to analyse it. Inconsistency measures assess its severity, but there is a more basic question: "where is the inconsistency?". Typically, not all subsets of a knowledgebase cause the inconsistency, and minimal inconsistent sets have been the standard way to localise the germane ones, even though they have shortcomings in some scenarios. Recently, ⋆-conflicts were proposed as a more suitable definition to localise inconsistency when considering a method to repair it. In general, however, there is no way to tell whether a definition sensibly captures the germane conflicts. This work provides a set of desirable properties to assess definitions of germane conflicts. Also, a new conflict definition, based on substitution, is presented and evaluated via the proposed properties, and the related computational complexity is analysed.
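As a toy illustration of localising inconsistency via minimal inconsistent sets (restricted to propositional literals, where inconsistency is simply a complementary pair; this is not the paper's framework):

```python
# Toy sketch: enumerate minimal inconsistent subsets of a literal knowledgebase.
from itertools import combinations

def inconsistent(subset):
    """A set of literals is inconsistent iff it contains both l and ~l."""
    return any(("~" + a) in subset for a in subset)

def minimal_inconsistent_sets(kb):
    mises = []
    for r in range(1, len(kb) + 1):
        for sub in combinations(kb, r):
            # keep only inconsistent subsets with no already-found MIS inside
            if inconsistent(sub) and not any(set(m) <= set(sub) for m in mises):
                mises.append(sub)
    return mises

kb = ["p", "~p", "q", "~q", "r"]
print(minimal_inconsistent_sets(kb))  # [('p', '~p'), ('q', '~q')]
```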
We present a new formal framework for generalized planning (GP) based on the situation calculus extended with LTL constraints. The GP problem is specified by a first-order basic action theory whose models are the problem instances. This low-level theory is then abstracted into a high-level propositional nondeterministic basic action theory with a single model. A refinement mapping relates the two theories. LTL formulas are used to specify the temporally extended goals as well as assumed trace constraints. If all LTL trace constraints hold at the low level and the high-level model can simulate all the low-level models with respect to the mapping, we say that we have a temporally lifted abstraction. We prove that if we have such an abstraction and the agent has a strategy to achieve an LTL goal under some trace constraints at the abstract level, then there exists a refinement of the strategy to achieve the refinement of the goal at the concrete level. We use LTL synthesis to generate the strategy at the abstract level. We illustrate our approach by synthesizing a program that solves a data structure manipulation problem.
We introduce a novel language for reasoning about agents' cognitive attitudes of both epistemic and motivational type. We interpret it by means of a computationally grounded semantics using belief bases. Our language includes five types of modal operators for implicit belief, complete attraction, complete repulsion, realistic attraction and realistic repulsion. We give an axiomatization and show that our operators are not mutually expressible and that they can be combined to represent a large variety of psychological concepts including ambivalence, indifference, being motivated, being demotivated and preference. We present a dynamic extension of the language that supports reasoning about the effects of belief change operations. Finally, we provide a succinct formulation of model checking for our languages and a PSPACE model checking algorithm relying on a reduction to TQBF. We report experimental results on the computation time of the implemented algorithm in a concrete example.
Equational reasoning is one of the most intuitive and widely used types of symbolic reasoning. In this setting, the goal is to determine whether a given ground equation t=t' follows as a consequence of a set of equational axioms E using the process of replacing equals with equals. An equation t=t' is variable-preserving if each variable occurring in t occurs in t' and vice versa. Such equations capture essential algebraic properties, such as commutativity, associativity and distributivity, which arise in a wide variety of contexts. In this work, we show that for each fixed set E of variable-preserving equations, the set of ground equations derivable from E in depth at most d is soundly over-approximable in fixed-parameter tractable time. More specifically, we devise an algorithm that takes as input a set E of variable-preserving equations and a target ground equation t=t', and always halts with a YES or NO answer. 1) If equation t=t' can be derived from E in depth at most d, the algorithm always halts with a YES. 2) If equation t=t' does not belong to the equational closure of E, then the algorithm always halts with a NO. In other words, the set of YES instances contains the set of ground equations that can be deduced from E in depth at most d, and possibly other equations that require derivations of higher depth. However, this set contains no ground equation that is not in the equational closure of E. For this reason, the algorithm is sound. Our algorithm works in time f(d) * |t| * |t'|, where |t| and |t'| are the number of symbols in t and t' respectively, d is the depth parameter, and f(d) is a function whose growth depends only on d and on parameters associated with the equations in E.
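The problem setting can be sketched as follows (a toy illustration, not our fixed-parameter algorithm): with the single variable-preserving axiom x+y = y+x and ground terms encoded as nested tuples, depth-bounded derivability is a bounded breadth-first search over rewrites:

```python
# Toy sketch: depth-bounded "replace equals by equals" for commutativity of '+'.
def rewrites(t):
    """All terms obtained from t by one application of x+y = y+x."""
    if not isinstance(t, tuple):
        return
    op, l, r = t
    if op == '+':
        yield (op, r, l)              # rewrite at the root
    for lt in rewrites(l):
        yield (op, lt, r)             # rewrite inside the left subterm
    for rt in rewrites(r):
        yield (op, l, rt)             # rewrite inside the right subterm

def derivable(s, t, depth):
    """Is s = t derivable in at most `depth` rewrite steps?"""
    frontier, seen = {s}, {s}
    for _ in range(depth):
        frontier = {u for v in frontier for u in rewrites(v)} - seen
        seen |= frontier
    return t in seen

# a+(b+c) = (c+b)+a is derivable in depth 2
print(derivable(('+', 'a', ('+', 'b', 'c')), ('+', ('+', 'c', 'b'), 'a'), 2))
```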
Generalized planning is concerned with how to find a single plan to solve multiple similar planning instances. Abstractions are widely used for solving generalized planning, and QNP (qualitative numeric planning) is a popular abstract model. Recently, Cui et al. showed that a plan solves a sound and complete abstraction of a generalized planning problem if and only if the refined plan solves the original problem. However, existing work on automatic abstraction for generalized planning can hardly guarantee soundness, let alone completeness. In this paper, we propose an automatic sound and complete abstraction method for generalized planning with baggable types. We use a variant of QNP, called bounded QNP (BQNP), where integer variables are increased or decreased by only one. Since BQNP is undecidable, we propose and implement a sound but incomplete solver for BQNP. We present an automatic method to abstract a BQNP problem from a classical planning instance with baggable types. The basic idea is to introduce a counter for each bag of indistinguishable tuples of objects. We define a class of domains called proper baggable domains, and show that for such domains, the BQNP problem obtained by our automatic method is a sound and complete abstraction for a generalized planning problem whose instances share the same bags as the given instance, though the sizes of the bags may differ. Thus, the refined plan of a solution to the BQNP problem is a solution to the generalized planning problem. Finally, we implement our abstraction method, and experiments on a number of domains demonstrate the promise of our approach.
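The counter idea can be sketched as follows (illustrative domain and names, not our implemented solver): each bag of indistinguishable objects becomes one integer counter that actions change by exactly one, and a loop plan over the counter solves every instance regardless of bag size.

```python
# Hedged sketch of the BQNP counter abstraction: one counter per bag,
# incremented/decremented by exactly one per action.
def abstract_plan(n_unprocessed: int) -> list[str]:
    """Generalized plan at the abstract level: process one object at a time
    until the bag's counter reaches zero."""
    plan = []
    while n_unprocessed > 0:       # qualitative test on the counter
        plan.append("process-one")
        n_unprocessed -= 1         # BQNP restriction: change by exactly one
    return plan

# the same abstract plan refines to concrete plans for bags of any size
for size in (1, 3, 5):
    print(size, abstract_plan(size))
```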
Knowledge graph (KG) completion aims to identify additional facts that can be inferred from the existing facts in the KG. Recent developments in this field have explored this task in the inductive setting, where at test time one sees entities that were not present during training; the most performant models in the inductive setting have employed path encoding modules in addition to standard subgraph encoding modules. This work similarly focuses on KG completion in the inductive setting, but without the explicit use of path encodings, which can be time-consuming and introduce several hyperparameters that require costly hyperparameter optimization. Our approach uses a Transformer-based subgraph encoding module only; we introduce connection-biased attention and entity role embeddings into the subgraph encoding module to eliminate the need for an expensive and time-consuming path encoding module. Evaluations on standard inductive KG completion benchmark datasets demonstrate that our Connection-Biased Link Prediction (CBLiP) model has superior performance to models that do not use path information. Compared to models that utilize path information, CBLiP shows competitive or superior performance while being faster. Additionally, to show that the effectiveness of connection-biased attention and entity role embeddings carries over to the transductive setting, we also evaluate CBLiP on the relation prediction task in that setting.
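A minimal sketch of what connection-biased attention could look like (illustrative, not the paper's implementation): a learned scalar bias, indexed by the connection type between two subgraph entities (including "no edge"), is added to the attention logits before the softmax.

```python
import numpy as np

def connection_biased_attention(X, Wq, Wk, Wv, conn, bias_table):
    """X: (n, d) entity features; conn: (n, n) integer connection-type ids;
    bias_table: per-connection-type learned scalar biases (hypothetical)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    logits = Q @ K.T / np.sqrt(K.shape[1])
    logits += bias_table[conn]                     # connection-biased term
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)              # row-wise softmax
    return w @ V

rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.normal(size=(n, d))
W = lambda: rng.normal(size=(d, d))
conn = rng.integers(0, 3, size=(n, n))            # 3 hypothetical types
out = connection_biased_attention(X, W(), W(), W(), conn, rng.normal(size=3))
print(out.shape)  # (5, 8)
```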
This paper shows that the semantics of programs with aggregates implemented by the solvers clingo and dlv can be characterized as extended First-Order formulas with intensional functions in the logic of Here-and-There. Furthermore, this characterization can be used to study the strong equivalence of programs with aggregates under either semantics. We also present a transformation that reduces the task of checking strong equivalence to reasoning in classical First-Order logic, which serves as a foundation for automating this procedure.
This paper introduces a general framework for generate-and-test-based solvers for epistemic logic programs that can be instantiated with different generate and test programs, and it provides sufficient conditions on those programs for the correctness of the solvers built using this framework. It also introduces a new generator program that incorporates the propagation of epistemic consequences and shows that this can exponentially reduce the number of candidates that need to be tested while only incurring a linear overhead. We implement a new solver based on these theoretical findings and experimentally show that it outperforms existing solvers by achieving a ~3.3x speed-up and solving 87% more instances on well-known benchmarks.
Abstraction is an important and useful concept in the field of artificial intelligence. To the best of our knowledge, there is no syntactic method to compute a sound and complete abstraction from a given low-level basic action theory and a refinement mapping. This paper aims to address this issue. To this end, we first present a variant of the situation calculus, namely linear integer situation calculus, which serves as the formalization of high-level basic action theories. We then migrate Banihashemi, De Giacomo, and Lesperance's abstraction framework to one relating linear integer situation calculus to extended situation calculus. Furthermore, we identify a class of Golog programs, namely guarded actions, to restrict low-level Golog programs, and impose some restrictions on refinement mappings. Finally, we design a syntactic approach to computing a sound and complete abstraction from a low-level basic action theory and a restricted refinement mapping.
In the logic of theory change, the AGM model has acquired the status of a standard model. However, the AGM model does not seem adequate for some contexts and application domains, which has inspired many researchers to propose extensions and generalizations of AGM. Among the most important of these extensions are belief bases. Belief bases are more expressive than belief sets, as explicit and implicit beliefs have different statuses. In this paper, we present reformulation, a belief change operation that allows us to reformulate a belief base, making some particular sentences explicit without modifying the consequences of the belief base. For instance, the base {p, p → q} can be reformulated as {p, q, p → q}: the consequence q becomes explicit while the set of consequences is unchanged. We provide a constructive method and its axiomatic characterization.
The profusion of knowledge encoded in large language models (LLMs) and their ability to apply this knowledge zero-shot in a range of settings makes them promising candidates for use in decision-making. However, they are currently limited by their inability to provide outputs which can be faithfully explained and effectively contested to correct mistakes. In this paper, we attempt to reconcile these strengths and weaknesses by introducing argumentative LLMs (ArgLLMs), a method for augmenting LLMs with argumentative reasoning. Concretely, ArgLLMs construct argumentation frameworks, which then serve as the basis for formal reasoning in support of decision-making. The interpretable nature of these argumentation frameworks and formal reasoning means that any decision made by ArgLLMs may be explained and contested. We evaluate ArgLLMs’ performance experimentally in comparison with state-of-the-art techniques, in the context of the decision-making task of claim verification. We also define novel properties to characterise contestability and assess ArgLLMs formally in terms of these properties.
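As a hedged illustration of the formal-reasoning step (not the paper's code), the sketch below evaluates a claim's strength in a tiny bipolar argumentation framework using a DF-QuAD-style gradual semantics, with base scores standing in for hypothetical LLM-assigned confidences:

```python
# Toy sketch: DF-QuAD-style evaluation of a claim with supports and attacks.
def aggregate(scores):
    """Combine child strengths: 1 - prod(1 - s)."""
    acc = 1.0
    for s in scores:
        acc *= 1.0 - s
    return 1.0 - acc

def strength(base, supporters, attackers):
    sup, att = aggregate(supporters), aggregate(attackers)
    if sup >= att:
        return base + (1.0 - base) * (sup - att)   # net support raises strength
    return base - base * (att - sup)               # net attack lowers it

# one claim (base 0.5), two supporting and one attacking argument
print(strength(0.5, supporters=[0.7, 0.4], attackers=[0.6]))  # 0.61
```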
First-order linear temporal logic (FOLTL) is a flexible and expressive formalism capable of naturally describing complex behaviors and properties. Although the logic is in general highly undecidable, the idea of using it as a specification language for the verification of complex infinite-state systems is appealing. However, a missing piece, which has proved to be an invaluable tool for other temporal logics, is an automaton model capable of capturing the logic. In this paper we address this issue by defining and studying such a model, which we call the first-order automaton. We define this very general class of automata and the corresponding notion of regular first-order language (of finite words), showing their closure under most language-theoretic operations. We show how they can capture any FOLTL formula over finite words, over any signature and theory, and provide sufficient conditions for the semi-decidability of their non-emptiness problem. Then, to show the usefulness of the formalism, we prove the decidability of monodic FOLTL, a classic result from the literature, with a simpler and more direct proof.
Structural equation models (SEMs) are a standard approach to representing causal dependencies between variables. In this paper we propose a new interpretation of existing formalisms in the field of Actual Causality, in which SEMs are viewed as mechanisms transforming the dynamics of exogenous variables into the dynamics of endogenous variables. This allows us to combine counterfactual causal reasoning with existing temporal logic formalisms, and to introduce a temporal logic, CPLTL, for causal reasoning about such structures. We then demonstrate that the standard restriction to so-called recursive models (those with no cycles in the dependency graph) is not necessary in our approach, which provides us with extra tools for reasoning about mutually dependent processes and feedback loops. Finally, we introduce the notion of model equivalence for temporal causal models and show that CPLTL has an efficient model-checking procedure.
We study the problem of realizing strategies for an LTLf goal specification while ensuring that, in case of unreliability of certain input variables, at least a backup LTLf specification is satisfied. We formally define the problem and characterize its worst-case complexity as 2EXPTIME-complete, like standard LTLf synthesis. We then devise three different solution techniques: one based on direct automata manipulation, which is 2EXPTIME; one disregarding unreliable input variables by adopting a belief construction, which is 3EXPTIME; and one leveraging second-order quantified LTLf (QLTLf), which is 2EXPTIME and allows for a direct encoding into monadic second-order logic, which in turn is worst-case nonelementary. We prove their correctness and evaluate them against each other empirically. Interestingly, the theoretical worst-case bounds do not translate into observed performance: the MSO technique performs best, followed by belief construction and direct automata manipulation. As a byproduct of our study, we provide a general synthesis procedure for arbitrary QLTLf specifications.
Conditional independence is a crucial concept supporting adequate modelling and efficient reasoning in probabilistic settings. In knowledge representation, the idea of conditional independence has also been introduced for specific formalisms, such as propositional logic and belief revision. In this paper, the notion of conditional independence is studied in the algebraic framework of approximation fixpoint theory. This gives a language-independent account of conditional independence that can be straightforwardly applied to any logic with fixpoint semantics. It is shown how this notion allows global reasoning to be reduced to parallel instances of local reasoning, leading to fixed-parameter tractability results. Furthermore, relations to existing notions of conditional independence are discussed, and the framework is applied to normal logic programming.
We investigate the synthesis of policies for high-level agent programs expressed in Golog, a language based on situation calculus that incorporates nondeterministic programming constructs. Unlike traditional approaches for program realization that assume full agent control or rely on incremental search, we address scenarios where environmental nondeterminism significantly influences program outcomes. Our synthesis problem involves deriving a policy that successfully realizes a given Golog program while ensuring the satisfaction of a temporal specification, expressed in Linear Temporal Logic on finite traces (LTLf), across all possible environmental behaviors. By leveraging an expressive class of first-order action theories, we construct a finite game arena that encapsulates program executions and tracks the satisfaction of the temporal goal. A game-theoretic approach is employed to derive such a policy. Experimental results demonstrate this approach's feasibility in domains with unbounded objects and non-local effects. This work bridges agent programming and temporal logic synthesis, providing a framework for robust agent behavior in nondeterministic environments.
This paper introduces two new compilation languages restricting weak decomposable negation normal form (wDNNF) circuits and integrates them into the knowledge compilation map. Positive (resp. negative) wDNNF circuits restrict wDNNF circuits so that each variable shared among the inputs of a conjunction node can only have positive (resp. negative) occurrences in that subcircuit. Unlike wDNNF circuits, pwDNNF (resp. nwDNNF) circuits satisfy the maximum (resp. minimum) cardinality query. We present a compiler for converting CNF formulae into pwDNNF and nwDNNF circuits by extending Bella, the state-of-the-art compiler for wDNNF circuits. We introduce a new caching scheme, called Cara, that exploits isomorphism. Using that scheme, we present a new compilation method based on copying subcircuits, which may significantly speed up compilation at the expense of larger circuit sizes. Our experiments demonstrate that nwDNNF circuits are suitable for computing most probable explanations (MPEs) in two-layer Bayesian networks (BNs) with large domains.
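For intuition, the classic bottom-up minimum-cardinality query on a decomposable circuit, the query nwDNNF circuits are shown to support, can be sketched as follows (our simplified encoding, not the Bella/Cara implementation):

```python
# Toy sketch: minimum cardinality (fewest true variables in any model) on an
# NNF circuit: positive literals score 1, negative 0; AND sums (decomposable
# children have disjoint variables), OR takes the minimum.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                  # 'lit', 'and', or 'or'
    var: str = ""
    positive: bool = True
    children: list = field(default_factory=list)

def min_cardinality(n: Node) -> int:
    if n.kind == 'lit':
        return 1 if n.positive else 0
    vals = [min_cardinality(c) for c in n.children]
    return sum(vals) if n.kind == 'and' else min(vals)

# (x AND y) OR (NOT x AND y): minimum number of true variables is 1
x, y, nx = Node('lit', 'x'), Node('lit', 'y'), Node('lit', 'x', positive=False)
circuit = Node('or', children=[Node('and', children=[x, y]),
                               Node('and', children=[nx, Node('lit', 'y')])])
print(min_cardinality(circuit))  # 1
```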