Dung’s Argumentation Framework (AF) has been extended in several directions, including the possibility of representing unquantified uncertainty about the existence of arguments and attacks. The framework resulting from such an extension is called incomplete AF (iAF). In this paper, we first introduce three new satisfaction problems named totality, determinism and functionality, and investigate their computational complexity for both AF and iAF under several semantics. We also investigate the complexity of credulous and skeptical acceptance in iAF under semi-stable semantics—a problem left open in the literature. We then show that any iAF can be rewritten into an equivalent one where either only (unattacked) arguments or only attacks are uncertain. Finally, we relate iAF to probabilistic argumentation frameworks, where uncertainty is quantified.
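As background for the formalism that iAFs extend, the following is a minimal, illustrative Python sketch (not taken from the paper) that brute-forces the stable extensions of a small Dung AF; an iAF would additionally mark some arguments or attacks as uncertain and reason over all completions.

```python
from itertools import combinations

def stable_extensions(args, attacks):
    """Brute-force the stable extensions of a Dung AF: conflict-free
    sets of arguments that attack every argument outside the set."""
    exts = []
    args = sorted(args)
    for r in range(len(args) + 1):
        for subset in combinations(args, r):
            s = set(subset)
            conflict_free = not any((a, b) in attacks for a in s for b in s)
            attacks_rest = all(any((a, b) in attacks for a in s)
                               for b in set(args) - s)
            if conflict_free and attacks_rest:
                exts.append(s)
    return exts

# Toy AF: a and b attack each other, b attacks c.
print(stable_extensions({"a", "b", "c"}, {("a", "b"), ("b", "a"), ("b", "c")}))
# -> [{'b'}, {'a', 'c'}]
```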
Random forests have long been considered powerful model ensembles in machine learning. By training multiple decision trees, whose diversity is fostered through data and feature subsampling, the resulting random forest can lead to more stable and reliable predictions than a single decision tree. This, however, comes at the cost of decreased interpretability: while decision trees are often easily interpretable, the predictions made by random forests are much more difficult to understand, as they involve a majority vote over multiple decision trees. In this paper, we examine different types of reasons that explain "why" an input instance is classified as positive or negative by a Boolean random forest. Notably, as an alternative to prime-implicant explanations, which take the form of subset-minimal implicants of the random forest, we introduce majoritary reasons, which are subset-minimal implicants of a strict majority of decision trees. For these abductive explanations, we investigate the tractability of the generation problem (finding one reason) and of the optimization problem (finding one minimum-sized reason). Unlike prime-implicant explanations, majoritary reasons may contain redundant features. However, in practice, prime-implicant explanations, for which the identification problem is DP-complete, are slightly larger than majoritary reasons, which can be generated using a simple linear-time greedy algorithm. They are also significantly larger than minimum-sized majoritary reasons, which can be approached using an anytime Partial MaxSAT algorithm.
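To make the notion concrete, here is a small, hedged Python sketch (toy code, not the authors' implementation) of a greedy procedure in the spirit of the linear-time algorithm mentioned above: starting from the instance's literals, it drops features as long as the remaining term still implies a strict majority of the trees. Implication is checked by brute force over the free features, which only works for tiny examples.

```python
from itertools import product

def implies_tree(term, tree, n_features):
    """Check whether the partial assignment `term` (feature -> bool)
    forces `tree` to output True for every completion (brute force)."""
    free = [i for i in range(n_features) if i not in term]
    for values in product([False, True], repeat=len(free)):
        x = {**term, **dict(zip(free, values))}
        if not tree(x):
            return False
    return True

def greedy_majoritary_reason(instance, trees, n_features):
    """Greedily drop literals from the instance while the remaining term
    still implies a strict majority of the trees."""
    term = dict(instance)
    needed = len(trees) // 2 + 1
    for f in list(term):
        candidate = {k: v for k, v in term.items() if k != f}
        if sum(implies_tree(candidate, t, n_features) for t in trees) >= needed:
            term = candidate
    return term

# Toy forest over 3 Boolean features x0, x1, x2.
trees = [
    lambda x: x[0] and x[1],
    lambda x: x[0],
    lambda x: x[1] or x[2],
]
instance = {0: True, 1: True, 2: True}   # classified positive by all trees
print(greedy_majoritary_reason(instance, trees, 3))
# -> {0: True, 2: True}, a term implying 2 of the 3 trees
```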
Causality plays a central role in reasoning about observations. In many cases, it might be useful to define the conditions under which a non-deterministic program can be called an actual cause of an effect in a setting where a sequence of programs is executed one after another. There are two perspectives: one where at least one execution of the program leads to the effect, and another where all executions do so. The former captures a "weak" notion of causation and is more general than the latter, stronger notion. In this paper, we give a definition of weak potential causes. Our analysis is performed within situation calculus basic action theories, and we consider programs formulated in the logic programming language ConGolog. Within this setting, we show how one can utilize a recently developed abstraction framework to relate causes at various levels of abstraction, which facilitates reasoning about programs as causes.
A common feature of non-monotonic logics is that the classical notion of equivalence does not preserve the intended meaning in light of additional information. Consequently, the term strong equivalence was coined in the literature and thoroughly investigated. In the present paper, the knowledge representation formalism under consideration is that of claim-augmented argumentation frameworks (CAFs), which provide a formal basis for analyzing conclusion-oriented problems in argumentation by adopting a claim-focused perspective. CAFs extend Dung AFs by associating with each argument a claim representing its conclusion. We investigate both ordinary and strong equivalence in CAFs, taking into account that one might be interested either in the actual arguments or in their claims only. The former point of view naturally yields an extension of strong equivalence for AFs to the claim-based setting, while the latter gives rise to a novel equivalence notion that is genuine to CAFs. We tailor, examine and compare these notions, obtaining a comprehensive study of this matter for CAFs. We conclude by investigating the computational complexity of the naturally arising decision problems.
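For readers unfamiliar with the notion, the standard definition of strong equivalence for plain AFs, which the paper lifts to the claim-based setting, is given below purely as background (notation assumed, not taken from the paper):

```latex
% Strong equivalence of AFs F and G under a semantics \sigma:
% F and G remain interchangeable under arbitrary expansions H.
F \equiv_{s}^{\sigma} G
  \iff
  \sigma(F \cup H) = \sigma(G \cup H) \quad \text{for every AF } H,
\qquad \text{where } F \cup H = (A_F \cup A_H,\; R_F \cup R_H).
```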
In the last few years, the field of logic-based knowledge representation has taken a lot of inspiration from database theory. A prominent example is that finite-model semantics in description logics (DLs) is being reconsidered as a desirable alternative to the classical one, and that query entailment has replaced knowledge-base satisfiability (KBSat) checking as the key inference problem. However, despite considerable effort, the overall picture concerning finite query answering in DLs is still incomplete. In this work we study the complexity of finite entailment of local queries (conjunctive queries and positive Boolean combinations thereof) in the Z family of DLs, one of the most powerful KR formalisms, lying on the verge of decidability. Our main result is that the DLs ZOQ and ZOI are finitely controllable, i.e., their finite and unrestricted entailment problems for local queries coincide. This allows us to reuse recently established upper bounds on querying these logics under the classical semantics. While we do not solve finite query entailment for the third main logic in the Z family, ZIQ, we provide a generic reduction from the finite entailment problem to the finite KBSat problem, working for ZIQ and some of its sublogics. Our proofs unify and solidify previously established results on finite satisfiability and finite query entailment for many known DLs.
In logic-based knowledge representation, query answering has essentially replaced mere satisfiability checking as the inferencing problem of primary interest. For knowledge bases in the basic description logic ALC, the computational complexity of conjunctive query (CQ) answering is well known to be EXPTIME-complete and hence not harder than satisfiability. This does not change when the logic is extended by certain features (such as counting or role hierarchies), whereas adding others (inverses, nominals, or transitivity together with role hierarchies) makes CQ answering exponentially harder. We contribute to this line of results by showing the surprising fact that even extending ALC by just the Self operator, which has proved innocuous in many other contexts, increases the complexity of CQ entailment to 2EXPTIME. As is common for this type of problem, our proof establishes a reduction from alternating Turing machines running in exponential space, but several novel ideas and encoding tricks are required to make the approach work in this specific, restricted setting.
State constraints in AI Planning globally restrict the legal environment states. Standard planning languages make closed-domain and closed-world assumptions. Here we address open-world state constraints formalized by planning over a description logic (DL) ontology. Previously, this combination of DL and planning has been investigated for the light-weight DL DL-Lite. Here we propose a novel compilation scheme into standard PDDL with derived predicates, which applies to more expressive DLs and is based on the rewritability of DL queries into Datalog with stratified negation. We also provide a new rewritability result for the DL Horn-ALCHOIQ, which allows us to apply our compilation scheme to quite expressive ontologies. In contrast, we show that in the slight extension Horn-SROIQ no such compilation is possible unless the weak exponential hierarchy collapses. Finally, we show that our approach can outperform previous work on existing benchmarks for planning with DL ontologies, and is feasible on new benchmarks taking advantage of more expressive ontologies.
Tensor factorization and distance-based models play important roles in knowledge graph completion (KGC). However, the relational matrices in KGC methods often induce high model complexity, bearing a high risk of overfitting. As a remedy, researchers have proposed a variety of regularizers, such as the tensor nuclear norm regularizer. Our motivation stems from the observation that previous work focuses only on the "size" of the parametric space, while leaving the implicit semantic information largely untouched. To address this issue, we propose a new regularizer, namely the Equivariance Regularizer (ER), which can suppress overfitting by leveraging the implicit semantic information. Specifically, ER can enhance the generalization ability of the model by employing the semantic equivariance between the head and tail entities. Moreover, it is a generic solution for both distance-based and tensor factorization-based models. Our experimental results indicate a clear and substantial improvement over state-of-the-art relation prediction methods.
Knowledge graph (KG) embeddings have shown great power in learning representations of entities and relations for link prediction tasks. Previous work usually embeds KGs into a single geometric space, such as Euclidean space (zero curvature), hyperbolic space (negative curvature) or hyperspherical space (positive curvature), to maintain their specific geometric structures (e.g., chain, hierarchy and ring structures). However, the topological structure of KGs appears to be complicated, since it may contain multiple types of geometric structures simultaneously. Therefore, embedding KGs in a single space, be it Euclidean, hyperbolic or hyperspherical, cannot capture the complex structures of KGs accurately. To overcome this challenge, we propose Geometry Interaction knowledge graph Embeddings (GIE), which learns spatial structures interactively between the Euclidean, hyperbolic and hyperspherical spaces. Theoretically, our proposed GIE can capture a richer set of relational information, model key inference patterns, and enable expressive semantic matching across entities. Experimental results on three well-established knowledge graph completion benchmarks show that GIE achieves state-of-the-art performance with fewer parameters.
Learning effective representations of entities and relations for knowledge graphs (KGs) is critical to the success of many multi-relational learning tasks. Existing methods based on graph neural networks learn a deterministic embedding function, which lacks sufficient flexibility to explore better choices when dealing with imperfect and noisy KGs, e.g., those with scarce labeled nodes and noisy graph structure. To this end, we propose a novel multi-relational graph Gaussian Process network (GGPN), which aims to improve the flexibility of deterministic methods by simultaneously learning a family of embedding functions, i.e., a stochastic embedding function. Specifically, a Bayesian Gaussian Process (GP) is proposed to model the distribution of this stochastic function, and the resulting representations are obtained by aggregating stochastic function values, i.e., messages, from neighboring entities. Two problems arise when leveraging GPs in GGPN: the proper choice of kernel function and the cubic computational complexity. To address the first problem, we propose a novel kernel function that explicitly takes the diverse relations between each pair of entities into account and can be adaptively learned in a data-driven way. We address the second problem by reformulating the GP as a Bayesian linear model, resulting in linear computational complexity. With these two solutions, our GGPN can be efficiently trained in an end-to-end manner. We evaluate GGPN on link prediction and entity classification tasks, and the experimental results demonstrate the superiority of our method. Our code is available at https://github.com/sysu-gzchen/GGPN.
We put forward Answer Set Programming (ASP) as a solution approach for three classical problems in Declarative Process Mining: Log Generation, Query Checking, and Conformance Checking. These problems correspond to different ways of analyzing business processes under execution, starting from sequences of recorded events, a.k.a. event logs. We tackle them in their data-aware variant, i.e., by considering events that carry a payload (set of attribute-value pairs), in addition to the performed activity, specifying processes declaratively with an extension of linear-time temporal logic over finite traces (LTLf). The data-aware setting is significantly more challenging than the control-flow one: Query Checking is still open, while the existing approaches for the other two problems do not scale well. The contributions of the work include an ASP encoding schema for the three problems, their solution, and experiments showing the feasibility of the approach.
Consider a bank that uses an AI system to decide which loan applications to approve. We want to ensure that the system is fair, that is, it does not discriminate against applicants based on a predefined list of sensitive attributes, such as gender and ethnicity. We expect there to be a regulator whose job it is to certify the bank’s system as fair or unfair. We consider issues that the regulator will have to confront when making such a decision, including the precise definition of fairness, dealing with proxy variables, and dealing with what we call allowed variables, that is, variables such as salary on which the decision is allowed to depend, despite being correlated with sensitive variables. We show (among other things) that the problem of deciding fairness as we have defined it is co-NP-complete, but then argue that, despite that, in practice the problem should be manageable.
In Ontology-Based Data Management (OBDM), an abstraction of a source query q is a query over the ontology capturing the semantics of q in terms of the concepts and the relations available in the ontology. Since a perfect characterization of a source query may not exist, the notions of best sound and complete approximations of an abstraction have been introduced and studied in the typical OBDM context, i.e., in the case where the ontology is expressed in DL-Lite, and source queries are expressed as unions of conjunctive queries (UCQs). Interestingly, if we restrict our attention to abstractions expressed as UCQs, even best approximations of abstractions are not guaranteed to exist. Thus, a natural question to ask is whether such limitations affect even larger classes of queries. In this paper, we answer this fundamental question for an essential class of queries, namely the class of monotone queries. We define a monotone query language based on disjunctive Datalog enriched with an epistemic operator, and show that its expressive power suffices for expressing the best approximations of monotone abstractions of UCQs.
Bottom-up knowledge compilation is a paradigm for generating representations of functions by iteratively conjoining constraints using a so-called apply function. When the input cannot be compiled efficiently into a language (generally a class of circuits) because optimal compiled representations are provably large, the problem lies not with the compilation algorithm so much as with the choice of a language too restrictive for the input. In contrast, in this paper we look at CNF formulas for which very small circuits exist and examine the efficiency of their bottom-up compilation in one of the most general languages, namely that of structured decomposable negation normal forms (str-DNNF). We prove that, while the inputs have constant-size representations as str-DNNF, any bottom-up compilation in the general setting where conjunction and structure modification are allowed takes exponential time and space, since large intermediate results have to be produced. This unconditionally proves that the inefficiency of bottom-up compilation resides in the bottom-up paradigm itself.
In this paper, we present a learning-based approach to the symbolic reasoning problem of dynamic argumentation, where the knowledge about attacks between arguments is incomplete or evolving. Specifically, we employ deep reinforcement learning to learn which attack relations between arguments should be added or deleted in order to enforce the acceptability of (a set of) arguments. We show that our Graph Neural Network (GNN) architecture EGNN can learn a near-optimal enforcement heuristic for all common argument-fixed enforcement problems, including problems for which no other (symbolic) solvers exist. We demonstrate that EGNN outperforms other GNN baselines and, on enforcement problems with high computational complexity, is more efficient than state-of-the-art symbolic solvers. Thus, we show that our neuro-symbolic approach is able to learn heuristics without the expert knowledge of a human designer and offers a valid alternative to symbolic solvers. We publish our code at https://github.com/DennisCraandijk/DL-Abstract-Argumentation.
The complete reason behind a decision is a Boolean formula that characterizes why the decision was made. This recently introduced notion has a number of applications, which include generating explanations, detecting decision bias and evaluating counterfactual queries. The prime implicants of the complete reason are known as sufficient reasons for the decision, and they correspond to what are known as PI explanations and abductive explanations. In this paper, we refer to the prime implicates of a complete reason as necessary reasons for the decision. We justify this terminology semantically and show that necessary reasons correspond to what are known as contrastive explanations. We also study the computation of complete reasons for multi-class decision trees and graphs with nominal and numeric features, for which we derive efficient, closed-form complete reasons. We further investigate the computation of shortest necessary and sufficient reasons for a broad class of complete reasons, which includes the derived closed forms and the complete reasons for Sentential Decision Diagrams (SDDs). We provide an algorithm that can enumerate the shortest necessary reasons in output-polynomial time. Enumerating the shortest sufficient reasons for this class of complete reasons is hard even for a single reason. For this problem, we provide an algorithm that appears to be quite efficient, as we show empirically.
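As a toy illustration of sufficient reasons (illustrative Python, not code from the paper), the following brute-force sketch enumerates the subset-minimal sub-terms of an instance that imply a positive classification, i.e., the prime implicants of the classifier covered by the instance; the paper's algorithms are far more efficient and additionally handle necessary (contrastive) reasons.

```python
from itertools import product, combinations

def is_implicant(term, f, n):
    """True if every completion of the partial assignment `term` satisfies f."""
    free = [i for i in range(n) if i not in term]
    return all(f({**term, **dict(zip(free, v))})
               for v in product([False, True], repeat=len(free)))

def sufficient_reasons(instance, f, n):
    """Subset-minimal sub-terms of the instance that imply f(.) = 1."""
    reasons = []
    items = sorted(instance.items())
    for size in range(n + 1):
        for subset in combinations(items, size):
            term = dict(subset)
            if any(r.items() <= term.items() for r in reasons):
                continue  # a smaller reason is already contained: not minimal
            if is_implicant(term, f, n):
                reasons.append(term)
    return reasons

# f(x) = (x0 and x1) or x2; the instance (1, 1, 1) is classified positive.
f = lambda x: (x[0] and x[1]) or x[2]
print(sufficient_reasons({0: True, 1: True, 2: True}, f, 3))
# -> [{2: True}, {0: True, 1: True}]
```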
Automated persuasion systems (APS) aim to persuade a user to believe something by entering into a dialogue in which arguments and counterarguments are exchanged. To maximize the probability that an APS is successful in persuading a user, it can identify a global policy that allows it to select the best arguments to present at each stage of the dialogue, whatever arguments the user presents. However, in real applications, such as healthcare, it is unlikely that the utility of the outcome of the dialogue will be the same, or the exact opposite, for the APS and the user. In order to deal with this situation, games in extended form have been harnessed for argumentation in Bi-party Decision Theory. This opens new problems that we address in this paper: (1) How can we use Machine Learning (ML) methods to predict utility functions for different subpopulations of users? and (2) How can we identify, for a new user, the best utility function from amongst those we have learned? To this end, we develop two ML methods, EAI and EDS, that leverage information coming from the users to predict their utilities. EAI is restricted to a fixed amount of information, whereas EDS can choose the information that best detects the subpopulation of a user. We evaluate EAI and EDS in a simulation setting and in a realistic case study concerning healthy eating habits. Results are promising in both cases, but EDS is more effective at predicting useful utility functions.
We investigate the computational complexity of mining guarded clauses from clausal datasets through the framework of inductive logic programming (ILP). We show that learning guarded clauses is NP-complete, and thus one step below the Sigma2-complete task of learning Horn clauses in the polynomial hierarchy. Motivated by practical applications on large datasets, we identify a natural tractable fragment of the problem. Finally, we generalise all of our results to k-guarded clauses for constant k.
Argumentation frameworks (AFs) are a core formalism in the field of formal argumentation. As most standard computational tasks regarding AFs are hard for the first or second level of the Polynomial Hierarchy, a variety of algorithmic approaches to achieve manageable runtimes have been considered in the past. Among them, the backdoor approach and the treewidth approach turned out to yield fixed-parameter tractable fragments. However, many applications yield high parameter values for these methods, often rendering them infeasible in practice. We introduce the backdoor-treewidth approach for abstract argumentation, combining the best of both worlds with a guaranteed parameter value that does not exceed the minimum of the backdoor and treewidth parameters. In particular, we formally define backdoor-treewidth and establish fixed-parameter tractability for standard reasoning tasks of abstract argumentation. Moreover, we provide systems to find and exploit backdoors of small width, and conduct systematic experiments evaluating the new parameter.
While Answer-Set Programming (ASP) is a prominent approach to declarative problem solving, optimisation problems can still be a challenge for it. Large-Neighbourhood Search (LNS) is an optimisation metaheuristic in which parts of a solution are alternately destroyed and reconstructed; it has high but largely untapped potential for ASP solving. We present a framework for LNS optimisation in answer-set solving, in which neighbourhoods can either be specified declaratively as part of the ASP encoding or be generated automatically by code. To effectively explore different neighbourhoods, we focus on multi-shot solving, as it avoids repeated regrounding of the program. We illustrate the framework on different optimisation problems, some of which are notoriously difficult, including shift planning and a parallel machine scheduling problem from semiconductor production, which demonstrate the effectiveness of the LNS approach.
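To sketch the general LNS idea in an ASP setting (toy code using the clingo Python API, not the framework from the paper; the program, the lns function and the destroy_fraction parameter are hypothetical), one can fix a random part of the incumbent answer set via solver assumptions and let multi-shot solving rebuild the rest without regrounding:

```python
import random
import clingo

# Hypothetical toy optimisation problem: pick 4 of 8 items, minimise total cost.
PROGRAM = """
item(1..8).  cost(I,I) :- item(I).
{ pick(I) : item(I) } = 4.
#minimize { C,I : pick(I), cost(I,C) }.
"""

def lns(iterations=20, destroy_fraction=0.5, seed=0):
    random.seed(seed)
    ctl = clingo.Control()
    ctl.add("base", [], PROGRAM)
    ctl.ground([("base", [])])

    incumbent, best_cost = None, None

    def on_model(model):
        nonlocal incumbent, best_cost
        if best_cost is None or model.cost < best_cost:   # accept improvements only
            best_cost = model.cost
            incumbent = [a for a in model.symbols(shown=True) if a.name == "pick"]

    ctl.solve(on_model=on_model)          # initial solution
    for _ in range(iterations):
        if not incumbent:
            break
        # "Destroy" part of the incumbent: keep a random subset fixed via assumptions,
        # then re-solve the same ground program (multi-shot, no regrounding).
        kept = [a for a in incumbent if random.random() > destroy_fraction]
        ctl.solve(on_model=on_model, assumptions=[(a, True) for a in kept])
    return incumbent, best_cost

print(lns())
```

A real LNS framework would additionally manage cost bounds, time budgets and a portfolio of neighbourhood definitions; this sketch only shows the destroy-and-rebuild loop.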
Ontology-based query answering with existential rules is well understood and implemented for positive queries, in particular conjunctive queries. For queries with negation, however, there is no agreed-upon semantics or standard implementation. This problem does not arise for simpler rule languages, such as Datalog, where it is intuitive and practical to evaluate negative queries over the least model. This approach fails for existential rules, which instead of a single least model have multiple universal models that may not lead to the same results for negative queries. We therefore propose universal core models as a basis for a meaningful (non-monotonic) semantics for queries with negation. Since cores are hard to compute, we identify syntactic conditions (on rules and queries) under which our core-based semantics can equivalently be obtained for other universal models, such as those produced by practical chase algorithms. Finally, we use our findings to propose a semantics for a broad class of existential rules with negation.
The paper presents a characterization of logic programs with aggregates based on a many-sorted generalization of the operator SM that refers neither to grounding nor to fixpoints. This characterization introduces new symbols for aggregate operations and aggregate elements, whose meaning is fixed by adding appropriate axioms to the result of the SM transformation. We prove that, for programs without positive recursion through aggregates, our semantics coincides with the semantics of the answer set solver Clingo.
Combined modeling and verification of dynamic systems and the data they operate on has gained momentum in AI and in several application domains. We investigate the expressive yet concise framework of data-aware dynamic systems (DDS), extending it with linear arithmetic, and providing the following contributions. First, we introduce a new, semantic property of “finite summary”, which guarantees the existence of a faithful finite-state abstraction. We rely on this to show that checking whether a witness exists for a linear-time, finite-trace property is decidable for DDSs with finite summary. Second, we demonstrate that several decidability conditions studied in formal methods and database theory can be seen as concrete, checkable instances of this property. This also gives rise to new decidability results. Third, we show how the abstract, uniform property of finite summary leads to modularity results: a system enjoys finite summary if it can be partitioned appropriately into smaller systems that possess the property. Our results allow us to analyze systems that were out of reach in earlier approaches. Finally, we demonstrate the feasibility of our approach in a prototype implementation.
Answer set programming (ASP) is a popular declarative programming paradigm with a wide range of applications in artificial intelligence. Oftentimes, when modeling an AI problem with ASP, we are interested in more than the mere search for optimal solutions: an actual solution, the differences between solutions, or the number of solutions of the ASP program may matter. For example, a user may aim to identify a specific answer set according to her needs, or may require the total number of diverging solutions to comprehend probabilistic applications such as reasoning in medical domains. Currently, only certain problem-specific and handcrafted encoding techniques are available to navigate the solution space of ASP programs, which is oftentimes not enough. In this paper, we propose a formal and general framework for interactive navigation toward desired subsets of answer sets, analogous to faceted browsing. Our approach enables the user to explore the solution space by consciously zooming in or out of sub-spaces of solutions at a certain configurable pace. We show that weighted faceted navigation is computationally hard. Finally, we provide an implementation of our approach that demonstrates the feasibility of our framework for incomprehensible solution spaces.
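A minimal illustration of the underlying idea (toy Python code using the clingo API, not the implementation accompanying the paper; the program and the helper names are made up): facets can be taken to be the atoms that are true in some but not all answer sets, and zooming in simply means forcing one of them and re-enumerating.

```python
import clingo

# Toy program with several answer sets: colour three nodes with two colours.
PROGRAM = """
node(1..3). col(red;blue).
1 { assign(N,C) : col(C) } 1 :- node(N).
"""

def answer_sets(extra=""):
    ctl = clingo.Control(["0"])            # "0" = enumerate all answer sets
    ctl.add("base", [], PROGRAM + extra)
    ctl.ground([("base", [])])
    models = []
    ctl.solve(on_model=lambda m: models.append(set(map(str, m.symbols(shown=True)))))
    return models

def facets(models):
    """Atoms true in some but not all answer sets: each can be used to
    zoom in (force it) or zoom out (forbid it) of the solution space."""
    brave = set().union(*models)
    cautious = set.intersection(*models)
    return brave - cautious

models = answer_sets()
print(len(models), "answer sets,", len(facets(models)), "facets")
# Zoom in on one facet by adding it as a constraint:
zoomed = answer_sets(":- not assign(1,red).")
print(len(zoomed), "answer sets after zooming in on assign(1,red)")
```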
Recent work has unveiled a theory for reasoning about the decisions made by binary classifiers: a classifier describes a Boolean function, and the reasons behind an instance being classified as positive are the prime implicants of the function that are satisfied by the instance. One drawback of these works is that they do not explicitly treat scenarios where the underlying data is known to be constrained, e.g., certain combinations of features may not exist, may not be observable, or may be required to be disregarded. We propose a more general theory, also based on prime implicants, tailored to taking constraints into account. The main idea is to view classifiers as describing partial Boolean functions that are undefined on instances that do not satisfy the constraints. We prove that this simple idea results in more parsimonious reasons. That is, not taking constraints into account (e.g., ignoring them, or treating them as negative instances) results in reasons that are subsumed by reasons that do take constraints into account. We illustrate this improved succinctness on synthetic classifiers and on classifiers learnt from real data.
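The effect can be seen on a two-feature toy example (illustrative Python, not from the paper): when the data is known to satisfy a constraint, completions violating it are treated as "don't care", so shorter terms become valid reasons.

```python
from itertools import product

def is_reason(term, f, constraint, n):
    """A term is a reason when every completion that satisfies the domain
    constraint is classified positive (off-constraint points are ignored)."""
    free = [i for i in range(n) if i not in term]
    for v in product([False, True], repeat=len(free)):
        x = {**term, **dict(zip(free, v))}
        if constraint(x) and not f(x):
            return False
    return True

# f(x) = x0 and x1; the data is known to satisfy the constraint x0 -> x1.
f = lambda x: x[0] and x[1]
constraint = lambda x: (not x[0]) or x[1]
# Without the constraint, {x0} is not a reason; with it, it is:
print(is_reason({0: True}, f, lambda x: True, 2))   # False
print(is_reason({0: True}, f, constraint, 2))       # True
```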