
Dung's abstract Argumentation Framework (AF) has emerged as a central formalism for argumentation in AI. Preferences in AF make it possible to represent the comparative strength of arguments in a simple yet expressive way. In this paper we first investigate the complexity of the verification as well as the credulous and skeptical acceptance problems in Preference-based AF (PAF), which extends AF with preferences over arguments. Next, after introducing new semantics for AF where extensions are selected using cardinality (instead of set-inclusion) criteria and investigating their complexity, we introduce a framework called AF with Priority rules (AFP) that extends AF with sequences of priority rules. AFP generalizes AF with classical set-inclusion and cardinality-based semantics, suggesting that argumentation semantics can be viewed as ways to express priorities among extensions. Finally, we extend AFP by proposing AF with Priority rules and Preferences (AFP^2), where preferences over arguments can also be used to define priority rules, and study the complexity of the above-mentioned problems.

"Strong-cyclic policies" were introduced to formalize trial-and-error strategies and are known to work in Markovian stochastic domains, i.e., they guarantee that the goal is reached with probability 1. We introduce "best-effort" policies for (not necessarily Markovian) stochastic domains. These generalize strong-cyclic policies by taking advantage of stochasticity even if the goal cannot be reached with probability 1. We compare such policies with optimal policies, i.e., policies that maximize the probability that the goal is achieved, and show that optimal policies are best-effort, but that the converse is false in general. With this framework at hand, we revisit the foundational problem of what it means to plan in nondeterministic domains when the nondeterminism has a stochastic nature. We show that one can view a nondeterministic planning domain as a representation of infinitely many stochastic domains with the same support but different probabilities, and that for temporally extended goals expressed in LTL/LTLf a finite-state best-effort policy in one of these domains is best-effort in each of the domains. In particular, this gives an approach for finding such policies that reduces to solving finite-state MDPs with LTL/LTLf goals. All this shows that "best-effort" policies are robust to changes in the probabilities, as long as the support is unchanged.
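As a minimal illustration of the optimal policies the abstract compares against (my own toy example, not the paper's construction): in a Markovian stochastic domain, the maximum probability of eventually reaching the goal can be computed by a simple fixpoint iteration; an optimal policy picks a maximizing action in each state. All states, actions, and probabilities below are hypothetical.

```python
# Toy Markovian stochastic domain: in s0, "try" reaches the goal with
# probability 0.3, fails irrecoverably with 0.2, and stays in s0 with 0.5.
transitions = {  # state -> action -> list of (successor, probability)
    "s0": {"try": [("goal", 0.3), ("dead", 0.2), ("s0", 0.5)],
           "stop": [("dead", 1.0)]},
    "goal": {}, "dead": {},
}

def max_reach_probability(transitions, goal, iterations=200):
    """Fixpoint iteration for max P(eventually reach goal) from each state."""
    value = {s: 1.0 if s == goal else 0.0 for s in transitions}
    for _ in range(iterations):
        for s, actions in transitions.items():
            if s != goal and actions:
                value[s] = max(sum(p * value[t] for t, p in outcomes)
                               for outcomes in actions.values())
    return value

print(round(max_reach_probability(transitions, "goal")["s0"], 4))  # 0.6
```

Here the goal cannot be reached with probability 1 (so no strong-cyclic policy exists), yet always choosing "try" achieves probability 0.6, which is what a best-effort policy exploits. Note that this value depends only on which transitions have nonzero probability up to the fixpoint structure, echoing the robustness result in the abstract.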

We introduce annotated sequent calculi, which are extensions of standard sequent calculi where sequents are combined with annotations representing their derivation statuses. Unlike in ordinary calculi, sequents that are derived in annotated calculi may still be retracted in the presence of conflicting sequents, so inferences are made under stricter conditions. Conflicts in the resulting systems are handled as in adaptive logics and argumentation theory. The outcome is a robust family of proof systems for non-monotonic reasoning with inconsistent information, where revision considerations are fully integrated into the object level of the proofs. These systems are shown to be strongly connected to logical argumentation.

The topic of forgetting has been extensively studied in the field of knowledge representation and reasoning for many major formalisms. Quite recently it has been introduced to abstract argumentation. However, many essential and already well-studied aspects of forgetting, such as strong persistence and strong invariance, have been left unconsidered. We show that forgetting in abstract argumentation cannot be reduced to forgetting in logic programming. In addition, we deal with the more general problem of forgetting whole sets of arguments and show that iteratively applying existing operators for single arguments does not necessarily yield a desirable result, as it may not produce an informationally economic argumentation framework. As a consequence, we provide a systematic and exhaustive study of forgetting desiderata and associated operations adapted to the intrinsics of abstract argumentation. We show the limits and shed light on the possibilities.

Answer-Set Programming (ASP) has seen tremendous progress over the last two decades and is nowadays successfully applied in many real-world domains. However, for certain types of problems, the well-known ASP grounding bottleneck still causes severe problems. This becomes acute when the grounding of rules, in which variables are replaced by constants, leads to a ground program that is too large to be processed by the ASP solver. In this work, we tackle this problem by a novel method that decouples non-ground atoms in rules in order to delegate the evaluation of rule bodies to the solving process. Our procedure translates a non-ground normal program into a ground disjunctive program that is exponential only in the maximum predicate arity, and thus polynomial if this arity is assumed to be bounded by a constant. We demonstrate the feasibility of this new method experimentally by comparing it to standard ASP technology in terms of grounding size, grounding time and total runtime.
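To make the grounding bottleneck concrete, here is a hedged sketch (not the paper's method) of naive grounding for a single hypothetical non-ground rule, say p(X, Y) :- q(X, Z), r(Z, Y). The number of ground instances grows as |domain|^(#variables), which is exactly the blow-up the decoupling technique is designed to avoid.

```python
from itertools import product

def ground_rule(variables, domain):
    """Enumerate every substitution of the rule's variables by constants."""
    return [dict(zip(variables, combo))
            for combo in product(domain, repeat=len(variables))]

# A modest domain of 50 constants already yields 50**3 = 125000 ground
# instances for one three-variable rule.
domain = [f"c{i}" for i in range(50)]
instances = ground_rule(["X", "Y", "Z"], domain)
print(len(instances))  # 125000
```

With higher arities or larger domains, this count quickly exceeds what a solver can process, which motivates keeping the rule bodies non-ground and evaluating them during solving.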

We address the problem of model checking first-order dynamic systems in which new objects can be injected into the active domain during execution. Notable examples are systems induced by a first-order action theory, e.g., expressed in the Situation Calculus. Recent results have shown that, under the state-boundedness assumption, such systems, in spite of having a first-order representation of the state, admit decidable model checking for the full first-order mu-calculus. Interestingly, however, model checking remains undecidable in the case of first-order LTL (LTL-FO). In this paper, we show that for LTL-FOp, the fragment of LTL-FO in which quantification ranges over objects that persist along traces, model checking of state-bounded systems becomes decidable over finite and infinite traces. We then employ this result to show how to handle monitoring of LTL-FOp properties against a trace stemming from an unknown state-bounded dynamic system, simultaneously considering the finite trace up to the current point and all its possibly infinite future continuations.

An agent, or a coalition of agents, is blameable for an outcome if she had a strategy to prevent it. In this paper we introduce a notion of limited blameworthiness, with a constraint on the amount of sacrifice required to prevent the outcome. The main technical contribution is a sound and complete logical system for reasoning about limited blameworthiness in the strategic game setting.

Public observation logic (POL) is a variant of dynamic epistemic logic for reasoning about agent expectations and agent observations. Agents have certain expectations regarding the situation at hand, actuated by the relevant protocols, and they eliminate possible worlds in which their expectations do not match their observations. In this work, we investigate the computational complexity of the model checking problem for POL and prove its PSPACE-completeness. We also study various syntactic fragments of POL. We exemplify the applicability of POL model checking in verifying different characteristics and features of an interactive system with respect to the distinct expectations and (matching) observations of the system. Finally, we provide a discussion on the implementation of the model checking algorithms.

Knowledge sharing and model personalization are two key components in the conceptual framework of personalized federated learning (PFL). Existing PFL methods focus on proposing new model personalization mechanisms while simply implementing knowledge sharing by aggregating models from all clients, regardless of their relation graph. This paper aims to enhance the knowledge-sharing process in PFL by leveraging graph-based structural information among clients. We propose a novel structured federated learning (SFL) framework to learn both the global and personalized models simultaneously using client-wise relation graphs and clients' private data. We cast graph-based SFL as a novel optimization problem that models the complex client-wise relations and graph-based structural topology in a unified framework. Moreover, in addition to using an existing relation graph, SFL can be extended to learn the hidden relations among clients. Experiments on traffic and image benchmark datasets demonstrate the effectiveness of the proposed method.

We consider the problem Enum·IP of enumerating prime implicants of Boolean functions represented by decision decomposable negation normal form (dec-DNNF) circuits. We study Enum·IP from dec-DNNF within the framework of enumeration complexity and prove that it is in OutputP, the class of output polynomial enumeration problems, and more precisely in IncP, the class of polynomial incremental time enumeration problems. We then focus on two closely related, but seemingly harder, enumeration problems where further restrictions are put on the prime implicants to be generated. In the first problem, one is only interested in prime implicants representing subset-minimal abductive explanations, a notion much investigated in AI for more than thirty years. In the second problem, the target is prime implicants representing sufficient reasons, a recent yet important notion in the emerging field of eXplainable AI, since they aim to explain predictions achieved by machine learning classifiers. We provide evidence showing that enumerating specific prime implicants corresponding to subset-minimal abductive explanations or to sufficient reasons is not in OutputP.
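To make the enumerated objects concrete, here is a hedged brute-force sketch of prime implicant enumeration for a small Boolean function. The paper works from dec-DNNF circuits with incremental-polynomial guarantees; this naive version is exponential and only meant to illustrate the definitions (an implicant is a term all of whose completions satisfy the function; it is prime if no proper subterm is an implicant).

```python
from itertools import product, combinations

def is_implicant(term, f, variables):
    """term: dict var -> 0/1. True iff every completion of term satisfies f."""
    free = [v for v in variables if v not in term]
    for combo in product([0, 1], repeat=len(free)):
        assignment = dict(term, **dict(zip(free, combo)))
        if not f(assignment):
            return False
    return True

def prime_implicants(f, variables):
    primes = []
    # Enumerate terms from smallest to largest, so any term subsumed by an
    # already-found (smaller) implicant is skipped and never reported.
    for k in range(len(variables) + 1):
        for subset in combinations(variables, k):
            for vals in product([0, 1], repeat=k):
                term = dict(zip(subset, vals))
                if is_implicant(term, f, variables) and \
                        not any(p.items() <= term.items() for p in primes):
                    primes.append(term)
    return primes

# f = (x AND y) OR z has exactly two prime implicants: z and x AND y.
f = lambda a: (a["x"] and a["y"]) or a["z"]
print(prime_implicants(f, ["x", "y", "z"]))  # [{'z': 1}, {'x': 1, 'y': 1}]
```

Subset-minimal abductive explanations and sufficient reasons then correspond to specific prime implicants satisfying further restrictions, which is what makes their enumeration seemingly harder.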

Synthesis techniques for temporal logic specifications are typically based on symbolic techniques, as done in model checking, which usually rely on backward fixpoint computation. Planning, which can be seen as a specific form of synthesis, witnesses the success of forward-search approaches. In this paper, we develop a forward-search approach to full-fledged Linear Temporal Logic on finite traces (LTLf) synthesis. We show how to compute the Deterministic Finite Automaton (DFA) of an LTLf formula on-the-fly, while performing an adversarial forward search towards the final states, by considering the DFA as a sort of AND-OR graph. Our approach is characterized by branching on suitable propositional formulas, instead of individual evaluations, hence radically reducing the branching factor of the search space. Specifically, we take advantage of techniques developed for knowledge compilation, such as Sentential Decision Diagrams (SDDs), to implement the approach efficiently.

A major challenge in AI is dealing with uncertain information. While probabilistic approaches have been employed to address this issue, in many situations probabilities may not be available or may be unsuitable. As an alternative, qualitative approaches have been introduced to express that one event is no more probable than another. We provide an approach where an agent may reason deductively about notions of likelihood, and may hold beliefs where the subjective probability for a belief is less than 1. Thus, an agent can believe that p holds (with probability <1); and if the agent believes that q is more likely than p, then the agent will also believe q. Our language allows for arbitrary nesting of beliefs and qualitative likelihoods. We provide a sound and complete proof system for the logic with respect to an underlying probabilistic semantics, and show that the language is equivalent to a sublanguage with no nested modalities.

LTL on finite traces (LTLf) is a logic that has attracted much attention in the recent literature, for its ability to formalize the qualitative behavior of dynamical systems in several application domains. However, its practical usage is still rather limited, as LTLf cannot deal with quantitative aspects, such as the costs of realizing some desired behaviour. The paper fills the gap by proposing a weighting framework for LTLf that encodes such quantitative aspects in the traces over which it is evaluated. The complexity of reasoning problems on weighted traces is analyzed and compared to that of standard LTLf, considering arbitrary formulas as well as classes of formulas defined in terms of relevant syntactic restrictions. Moreover, a reasoner for LTL on weighted finite traces is presented, and its performance is assessed on benchmark data.

In the context of probabilistic AAFs, we introduce AAFs with marginal probabilities (mAAFs), requiring only marginal probabilities of arguments/attacks to be specified and not relying on the independence assumption. Reasoning over mAAFs requires taking into account multiple probability distributions over the possible worlds, so that the probability of extensions is not determined by a unique value, but by an interval. We focus on the problems of computing the max and min probabilities of extensions over mAAFs under Dung's semantics, characterize their complexity, and provide closed formulas for polynomial cases.

Answer set programming (ASP) is a form of declarative programming widely used to solve difficult search problems. Probabilistic applications, however, require going beyond the search for a single solution and call for counting. One such application is plausibility reasoning, which provides a more fine-grained reasoning mode between simple brave and cautious reasoning. When modeling with ASP, we oftentimes introduce auxiliary atoms in the program. If these atoms are functionally independent of the atoms of interest, we need to hide the auxiliary atoms and project the count onto the atoms of interest, resulting in the problem of projected answer set counting. In practice, counting quickly becomes infeasible with standard systems such as clasp. In this paper, we present a novel hybrid approach for plausibility reasoning under projections, relying on projected answer set counting as a basis. Our approach combines existing systems with fast dynamic programming, which in our experiments shows advantages over existing ASP systems.
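A toy illustration (my own, not the paper's algorithm) of why projection changes the count: answer sets that differ only on auxiliary atoms are collapsed once the count is projected onto the atoms of interest. The atom names below are hypothetical.

```python
# Three hypothetical answer sets; aux1/aux2 are auxiliary atoms to be hidden.
answer_sets = [{"a", "aux1"}, {"a", "aux2"}, {"b", "aux1"}]
atoms_of_interest = {"a", "b"}

# Projected counting: restrict each answer set to the atoms of interest and
# count the distinct restrictions.
projected = {frozenset(s & atoms_of_interest) for s in answer_sets}
print(len(answer_sets), len(projected))  # 3 2
```

The plain count is 3, but the projected count is 2, since the first two answer sets agree on the atoms of interest; computing this distinct-projection count efficiently is the hard problem the paper builds on.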

We study ELI queries (ELIQs) in the presence of ontologies formulated in the description logic DL-Lite. For the dialect DL-LiteH, we show that ELIQs have a frontier (set of least general generalizations) that is of polynomial size and can be computed in polynomial time. In the dialect DL-LiteF, in contrast, frontiers may be infinite. We identify a natural syntactic restriction that enables the same positive results as for DL-LiteH. We use our results on frontiers to show that ELIQs are learnable in polynomial time in the presence of a DL-LiteH / restricted DL-LiteF ontology in Angluin's framework of exact learning with only membership queries.

We study the extension of non-monotonic disjunctive logic programs with terms that represent sets of constants, called DLP(S), under the stable model semantics. This strictly increases expressive power, but keeps reasoning decidable, though cautious entailment is coNEXPTIME^NP-complete, even for data complexity. We present two new reasoning methods for DLP(S): a semantics-preserving translation of DLP(S) to logic programming with function symbols, which can take advantage of lazy grounding techniques, and a ground-and-solve approach that uses non-monotonic existential rules in the grounding stage. Our evaluation considers problems of ontological reasoning that are not in scope for traditional ASP (unless EXPTIME = Π^P_2), and we find that our new existential-rule grounding performs well in comparison with native implementations of set terms in ASP.

This paper studies Linear Temporal Logic over Finite Traces (LTLf) where proposition letters are replaced with first-order formulas interpreted over arbitrary theories, in the spirit of Satisfiability Modulo Theories. The resulting logic, called LTLf Modulo Theories (LTLfMT), is semi-decidable. Nevertheless, its high expressiveness proves useful in a number of use cases, such as model-checking of data-aware processes and data-aware planning. Despite the general undecidability of these problems, being able to solve satisfiable instances is a compromise worth studying. After motivating and describing such use cases, we provide a sound and complete semi-decision procedure for LTLfMT based on the SMT encoding of a one-pass tree-shaped tableau system. The algorithm is implemented in the BLACK satisfiability checking tool, and an experimental evaluation shows the feasibility of the approach on novel benchmarks.

We introduce a simple model of agency that is based on the concepts of control and attempt. Both relate agents and propositional variables. Moreover, they can be nested: an agent i may control whether another agent j controls a propositional variable p; i may control whether j attempts to change p; i may attempt to change whether j controls p; i may attempt to change whether j attempts to change p; and so on. In this framework we define several modal operators of time and agency: the LTL operators on the one hand, and the Chellas and the deliberative stit operator on the other. While in the standard stit framework the model checking problem is infeasible because its models are infinite, in our framework models are represented in a finite and compact way: they are grounded on the primitive concepts of control and attempt. This makes model checking practically feasible. We prove its PSPACE-completeness and show how the concept of social influence can be captured.

Abstract dialectical frameworks (in short, ADFs) are one of the most general and unifying approaches to formal argumentation. As the semantics of ADFs are based on three-valued interpretations, we ask which monotonic three-valued logic can capture the main semantic concepts underlying ADFs. We show that possibilistic logic is the unique logic that can faithfully encode all other semantical concepts for ADFs. Based on this result, we also characterise strong equivalence and introduce possibilistic ADFs.

Lexicographic inference is a well-known and popular approach to reasoning with non-monotonic conditionals. It is a logic of very high quality, as it extends rational closure and avoids the so-called drowning problem. It seems, however, that this high quality comes at a cost, as reasoning on the basis of lexicographic inference is of high computational complexity. In this paper, we show that lexicographic inference satisfies syntax splitting, which means that we can restrict our attention to the parts of the belief base that share atoms with a given query, thus considerably reducing the computational cost for many concrete queries. Furthermore, we make some observations on the relationship between c-representations and lexicographic inference, and reflect on the relation between syntax splitting and the drowning problem.

Classical instance queries over an ontology only consider explicitly named individuals. Concept referring expressions (CREs) also allow for returning answers in the form of concepts that describe implicitly given individuals in terms of their relation to an explicitly named one. Existing approaches, e.g., based on tree automata, can neither be integrated into state-of-the-art OWL reasoners nor are they directly amenable to efficient implementation. To address this, we devise a novel algorithm that uses highly optimized OWL reasoners as a black box. In addition to the standard criteria of singularity and certainty for CREs, we devise and consider the criterion of uniqueness of CREs for Horn ALC ontologies. The evaluation of our prototypical implementation shows that computing CREs for the most general concept (Top) can be done in less than one minute for ontologies with thousands of individuals and concepts.

The paper studies preferences of agents about other agents in a social network. It proposes a logical system that captures the properties of such preferences, called "likes". The system can express nested constructions such as "agent likes humbled people", "agent likes those who like humbled people", etc. The main technical results are a model checking algorithm and a sound, complete, and decidable axiomatization of the proposed system.

The paper proposes a data-centred approach to reasoning about the interplay between trust and beliefs. At its core is the modality "under the assumption that one dataset is trustworthy, another dataset informs a belief in a statement". The main technical result is a sound and complete logical system capturing the properties of this modality.

Conditional independence is a crucial concept for efficient probabilistic reasoning. For symbolic and qualitative reasoning, however, it has played only a minor role. Recently, Lynn, Delgrande, and Peppas have considered conditional independence in terms of syntactic multivalued dependencies. In this paper, we define conditional independence as a semantic property of epistemic states and present axioms for iterated belief revision operators to obey conditional independence in general. We show that c-revisions for ranking functions satisfy these axioms, and explore the relevance of these results for iterated belief revision in general.