IJCAI.2017 - Others

| Total: 133

#1 Swift Logic for Big Data and Knowledge Graphs

Authors: Luigi Bellomarini, Georg Gottlob, Andreas Pieris, Emanuel Sallinger

Many modern companies wish to maintain knowledge in the form of a corporate knowledge graph and to use and manage this knowledge via a knowledge graph management system (KGMS). We formulate various requirements for a fully fledged KGMS. In particular, such a system must be capable of performing complex reasoning tasks but, at the same time, achieve efficient and scalable reasoning over Big Data with an acceptable computational complexity. Moreover, a KGMS needs interfaces to corporate databases, the web, and machine-learning and analytics packages. We present KRR formalisms and a system achieving these goals.
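
For a flavour of the reasoning such a KGMS must support, consider existential rules in the Datalog± family, which combine recursion with value invention at tractable complexity; the predicate names below are invented for illustration and are not taken from the paper:

```latex
% Illustrative rules over a corporate knowledge graph (predicates invented):
% every company has some controller, and control is transitive.
\mathit{Company}(x) \rightarrow \exists y\; \mathit{ControlledBy}(x, y)
\\
\mathit{ControlledBy}(x, y) \wedge \mathit{ControlledBy}(y, z) \rightarrow \mathit{ControlledBy}(x, z)
```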


#2 Deep Learning at Alibaba

Author: Rong Jin

In this talk, I will focus on the applications and the latest development of deep learning technologies at Alibaba. More specifically, I will discuss (a) how to handle high dimensional data in DNN and its application to recommender system, (b) the development of deep learning models for transfer learning and its application to multimedia data analysis, (c) the development of combinatorial optimization techniques for DNN model compression and its application to large-scale image classification, and (d) the exploration of deep learning technique for combinatorial optimization and its application to the packing problem in shipping industry. I will conclude my talk with a discussion of new directions for deep learning that are under development at Alibaba.


#3 From Automation to Autonomous Systems: A Legal Phenomenology with Problems of Accountability

Author: Ugo Pagallo

Over the past decades a considerable amount of work has been devoted to the notion of autonomy and the intelligence of robots and of AI systems: depending on the application, several standards on the “levels of automation” have been proposed. Although current AI systems may have the intelligence of a fridge, or of a toaster, some such autonomous systems have already challenged basic pillars of society and the law, e.g. whether lethal force should ever be permitted to be “fully automated.” The aim of this paper is to show that the normative challenges of AI entail different types of accountability that go hand-in-hand with choices of technological dependence, delegation of cognitive tasks, and trust. The stronger the social cohesion, the higher the risks that can be socially accepted through the normative assessment of the not fully predictable consequences of tasks and decisions entrusted to AI systems and artificial agents.


#4 Super-Human AI for Strategic Reasoning: Beating Top Pros in Heads-Up No-Limit Texas Hold'em

Author: Tuomas Sandholm

Poker has been a challenge problem in AI and game theory for decades. As a game of imperfect information it involves obstacles not present in games like chess and Go, and requires totally different techniques. No program had been able to beat top players in large poker games. Until now! In January 2017, our AI, Libratus, beat a team of four top specialist professionals in heads-up no-limit Texas hold'em, which has 10^161 decision points. This game is the main benchmark challenge for imperfect-information game solving. Libratus is the only AI that has beaten top humans at this game. Libratus is powered by new algorithms in each of its three main modules: (1) computing blueprint (approximate Nash equilibrium) strategies before the event, (2) novel nested endgame solving during play, and (3) fixing its own strategy to play even closer to equilibrium based on what holes the opponents have been able to identify and exploit. These domain-independent algorithms have potential applicability to a variety of real-world imperfect-information games such as negotiation, business strategy, cybersecurity, physical security, military applications, strategic pricing, product portfolio planning, certain areas of finance, auctions, political campaigns, and steering biological adaptation and evolution, for example, for medical treatment planning.


#5 When Will Negotiation Agents Be Able to Represent Us? The Challenges and Opportunities for Autonomous Negotiators

Authors: Tim Baarslag, Michael Kaisers, Enrico H. Gerding, Catholijn M. Jonker, Jonathan Gratch

Computers that negotiate on our behalf hold great promise for the future and will even become indispensable in emerging application domains such as the smart grid and the Internet of Things. Much research has thus been expended to create agents that are able to negotiate in an abundance of circumstances. However, up until now, truly autonomous negotiators have rarely been deployed in real-world applications. This paper sizes up current negotiating agents and explores a number of technological, societal and ethical challenges that autonomous negotiation systems have brought about. The questions we address are: in what sense are these systems autonomous, what has been holding back their further proliferation, and is their spread something we should encourage? We relate the automated negotiation research agenda to dimensions of autonomy and distill three major themes that we believe will propel autonomous negotiation forward: accurate representation, long-term perspective, and user trust. We argue these orthogonal research directions need to be aligned and advanced in unison to sustain tangible progress in the field.


#6 Algorithmic Bias in Autonomous Systems

Authors: David Danks, Alex John London

Algorithms play a key role in the functioning of autonomous systems, and so concerns have periodically been raised about the possibility of algorithmic bias. However, debates in this area have been hampered by different meanings and uses of the term "bias." It is sometimes used as a purely descriptive term, sometimes as a pejorative term, and such variations can promote confusion and hamper discussions about when and how to respond to algorithmic bias. In this paper, we first provide a taxonomy of different types and sources of algorithmic bias, with a focus on their different impacts on the proper functioning of autonomous systems. We then use this taxonomy to distinguish between algorithmic biases that are neutral or unobjectionable, and those that are problematic in some way and require a response. In some cases, there are technological or algorithmic adjustments that developers can use to compensate for problematic bias. In other cases, however, responses require adjustments by the agent, whether human or autonomous system, who uses the results of the algorithm. There is no "one size fits all" solution to algorithmic bias.


#7 Responsible Autonomy

Author: Virginia Dignum

As intelligent systems increasingly make decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms employed by these systems. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical behavior by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of, and trust in, artificial autonomous systems.


#8 Reinforcement Learning with a Corrupted Reward Channel

Authors: Tom Everitt, Victoria Krakovna, Laurent Orseau, Shane Legg

No real-world reward function is perfect. Sensory errors and software bugs may result in agents getting higher (or lower) rewards than they should. For example, a reinforcement learning agent may prefer states where a sensory error gives it the maximum reward, but where the true reward is actually small. We formalise this problem as a generalised Markov Decision Problem called a Corrupt Reward MDP (CRMDP). Traditional RL methods fare poorly in CRMDPs, even under strong simplifying assumptions and when trying to compensate for the possibly corrupt rewards. Two ways around the problem are investigated. First, by giving the agent richer data, such as in inverse reinforcement learning and semi-supervised reinforcement learning, reward corruption stemming from systematic sensory errors may sometimes be completely managed. Second, by using randomisation to blunt the agent's optimisation, reward corruption can be partially managed under some assumptions.
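
As a sketch of the setup (in our notation; the paper's formalisation may differ), a CRMDP equips an MDP with a corruption function between true and observed reward, and the failure mode is that optimising observed reward need not optimise true reward:

```latex
% Sketch in our notation: the agent observes a corrupted reward \hat{R}.
\text{MDP: } \langle S, A, T, R \rangle
\qquad
\text{CRMDP: } \langle S, A, T, R, C \rangle,
\quad
\hat{R}(s) = C\big(s, R(s)\big)
\\[4pt]
\arg\max_{\pi} \mathbb{E}_{\pi}\Big[\textstyle\sum_t \hat{R}(s_t)\Big]
\;\neq\;
\arg\max_{\pi} \mathbb{E}_{\pi}\Big[\textstyle\sum_t R(s_t)\Big]
```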


#9 A Goal Reasoning Agent for Controlling UAVs in Beyond-Visual-Range Air Combat

Authors: Michael W. Floyd, Justin Karneeb, Philip Moore, David W. Aha

We describe the Tactical Battle Manager (TBM), an intelligent agent that uses several integrated artificial intelligence techniques to control an autonomous unmanned aerial vehicle in simulated beyond-visual-range (BVR) air combat scenarios. The TBM incorporates goal reasoning, automated planning, opponent behavior recognition, state prediction, and discrepancy detection to operate in a real-time, dynamic, uncertain, and adversarial environment. We describe evidence from our empirical study that the TBM significantly outperforms an expert-scripted agent in BVR scenarios. We also report the results of an ablation study which indicates that all components of our agent architecture are needed to maximize mission performance.


#10 On Automating the Doctrine of Double Effect

Authors: Naveen Sundar Govindarajulu, Selmer Bringsjord

The doctrine of double effect (DDE) is a long-studied ethical principle that governs when actions that have both positive and negative effects are to be allowed. The goal in this paper is to automate DDE. We briefly present DDE, and use a first-order modal logic, the deontic cognitive event calculus, as our framework to formalize the doctrine. We present formalizations of increasingly stronger versions of the principle, including what is known as the doctrine of triple effect. We then use our framework to successfully simulate scenarios that have been used to test for the presence of the principle in human subjects. Our framework can be used in two different modes. One can use it to build DDE-compliant autonomous systems from scratch, or one can use it to verify that a given AI system is DDE-compliant, by applying a DDE layer on an existing system or model. For the latter mode, the underlying AI system can be built using any architecture (planners, deep neural networks, Bayesian networks, knowledge-representation systems, or a hybrid); as long as the system exposes a few parameters in its model, such verification is possible. The role of the DDE layer here is akin to a (dynamic or static) software verifier that examines existing software modules. Finally, we end by sketching initial work on how one can apply our DDE layer to the STRIPS-style planning model, and to a modified POMDP model. This is preliminary work to illustrate the feasibility of the second mode, and we hope that our initial sketches can be useful for other researchers in incorporating DDE in their own frameworks.
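
A minimal sketch of the second mode, the DDE layer acting as a verifier over an exposed action model; the predicates, utilities and four informal conditions below are illustrative assumptions rather than the paper's deontic-logic formalisation:

```python
# Toy DDE "verification layer" over a STRIPS-like action model. Names,
# utilities and thresholds are invented; the paper works in a first-order
# modal logic, not this simplified numeric form.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    effects: dict = field(default_factory=dict)   # effect -> utility (+ good, - harm)
    intended: set = field(default_factory=set)    # effects the agent intends
    means: set = field(default_factory=set)       # effects causally required for the goal

def dde_permits(a: Action, forbidden: set, proportionality: float = 0.0) -> bool:
    """Check four informal DDE conditions on a candidate action."""
    if a.name in forbidden:                                  # 1. not intrinsically bad
        return False
    if sum(a.effects.values()) <= proportionality:           # 2. net effect is good
        return False
    if any(a.effects[e] < 0 for e in a.intended):            # 3. harm is not intended...
        return False
    if any(a.effects[e] < 0 for e in a.means):               # 4. ...nor a means to the good
        return False
    return True

# Classic trolley-style contrast: side-effect harm vs. harm used as a means.
divert = Action("divert", {"five_saved": 5.0, "one_hurt": -1.0},
                intended={"five_saved"}, means={"five_saved"})
push = Action("push", {"five_saved": 5.0, "one_hurt": -1.0},
              intended={"five_saved"}, means={"one_hurt", "five_saved"})

print(dde_permits(divert, forbidden=set()))  # True: harm is only a side effect
print(dde_permits(push, forbidden=set()))    # False: harm is a means to the end
```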


#11 Achieving Coordination in Multi-Agent Systems by Stable Local Conventions under Community Networks

Authors: Shuyue Hu, Ho-fung Leung

Recently, the study of social conventions has attracted much attention in the literature. We notice that an interesting type of phenomenon, the local convention, may also exist in certain multi-agent systems. When agents are partitioned into compact communities, different local conventions emerge in different communities. In this paper, we provide a definition of local conventions and propose two metrics measuring their strength and diversity. In our experimental study, we show that agents can achieve coordination by establishing diverse, stable local conventions, which indicates a practical way to solve coordination problems beyond traditional global convention emergence. Moreover, we find that with smaller community sizes, denser connections and fewer available actions, diverse local conventions emerge in less time.
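
The paper defines its own strength and diversity metrics; the following are plausible stand-ins, on invented data, that illustrate the idea:

```python
# Plausible stand-ins for the "strength" and "diversity" of local conventions;
# the paper's exact metric definitions may differ, so treat this as a sketch.
from collections import Counter

def local_convention(actions):
    """Majority action in one community and its strength (majority share)."""
    action, count = Counter(actions).most_common(1)[0]
    return action, count / len(actions)

def diversity(communities):
    """Number of distinct local conventions across communities."""
    return len({local_convention(acts)[0] for acts in communities})

communities = [["a", "a", "a", "b"], ["b", "b", "b", "b"], ["a", "a", "c", "a"]]
for i, acts in enumerate(communities):
    print(f"community {i}: convention={local_convention(acts)}")
print("diversity:", diversity(communities))  # 2 distinct conventions: 'a' and 'b'
```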


#12 Context-Based Reasoning on Privacy in Internet of Things

Authors: Nadin Kokciyan, Pinar Yolum

More and more, devices around us are being connected to each other in the realm of the Internet of Things (IoT). Their communication, and especially their collaboration, promises useful services for end users. However, the same communication channels raise important privacy concerns: it is not clear which information will be shared with whom, for which purposes, and under which conditions. Existing approaches advocate putting policies in place to regulate privacy. However, the scale and heterogeneity of IoT entities make it infeasible to maintain policies between each and every entity in the system. Instead, it is better if each entity can autonomously reason about privacy using norms and context. Accordingly, this paper proposes an approach in which each entity finds out which contexts it is in based on information it gathers from other entities in the system. The proposed approach uses argumentation to enable IoT entities to reason about their context and to decide whether to reveal information based on it. We demonstrate the applicability of the approach over an IoT scenario.
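
A minimal sketch of the argumentation machinery such an entity might run, here the standard grounded-labelling computation over an invented IoT attack graph (the argument names and attacks are ours, not the paper's):

```python
# Grounded-extension computation for an abstract argumentation framework.
# The surviving arguments determine the inferred context and hence the
# disclosure decision; the example arguments below are invented.
def grounded(args, attacks):
    """Iteratively label arguments in (accepted) or out (defeated)."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args - accepted - defeated:
            attackers = {b for (b, c) in attacks if c == a}
            if attackers <= defeated:        # every attacker is out: accept
                accepted.add(a); changed = True
            elif attackers & accepted:       # attacked by an accepted argument
                defeated.add(a); changed = True
    return accepted

args = {"at_office", "weekend", "calendar_meeting", "keep_location_private"}
attacks = {("weekend", "at_office"), ("calendar_meeting", "weekend"),
           ("at_office", "keep_location_private")}
print(grounded(args, attacks))  # {'calendar_meeting', 'at_office'}
```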


#13 Concrete Problems for Autonomous Vehicle Safety: Advantages of Bayesian Deep Learning

Authors: Rowan McAllister, Yarin Gal, Alex Kendall, Mark van der Wilk, Amar Shah, Roberto Cipolla, Adrian Weller

Autonomous vehicle (AV) software is typically composed of a pipeline of individual components, linking sensor inputs to motor outputs. Erroneous component outputs propagate downstream, hence safe AV software must consider the ultimate effect of each component’s errors. Further, improving safety alone is not sufficient. Passengers must also feel safe to trust and use AV systems. To address such concerns, we investigate three under-explored themes for AV research: safety, interpretability, and compliance. Safety can be improved by quantifying the uncertainties of component outputs and propagating them forward through the pipeline. Interpretability is concerned with explaining what the AV observes and why it makes the decisions it does, building reassurance with the passenger. Compliance refers to maintaining some control for the passenger. We discuss open challenges for research within these themes. We highlight the need for concrete evaluation metrics, propose example problems, and highlight possible solutions.
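
One common realisation of the per-component uncertainty quantification the authors call for is Monte Carlo dropout; a minimal numpy sketch on a toy network (not the authors' AV pipeline):

```python
# Minimal Monte Carlo dropout sketch: keep dropout active at test time and
# treat the spread of repeated stochastic passes as predictive uncertainty.
# The tiny random network and input are toys.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(4, 1))

def forward(x, p_drop=0.5):
    h = np.maximum(x @ W1, 0.0)                       # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop               # dropout stays ON at test time
    return (h * mask / (1.0 - p_drop)) @ W2

x = rng.normal(size=(1, 8))
samples = np.array([forward(x) for _ in range(200)])  # repeated stochastic passes
print(f"prediction {samples.mean():.3f} ± {samples.std():.3f}")
```

Downstream components can then consume the (mean, std) pair instead of a point estimate, which is the pipeline-level propagation of uncertainty the abstract argues for.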


#14 Should Robots be Obedient?

Authors: Smitha Milli, Dylan Hadfield-Menell, Anca Dragan, Stuart Russell

Intuitively, obedience -- following the orders that a human gives -- seems like a good property for a robot to have. But we humans are not perfect, and we may give orders that are not well aligned with our preferences. We show that when a human is not perfectly rational then a robot that tries to infer and act according to the human's underlying preferences can always perform better than a robot that simply follows the human's literal order. Thus, there is a tradeoff between the obedience of a robot and the value it can attain for its owner. We investigate how this tradeoff is impacted by the way the robot infers the human's preferences, showing that some methods err more on the side of obedience than others. We then analyze how performance degrades when the robot has a misspecified model of the features that the human cares about or the level of rationality of the human. Finally, we study how robots can start detecting such model misspecification. Overall, our work suggests that there might be a middle ground in which robots intelligently decide when to obey human orders, but err on the side of obedience.
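
A toy simulation of the trade-off, under an assumed Boltzmann-rational human and an ad hoc inference rule; the model is ours, not the paper's exact setup:

```python
# Toy obedience trade-off: a Boltzmann-rational human orders one of two
# actions; an obedient robot follows the order, while an inferring robot
# combines the order with its own noisy utility estimate. Model invented.
import numpy as np

rng = np.random.default_rng(1)
BETA = 1.0   # human rationality: higher means more reliable orders

def trial():
    u = rng.normal(size=2)                     # true utilities of two actions
    p = np.exp(BETA * u); p /= p.sum()
    order = rng.choice(2, p=p)                 # Boltzmann-noisy human order
    obs = u + rng.normal(scale=0.5, size=2)    # robot's own noisy estimate
    score = obs.copy()
    score[order] += 1.0 / BETA                 # crude evidential weight of the order
    return u[order], u[int(score.argmax())]

results = np.array([trial() for _ in range(10_000)])
print("obedient robot :", results[:, 0].mean())
print("inferring robot:", results[:, 1].mean())
```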


#15 Privacy and Autonomous Systems

Author: Jose M. Such

We discuss the problem of privacy in autonomous systems, introducing different conceptualizations and perspectives on privacy to assess the threats that autonomous systems may pose to privacy. After this, we outline socio-technical and legal measures that should be put in place to mitigate these threats. Beyond privacy threats and countermeasures, we also argue how autonomous systems may be, at the same time, the key to address some of the most challenging and pressing privacy problems nowadays and in the near future.


#16 Online Decision-Making for Scalable Autonomous Systems

Authors: Kyle Hollins Wray, Stefan J. Witwicki, Shlomo Zilberstein

We present a general formal model called MODIA that can tackle a central challenge for autonomous vehicles (AVs), namely the ability to interact with an unspecified, large number of world entities. In MODIA, a collection of possible decision-problems (DPs), known a priori, are instantiated online and executed as decision-components (DCs), unknown a priori. To combine the DCs' individual action recommendations into a single action, we propose the lexicographic executor action function (LEAF) mechanism. We analyze the complexity of MODIA and establish LEAF’s relation to regret minimization. Finally, we implement MODIA and LEAF using collections of partially observable Markov decision process (POMDP) DPs, and use them for complex AV intersection decision-making. We evaluate the approach in six scenarios within an industry-standard vehicle simulator, and present its use on an AV prototype.
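
The abstract does not spell LEAF out; one consistent reading is that each DC recommends an action and a fixed lexicographic ordering over actions resolves conflicts, with the most conservative action winning. A hedged sketch with invented DCs and actions:

```python
# Hedged sketch of a LEAF-style aggregator: each decision-component (DC)
# recommends one action, and a fixed lexicographic safety order resolves
# conflicts. The priority order and DC outputs below are assumptions.
PRIORITY = ["stop", "edge", "go"]  # most to least conservative

def leaf(recommendations):
    """Pick the highest-priority (most conservative) recommended action."""
    recommended = set(recommendations.values())
    for action in PRIORITY:
        if action in recommended:
            return action
    raise ValueError("no known action recommended")

# Three POMDP-based DCs, one per entity currently near the intersection:
recs = {"vehicle_left": "go", "vehicle_ahead": "edge", "pedestrian": "stop"}
print(leaf(recs))  # 'stop': one cautious DC vetoes the rest
```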


#17 Rationalisation of Profiles of Abstract Argumentation Frameworks: Extended Abstract

Authors: Stephane Airiau, Elise Bonzon, Ulle Endriss, Nicolas Maudet, Julien Rossit

We review a recently introduced model in which each of a number of agents is endowed with an abstract argumentation framework reflecting her individual views regarding a given set of arguments. A question arising in this context is whether the diversity of views observed in such a situation is consistent with the assumption that every individual argumentation framework is induced by a combination of, first, some basic factual information and, second, the personal preferences of the agent concerned. We treat this question of rationalisability of a profile as an algorithmic problem and identify tractable and intractable cases. This is useful for understanding what types of profiles can reasonably be expected to occur in a multiagent system.


#18 Unsatisfiable Core Shrinking for Anytime Answer Set Optimization

Authors: Mario Alviano, Carmine Dodaro

Efficient algorithms for the computation of optimum stable models are based on unsatisfiable core analysis. However, these algorithms essentially run to completion, providing few or even no suboptimal stable models. This drawback can be circumvented by shrinking unsatisfiable cores. Interestingly, the resulting anytime algorithm can solve more instances than the original algorithm.
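
A sketch of deletion-based core shrinking against an assumed incremental `solve(assumptions)` oracle; the interface is a hypothetical stand-in for the solver internals, not the paper's algorithm verbatim:

```python
# Deletion-based unsatisfiable-core shrinking. `solve` stands for an
# incremental ASP/SAT oracle returning (is_sat, core) under assumptions;
# this interface is a hypothetical stand-in for the solver internals.

def shrink_core(core, solve):
    """Try dropping each assumption; keep it only if needed for UNSAT."""
    kept, remaining = [], list(core)
    while remaining:
        lit = remaining.pop(0)
        is_sat, sub_core = solve(kept + remaining)   # try without `lit`
        if is_sat:
            kept.append(lit)                         # `lit` was necessary
        else:
            sub = set(sub_core)                      # still UNSAT: tighten further
            remaining = [l for l in remaining if l in sub]
    return kept
```

Each shrunk core lets the optimization loop relax fewer soft atoms per iteration, so intermediate (suboptimal) stable models can be reported earlier, which is the anytime behaviour the abstract describes.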


#19 A Verified SAT Solver Framework with Learn, Forget, Restart, and Incrementality

Authors: Jasmin Christian Blanchette, Mathias Fleury, Christoph Weidenbach

We developed a formal framework for SAT solving using the Isabelle/HOL proof assistant. Through a chain of refinements, an abstract CDCL (conflict-driven clause learning) calculus is connected to a SAT solver that always terminates with correct answers. The framework offers a convenient way to prove theorems about the SAT solver and experiment with variants of the calculus. Compared with earlier verifications, the main novelties are the inclusion of the CDCL rules for forget, restart, and incremental solving and the use of refinement.


#20 Concerning Referring Expressions in Query Answers

Authors: Alexander Borgida, David Toman, Grant Weddell

A referring expression in linguistics is a noun phrase that identifies individuals to listeners. In the context of a query over a first-order knowledge base, the referring expressions used in answers are usually constant symbols. This paper motivates and initiates the exploration of allowing more general formulas, called singular referring expressions, to replace constants in this role. Referring expression types play a novel and significant role in analyzing the properties of candidate expressions.
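
An illustration of the contrast (our example, not the paper's):

```latex
% A singular referring expression replaces a constant answer with a formula
% that is true of exactly one individual in the knowledge base.
\text{constant answer: } x = \mathit{mary}
\qquad
\text{singular referring expression: } \mathit{Dean}(x) \wedge \mathit{worksIn}(x, \mathit{engineering})
```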


#21 Open-World Probabilistic Databases: An Abridged Report

Authors: Ismail Ilkan Ceylan, Adnan Darwiche, Guy Van den Broeck

Large-scale probabilistic knowledge bases are becoming increasingly important in academia and industry alike. They are constantly extended with new data, powered by modern information extraction tools that associate probabilities with database tuples. In this paper, we revisit the semantics underlying such systems. In particular, the closed-world assumption of probabilistic databases, that facts not in the database have probability zero, clearly conflicts with their everyday use. To address this discrepancy, we propose an open-world probabilistic database semantics, which relaxes the probabilities of open facts to default intervals. For this open-world setting, we lift the existing data complexity dichotomy of probabilistic databases, and propose an efficient evaluation algorithm for unions of conjunctive queries. We also show that query evaluation can become harder for non-monotone queries.
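
Sketched from the abstract, with λ as the default upper probability of open facts (notation ours):

```latex
% Open-world reading: a fact t absent from database D gets a default
% probability interval rather than probability zero, so a query Q
% evaluates to an interval of probabilities.
t \notin \mathcal{D} \;\Rightarrow\; P(t) \in [0, \lambda]
\qquad
P(Q) \in \big[\underline{P}(Q),\; \overline{P}(Q)\big]
```

The lower bound coincides with the closed-world probability, while the upper bound lets every open fact the query can use take probability λ.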


#22 Efficient Techniques for Crowdsourced Top-k Lists

Authors: Luca de Alfaro, Vassilis Polychronopoulos, Neoklis Polyzotis

We focus on the problem of obtaining top-k lists of items from larger itemsets, using human workers to perform comparisons among items. An example application is short-listing a large set of college applications, using advanced students as workers. We describe novel efficient techniques and explore their tolerance to adversarial behavior and the tradeoffs among different measures of performance (latency, expense and quality of results). We empirically evaluate the proposed techniques against prior art using simulations as well as real crowds on Amazon Mechanical Turk. A randomized variant of the proposed algorithms achieves significant budget savings, especially for very large itemsets and large top-k lists, with negligible risk of lowering the quality of the output.
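
To fix ideas, here is a naive comparison-based baseline of the kind the paper improves on; `ask_workers` is a simulated stand-in for posting pairwise comparisons to a crowd such as Mechanical Turk and majority-voting the answers:

```python
# Naive crowd top-k baseline: k successive single-elimination tournaments,
# removing each winner. Worker behaviour is simulated with noisy access to
# an invented ground-truth score; this is not the paper's algorithm.
import random

random.seed(0)
TRUE_SCORE = {item: random.random() for item in range(40)}  # hidden ground truth

def ask_workers(a, b, workers=3, p_err=0.2):
    """Majority vote of simulated noisy workers comparing items a and b."""
    votes = sum(
        (TRUE_SCORE[a] > TRUE_SCORE[b]) ^ (random.random() < p_err)
        for _ in range(workers)
    )
    return a if votes * 2 > workers else b

def crowd_top_k(items, k):
    items, top = list(items), []
    for _ in range(k):
        pool = list(items)
        while len(pool) > 1:                     # single-elimination max-finding
            nxt = [ask_workers(pool[i], pool[i + 1])
                   for i in range(0, len(pool) - 1, 2)]
            if len(pool) % 2:
                nxt.append(pool[-1])
            pool = nxt
        top.append(pool[0])
        items.remove(pool[0])
    return top

print(crowd_top_k(TRUE_SCORE, k=5))
```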


#23 Predicting Human Similarity Judgments with Distributional Models: The Value of Word Associations

Authors: Simon De Deyne, Amy Perfors, Daniel J. Navarro

To represent the meaning of a word, most models use external language resources, such as text corpora, to derive the distributional properties of word usage. In this study, we propose that internal language models, which are more closely aligned with the mental representations of words, can be used to derive new theoretical questions regarding the structure of the mental lexicon. A comparison with internal models also puts into perspective a number of assumptions underlying recently proposed distributional text-based models, and could provide important insights into cognitive science, including linguistics and artificial intelligence. We focus on word-embedding models, which have been proposed to learn aspects of word meaning in a manner similar to humans, and contrast them with internal language models derived from a new extensive dataset of word associations. An evaluation using relatedness judgments shows that internal language models consistently outperform current state-of-the-art text-based external language models. This suggests alternative approaches to represent word meaning using properties that are not encoded in text.
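
A toy version of the core comparison, deriving vectors from (invented) association counts and scoring relatedness by cosine; the study itself uses a large association dataset and stronger models:

```python
# Toy illustration: word vectors from cue-response association counts,
# relatedness scored by cosine. The counts are made-up stand-ins.
import numpy as np

cues = ["coffee", "tea", "car"]
responses = ["drink", "hot", "cup", "road", "drive"]
# counts[i, j]: how often response j was given for cue i (invented numbers)
counts = np.array([
    [30, 20, 25, 0, 1],    # coffee
    [28, 22, 24, 0, 0],    # tea
    [1, 0, 0, 35, 30],     # car
], dtype=float)

vecs = counts / counts.sum(axis=1, keepdims=True)   # response distributions

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print("coffee~tea:", round(cosine(vecs[0], vecs[1]), 3))  # high relatedness
print("coffee~car:", round(cosine(vecs[0], vecs[2]), 3))  # low relatedness
```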


#24 Ensuring Rapid Mixing and Low Bias for Asynchronous Gibbs Sampling

Authors: Christopher De Sa, Kunle Olukotun, Christopher Ré

Gibbs sampling is a Markov chain Monte Carlo technique commonly used for estimating marginal distributions. To speed up Gibbs sampling, there has recently been interest in parallelizing it by executing asynchronously. While empirical results suggest that many models can be efficiently sampled asynchronously, traditional Markov chain analysis does not apply to the asynchronous case, and thus asynchronous Gibbs sampling is poorly understood. In this paper, we derive a better understanding of the two main challenges of asynchronous Gibbs: bias and mixing time. We show experimentally that our theoretical results match practical outcomes.
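
A minimal sketch of the asynchronous regime in question: lock-free ("Hogwild"-style) Gibbs updates on a toy Ising chain, where each update may read slightly stale neighbours. Parameters are toys, not from the paper:

```python
# Asynchronous Gibbs on a tiny Ising chain: several threads update shared
# state without locks, so neighbour reads may be stale. This is exactly the
# regime whose bias and mixing time the paper analyses.
import math
import random
import threading

N, BETA, SWEEPS = 64, 0.4, 2000
state = [random.choice((-1, 1)) for _ in range(N)]

def worker(seed):
    rng = random.Random(seed)
    for _ in range(SWEEPS):
        i = rng.randrange(N)
        nbr = state[(i - 1) % N] + state[(i + 1) % N]     # possibly stale reads
        p_up = 1.0 / (1.0 + math.exp(-2.0 * BETA * nbr))  # conditional P(s_i = +1)
        state[i] = 1 if rng.random() < p_up else -1       # lock-free write

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print("magnetisation:", sum(state) / N)
```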


#25 Intuitionistic Layered Graph Logic

Authors: Simon Docherty, David Pym

Models of complex systems are widely used in the physical and social sciences, and the concept of layering, typically building upon graph-theoretic structure, is a common feature. We describe an intuitionistic substructural logic that gives an account of layering. As in other bunched systems, the logic includes the usual intuitionistic connectives, together with a non-commutative, non-associative conjunction (used to capture layering) and its associated implications. We give a soundness and completeness theorem for a labelled tableaux system with respect to a Kripke semantics on graphs. To demonstrate the utility of the logic, we show how to represent systems and security examples, illuminating the relationship between services/policies and the infrastructures/architectures to which they are applied.
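
A very rough sketch of the layering semantics (our notation; the paper's Kripke semantics over graphs is richer, and this is not its exact clause):

```latex
% Sketch: a layered conjunction holds of a graph that decomposes into two
% layers joined by a distinguished edge set E, one layer per conjunct.
G \models \varphi \mathbin{\blacktriangleright} \psi
\;\iff\;
\exists G_1, G_2.\; G = G_1 \mathbin{@_E} G_2,
\quad G_1 \models \varphi,
\quad G_2 \models \psi
```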