Reasoning does not work well when done in isolation from its significance, both for the needs and interests of an agent and with respect to the wider world. Moreover, those issues may best be handled with a new sort of data structure, one that goes beyond the knowledge base to incorporate aspects of perceptual knowledge and more, and in which a kind of anticipatory action may be key.
In a question-and-answer format, this summary paper presents background material for the AAAI-16 Senior Member Presentation Track “Blue Sky Ideas” talk of the same name.
A key acceptability criterion for artificial agents will be the possible moral implications of their actions. In particular, intelligent persuasive systems (systems designed to influence humans via communication) constitute a highly sensitive topic because of their intrinsically social nature. Still, ethical studies in this area are rare and tend to focus on the output of the required action; this work focuses instead on the acceptability of persuasive acts themselves. Building systems able to persuade while remaining ethically acceptable requires that they be capable of intervening flexibly and of deciding which specific persuasive strategy to use. We show how a behavioral approach, based on human assessment of moral dilemmas, yields results that will lead to more ethically appropriate systems. Our experiments address the type of persuader, the strategies adopted, and the circumstances. They surface dimensions that characterize interpersonal differences in the moral acceptability of machine-performed persuasion and that can be used for strategy adaptation. We also show that the prevailing preconceived negative attitude toward persuasion by a machine is not predictive of actual moral acceptability judgements when subjects are confronted with specific cases.
The future will see autonomous machines acting in the same environments as humans, in areas as diverse as driving, assistive technology, and health care. Think of self-driving cars, companion robots, and medical diagnosis support systems. We also believe that humans and machines will often need to work together and agree on common decisions, so hybrid collective decision-making systems will be in great demand. In this scenario, both machines and collective decision-making systems should follow some form of moral values and ethical principles (appropriate to the context in which they act, but always aligned with humans' values), as well as safety constraints. Indeed, humans would more readily accept and trust machines that behave as ethically as other humans in the same environment. Such principles would also make it easier for machines to determine their actions and to explain their behavior in terms understandable by humans. Moreover, machines and humans will often need to make decisions together, whether by consensus or by reaching a compromise, and this would be facilitated by shared moral values and ethical principles.
We survey some recent research regarding strategic behaviour in resource allocation problems, focusing on the fair division of indivisible goods. We consider a number of computational questions, such as how a single strategic agent can misreport their preferences to ensure a particular outcome, and how agents can compute a Nash equilibrium when they all act strategically. We also identify a number of future directions, like dealing with non-additive utilities and with partial or probabilistic information about the preferences of other agents.
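To make the first of these questions concrete, the sketch below shows strategic misreporting under a round-robin (sequential allocation) mechanism with additive utilities. The mechanism choice, agent names, and valuations are illustrative assumptions, not an example taken from the survey itself.

```python
# Minimal sketch: manipulating round-robin allocation of indivisible goods.
# Agents take turns picking their highest-ranked remaining item according to
# the preference order they *report*; their true utilities are additive.

def round_robin(agents, reported_order, items):
    """Allocate items by letting agents pick in turn from their reported ranking."""
    remaining = list(items)
    allocation = {a: [] for a in agents}
    turn = 0
    while remaining:
        agent = agents[turn % len(agents)]
        pick = next(g for g in reported_order[agent] if g in remaining)
        allocation[agent].append(pick)
        remaining.remove(pick)
        turn += 1
    return allocation

def utility(bundle, values):
    return sum(values[g] for g in bundle)

items = ["a", "b", "c", "d"]
values = {  # illustrative true valuations
    "ann": {"a": 10, "b": 9, "c": 1, "d": 0},
    "bob": {"b": 10, "c": 9, "a": 1, "d": 0},
}
truthful = {ag: sorted(items, key=lambda g: -values[ag][g]) for ag in values}

# Truthful reports: ann takes a, bob takes b, ann takes c, bob takes d.
alloc = round_robin(["ann", "bob"], truthful, items)
print(utility(alloc["ann"], values["ann"]))  # 11

# Ann misreports b ahead of a: she grabs the contested item first.
lying = dict(truthful, ann=["b", "a", "c", "d"])
alloc = round_robin(["ann", "bob"], lying, items)
print(utility(alloc["ann"], values["ann"]))  # 19
```

The manipulation pays off because ann can safely defer taking item a: bob barely values it, so it is still available on her second turn.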
Rational verification is concerned with establishing whether a given temporal logic formula φ is satisfied in some or all equilibrium computations of a multi-agent system, that is, whether the system will exhibit the behaviour φ under the assumption that agents within the system act rationally in pursuit of their preferences. After motivating and introducing the framework of rational verification, we present formal models through which rational verification can be studied, and survey the complexity of key decision problems. We give an overview of a prototype software tool for rational verification, and conclude with a discussion and pointers to related work.
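As a toy illustration of the underlying decision problems, the sketch below collapses rational verification to a one-shot normal-form game: it enumerates the pure Nash equilibria and then asks whether some or all equilibrium outcomes satisfy a property, mirroring the flavour of the "some" and "all" equilibrium checks. Real rational verification works over infinite computations and temporal logic formulae; the game, payoffs, and predicate here are illustrative assumptions.

```python
# Minimal sketch: equilibrium checking in a one-shot two-player game.
from itertools import product

actions = {"row": ["cooperate", "defect"], "col": ["cooperate", "defect"]}
payoff = {  # (row action, col action) -> (row payoff, col payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 4),
    ("defect",    "cooperate"): (4, 0),
    ("defect",    "defect"):    (1, 1),
}

def is_nash(profile):
    """A profile is a pure Nash equilibrium if no agent gains by unilaterally deviating."""
    r, c = profile
    if any(payoff[(r2, c)][0] > payoff[(r, c)][0] for r2 in actions["row"]):
        return False
    if any(payoff[(r, c2)][1] > payoff[(r, c)][1] for c2 in actions["col"]):
        return False
    return True

# Stand-in for the formula phi: a simple predicate over the outcome.
phi = lambda profile: "cooperate" in profile

equilibria = [p for p in product(actions["row"], actions["col"]) if is_nash(p)]
print(equilibria)                       # [('defect', 'defect')]
print(any(phi(p) for p in equilibria))  # "some equilibrium satisfies phi": False
print(all(phi(p) for p in equilibria))  # "all equilibria satisfy phi": False
```

The example exposes the gap that rational verification is about: the outcome (cooperate, cooperate) satisfies the property, but it is not sustained by rational agents, so the system will not exhibit that behaviour in equilibrium.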
Advances in natural language processing (NLP) and educational technology, as well as the availability of unprecedented amounts of educationally relevant text and speech data, have led to an increasing interest in using NLP to address the needs of teachers and students. Educational applications differ in many ways, however, from the types of applications for which NLP systems are typically developed. This paper organizes and gives an overview of research in this area, focusing on opportunities as well as challenges.
Due to the decentralized nature of the Semantic Web, the same real-world entity may be described in various data sources with different ontologies and assigned syntactically distinct identifiers. To facilitate data utilization and consumption on the Semantic Web, without compromising the freedom of people to publish their data, one critical problem is to appropriately interlink such heterogeneous data. This interlinking process is sometimes referred to as Entity Coreference, i.e., finding which identifiers refer to the same real-world entity. In this paper, we first summarize state-of-the-art algorithms for detecting such coreference relationships between ontology instances. We then discuss various techniques for scaling entity coreference to large-scale datasets. Finally, we present widely adopted evaluation datasets and metrics, and compare the performance of the state-of-the-art algorithms on those datasets.
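As a minimal illustration of the pairwise core shared by many such algorithms, the sketch below compares two RDF-style instance descriptions property by property using a token-level Jaccard score and a fixed decision threshold. The instances, property names, and the 0.5 threshold are illustrative assumptions, not a specific published algorithm from this survey.

```python
# Minimal sketch: pairwise entity coreference over property-value descriptions.

def tokens(value):
    return set(value.lower().split())

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def similarity(inst1, inst2):
    """Average Jaccard similarity over the properties both instances share."""
    shared = inst1.keys() & inst2.keys()
    if not shared:
        return 0.0
    return sum(jaccard(tokens(inst1[p]), tokens(inst2[p])) for p in shared) / len(shared)

# Two descriptions of (possibly) the same person, published with different
# vocabularies but overlapping property values.
source_a = {"name": "J. Smith", "affiliation": "Lehigh University"}
source_b = {"name": "John Smith", "affiliation": "Lehigh University",
            "homepage": "http://example.org/jsmith"}

THRESHOLD = 0.5  # illustrative cutoff
score = similarity(source_a, source_b)
print(score, score >= THRESHOLD)  # ~0.67 -> flagged as coreferent
```

At web scale, the quadratic number of pairwise comparisons is itself the bottleneck, which is why the scaling techniques discussed above typically add a cheap candidate-selection (blocking) step before any detailed comparison.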