IJCAI.2018 - Others

Total: 161

#1 Language to Action: Towards Interactive Task Learning with Physical Agents

Authors: Joyce Y. Chai, Qiaozi Gao, Lanbo She, Shaohua Yang, Sari Saba-Sadiya, Guangyue Xu

Language communication plays an important role in human learning and knowledge acquisition. With the emergence of a new generation of cognitive robots, empowering these robots to learn directly from human partners becomes increasingly important. This paper gives a brief introduction to interactive task learning where humans can teach physical agents new tasks through natural language communication and action demonstration. It discusses research challenges and opportunities in language and communication grounding that are critical in this process. It further highlights the importance of commonsense knowledge, particularly the very basic physical causality knowledge, in grounding language to perception and action.


#2 Model-free, Model-based, and General Intelligence

Author: Hector Geffner

During the 1960s and 70s, AI researchers explored intuitions about intelligence by writing programs that displayed intelligent behavior. Many good ideas came out of this work, but programs written by hand were not robust or general. After the 80s, research increasingly shifted to the development of learners capable of inferring behavior and functions from experience and data, and solvers capable of tackling well-defined but intractable models like SAT, classical planning, Bayesian networks, and POMDPs. The learning approach has achieved considerable success but results in black boxes that do not have the flexibility, transparency, and generality of their model-based counterparts. Model-based approaches, on the other hand, require models and scalable algorithms. Model-free learners and model-based solvers indeed have close parallels with Systems 1 and 2 in current theories of the human mind: the first, a fast, opaque, and inflexible intuitive mind; the second, a slow, transparent, and flexible analytical mind. In this paper, I review developments in AI and draw on these theories to discuss the gap between model-free learners and model-based solvers, a gap that needs to be bridged in order to have intelligent systems that are robust and general.


#3 Interactive, Collaborative Robots: Challenges and Opportunities

Authors: Danica Kragic, Joakim Gustafson, Hakan Karaoguz, Patric Jensfelt, Robert Krug

Robotic technology has transformed the manufacturing industry ever since the first industrial robot was put into use in the early 1960s. The challenge of developing flexible solutions, where production lines can be quickly re-planned, adapted, and structured for new or slightly changed products, is still an important open problem. Industrial robots today are still largely preprogrammed for their tasks, unable to detect errors in their own performance or to robustly interact with a complex environment and a human worker. The challenges are even more serious when it comes to various types of service robots. Full robot autonomy, including natural interaction, learning from and with humans, and safe and flexible performance of challenging tasks in unstructured environments, will remain out of reach for the foreseeable future. In the envisioned future factory setups and home and office environments, humans and robots will share the same workspace and perform different object manipulation tasks in a collaborative manner. We discuss some of the major challenges of developing such systems and provide examples of the current state of the art.


#4 On a Scientific Discipline (Once) Named AI

Author: Wolfgang Bibel

The paper envisions a scientific discipline of fundamental importance, comparable to Physics or Biology, recalling that a discipline of such a contour was originally intended by the founders of Artificial Intelligence (AI). AI today, however, is far from being such an encompassing discipline, sharing its research interests with at least half a dozen other disciplines. After analyzing this situation and its background, we discuss the consequences of this splintering by means of selected challenges. We deliberate on what could be done to alleviate the disadvantages resulting from the current state of affairs and to leverage AI's current prominence in public attention to re-engage in the field's broader mission.


#5 Towards Consumer-Empowering Artificial Intelligence

Authors: Giuseppe Contissa, Francesca Lagioia, Marco Lippi, Hans-Wolfgang Micklitz, Przemyslaw Palka, Giovanni Sartor, Paolo Torroni

Artificial Intelligence and Law is undergoing a critical transformation. Traditionally focused on the development of expert systems and on a scholarly effort to develop theories and methods for knowledge representation and reasoning in the legal domain, this discipline is now adapting to a sudden change of scenery. No longer confined to the walls of academia, it has welcomed new actors, such as businesses and companies, who are willing to play a major role and seize new opportunities offered by the same transformational impact that recent AI breakthroughs are having on many other areas. As it happens, commercial interests create new opportunities, but they also represent a potential threat to consumers, as the balance of power seems increasingly determined by the availability of data. We believe that while this transformation is still in progress, the time is ripe for the next frontier of this field of study, where a new shift of balance may be enabled by tools and services that are of service not only to businesses but also to consumers and, more generally, civil society. We call that frontier consumer-empowering AI.


#6 Artificial Intelligence Conferences Closeness

Authors: Sébastien Konieczny, Emmanuel Lonca

We study the evolution of the closeness of Artificial Intelligence conferences using the coscinus tool. Coscinus computes the closeness between publication venues from the co-publication habits of authors: the more authors publish in both of two conferences, the closer these two conferences are considered to be. In this paper we analyze the main Artificial Intelligence conferences using principal component analysis and clustering performed on this closeness relation.
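The closeness notion described in the abstract can be illustrated with a toy computation. The exact coscinus formula is not given here, so the snippet below assumes a simple Jaccard overlap between two conferences' author sets as a stand-in; the conference names and authors are invented.

```python
# Illustrative conference-closeness computation from authors' publication
# records. The coscinus tool's exact measure is not specified in the
# abstract; Jaccard similarity over author sets is assumed here.

def closeness(authors_a, authors_b):
    """Jaccard similarity of two conferences' author sets."""
    a, b = set(authors_a), set(authors_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

ijcai = {"alice", "bob", "carol", "dave"}
aaai = {"bob", "carol", "erin"}
kdd = {"frank", "grace"}

print(closeness(ijcai, aaai))  # 0.4: two shared authors out of five total
print(closeness(ijcai, kdd))   # 0.0: disjoint author sets
```

A full analysis, as in the paper, would compute this relation over all conference pairs and then apply PCA and clustering to the resulting similarity matrix.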


#7 Quantifying Algorithmic Improvements over Time

Authors: Lars Kotthoff, Alexandre Fréchette, Tomasz Michalak, Talal Rahwan, Holger H. Hoos, Kevin Leyton-Brown

Assessing the progress made in AI and contributions to the state of the art is of major concern to the community. Recently, Fréchette et al. [2016] advocated performing such analysis via the Shapley value, a concept from coalitional game theory. In this paper, we argue that while this general idea is sound, it unfairly penalizes older algorithms that advanced the state of the art when introduced but were then outperformed by modern counterparts. Driven by this observation, we introduce the temporal Shapley value, a measure that addresses this problem while maintaining the desirable properties of the (classical) Shapley value. We use the temporal Shapley value to analyze the progress made in (i) the different versions of the Quicksort algorithm; (ii) the annual SAT competitions 2007–2014; and (iii) an annual Constraint Programming competition, namely the MiniZinc challenge 2014–2016. Our analysis reveals novel insights into the development of these important areas of research over time.
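As background, the classical Shapley value that the temporal variant builds on can be computed exactly for a small portfolio. The sketch below assumes a toy setting in which a coalition of algorithms is worth the performance of its best member (a "virtual best solver"); the performance numbers are invented.

```python
from itertools import permutations
from math import factorial

# Exact classical Shapley value of each algorithm's contribution to a
# portfolio, where a coalition's value is the performance of its best
# member. The performance numbers below are hypothetical.

perf = {"A": 3.0, "B": 5.0, "C": 6.0}  # e.g. instances solved (made up)

def coalition_value(coalition):
    return max((perf[a] for a in coalition), default=0.0)

def shapley(algorithms):
    phi = {a: 0.0 for a in algorithms}
    for order in permutations(algorithms):
        members = []
        for a in order:
            # marginal contribution of `a` when joining in this order
            phi[a] += coalition_value(members + [a]) - coalition_value(members)
            members.append(a)
    # average the marginal contributions over all n! join orders
    return {a: v / factorial(len(algorithms)) for a, v in phi.items()}

print(shapley(["A", "B", "C"]))  # {'A': 1.0, 'B': 2.0, 'C': 3.0}
```

The temporal Shapley value introduced in the paper additionally accounts for when each algorithm entered the state of the art; this sketch covers only the classical quantity. Note the efficiency property: the values sum to the grand coalition's value (6.0 here).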


#8 Evolving AI from Research to Real Life – Some Challenges and Suggestions

Authors: Sandya Mannarswamy, Shourya Roy

Artificial Intelligence (AI) has come a long way from being just science fiction or an academic research curiosity to a point where it is poised to impact human life significantly. AI-driven applications such as autonomous vehicles, medical diagnostics, and conversational agents are becoming a reality. In this position paper, we argue that there are certain challenges AI still needs to overcome in its evolution from research to real life. We outline some of these challenges and our suggestions for addressing them. We provide pointers to similar issues, and their resolutions, in disciplines such as psychology and medicine, from which the AI community can draw lessons. More importantly, this paper is intended to focus the attention of the AI research community on translating AI research efforts into real-world deployments.


#9 The Facets of Artificial Intelligence: A Framework to Track the Evolution of AI

Authors: Fernando Martínez-Plumed, Bao Sheng Loe, Peter Flach, Seán Ó hÉigeartaigh, Karina Vold, José Hernández-Orallo

We present nine facets for the analysis of the past and future evolution of AI. Each facet also has a set of edges that can summarize different trends and contours in AI. With them, we first conduct a quantitative analysis using information from two decades of AAAI/IJCAI conferences and around 50 years of documents from AITopics, an official AAAI database, illustrated by several plots. We then perform a qualitative analysis using the facets and edges, locating AI systems in the intelligence landscape and the discipline as a whole. This analytical framework provides a more structured and systematic way of looking at the shape and boundaries of AI.


#10 Finite Controllability of Conjunctive Query Answering with Existential Rules: Two Steps Forward

Authors: Giovanni Amendola, Nicola Leone, Marco Manna

Reasoning with existential rules typically consists of checking whether a Boolean conjunctive query is satisfied by all models of a first-order sentence having the form of a conjunction of Datalog rules extended with existential quantifiers in rule heads. To guarantee decidability, five basic decidable classes - linear, weakly-acyclic, guarded, sticky, and shy - have been singled out, together with several generalizations and combinations. For all the basic classes except shy, the important property of finite controllability has been proved, ensuring that a query is satisfied by all models of the sentence if, and only if, it is satisfied by all of its finite models. This paper takes two steps forward: (i) it devises a general technique to facilitate the process of (dis)proving finite controllability of an arbitrary class of existential rules; and (ii) it specializes the technique to complete the picture for the five classes mentioned, by showing that shy is also finitely controllable.


#11 Weighted Bipolar Argumentation Graphs: Axioms and Semantics

Authors: Leila Amgoud, Jonathan Ben-Naim

The paper studies how arguments can be evaluated in weighted bipolar argumentation graphs (i.e., graphs whose arguments have basic weights and may be supported and attacked). It introduces principles that an evaluation method (or semantics) would satisfy, analyzes existing semantics with respect to them, and finally proposes a new semantics for the class of non-maximal acyclic graphs.


#12 TensorCast: Forecasting Time-Evolving Networks with Contextual Information

Authors: Miguel Araújo, Pedro Ribeiro, Christos Faloutsos

Can we forecast future connections in a social network? Can we predict who will start using a given hashtag on Twitter, leveraging contextual information such as who follows or retweets whom to improve our predictions? In this paper we present an abridged report on TensorCast, an award-winning method for forecasting time-evolving networks that uses coupled tensors to incorporate multiple information sources. TensorCast is scalable (linearithmic in the number of connections), effective (more precise than competing methods), and general (applicable to any data source representable by a tensor). We also showcase our method applied to forecasting two large-scale heterogeneous real-world temporal networks, namely Twitter and DBLP.


#13 A Unifying View of Geometry, Semantics, and Data Association in SLAM

Authors: Nikolay Atanasov, Sean L. Bowman, Kostas Daniilidis, George J. Pappas

Traditional approaches for simultaneous localization and mapping (SLAM) rely on geometric features such as points, lines, and planes to infer the environment structure. They make hard decisions about the (data) association between observed features and mapped landmarks to update the environment model. This paper makes two contributions to the state of the art in SLAM. First, it generalizes the purely geometric model by introducing semantically meaningful objects, represented as structured models of mid-level part features. Second, instead of making hard, potentially wrong associations between semantic features and objects, it shows that SLAM inference can be performed efficiently with probabilistic data association. The approach not only allows building meaningful maps (containing doors, chairs, cars, etc.) but also offers significant advantages in ambiguous environments.


#14 Reduced Cost Fixing for Maximum Satisfiability

Authors: Fahiem Bacchus, Antti Hyttinen, Matti Järvisalo, Paul Saikko

Maximum satisfiability (MaxSAT) offers a competitive approach to solving NP-hard real-world optimization problems. While state-of-the-art MaxSAT solvers rely heavily on Boolean satisfiability (SAT) solvers, a recent trend, brought on by MaxSAT solvers implementing the so-called implicit hitting set (IHS) approach, is to integrate techniques from the realm of integer programming (IP) into the solving process. This allows for making use of additional IP solving techniques to further speed up MaxSAT solving. In this line of work, we investigate the integration of the technique of reduced cost fixing from the IP realm into IHS solvers, and empirically show that reduced cost fixing considerably speeds up a state-of-the-art MaxSAT solver implementing the IHS approach.


#15 Improving Information Extraction from Images with Learned Semantic Models

Authors: Stephan Baier, Yunpu Ma, Volker Tresp

Many applications require an understanding of an image that goes beyond the simple detection and classification of its objects. In particular, a great deal of semantic information is carried in the relationships between objects. We have previously shown that the combination of a visual model and a statistical semantic prior model can improve on the task of mapping images to their associated scene descriptions. In this paper, we review the model and compare it to a novel conditional multi-way model for visual relationship detection, which does not include an explicitly trained visual prior model. We also discuss potential relationships between the proposed methods and memory models of the human brain.


#16 Learning with Sparse and Biased Feedback for Personal Search

Authors: Michael Bendersky, Xuanhui Wang, Marc Najork, Donald Metzler

Personal search, including email, on-device, and personal media search, has recently attracted considerable attention from the information retrieval community. In this paper, we provide an overview of the challenges and opportunities of learning with implicit user feedback (e.g., click data) in personal search. Implicit user feedback provides a convenient source of supervision for ranking models in personal search. This feedback, however, has two major drawbacks: it is highly sparse and biased due to the personal nature of queries and documents. We demonstrate how these drawbacks can be overcome, and empirically demonstrate the benefits of learning with implicit feedback in the context of a large-scale email search engine.


#17 Dynamic Dependency Awareness for QBF

Authors: Joshua Blinkhorn, Olaf Beyersdorff

We provide the first proof complexity results for QBF dependency calculi. By showing that the reflexive resolution path dependency scheme admits exponentially shorter Q-resolution proofs on a known family of instances, we answer a question first posed by Slivovsky and Szeider (SAT 2014). Further, we introduce a new calculus in which a dependency scheme is applied dynamically. We demonstrate the further potential of this approach beyond that of the existing static system with an exponential separation.


#18 The Finite Model Theory of Bayesian Networks: Descriptive Complexity

Authors: Fabio Gagliardi Cozman, Denis Deratani Mauá

We adapt the theory of descriptive complexity to Bayesian networks, to quantify the expressivity of specifications based on predicates and quantifiers. We show that Bayesian network specifications that employ first-order quantification capture the complexity class PP; by allowing quantification over predicates, the resulting Bayesian network specifications capture each class in the hierarchy PP^(NP^...^NP), a result that does not seem to have an equivalent in the literature.


#19 Combinatorial Cost Sharing

Authors: Shahar Dobzinski, Shahar Ovadia

We introduce a combinatorial variant of the cost sharing problem: several services can be provided to each player and each player values every combination of services differently. A publicly known cost function specifies the cost of providing every possible combination of services. A combinatorial cost sharing mechanism is a protocol that decides which services each player gets and at what price. We look for dominant strategy mechanisms that are (economically) efficient and cover the cost, ideally without overcharging (i.e., budget balanced). Note that unlike the standard cost sharing setting, combinatorial cost sharing is a multi-parameter domain. This makes designing dominant strategy mechanisms with good guarantees a challenging task. We present the Potential Mechanism, a combination of the VCG mechanism and a well-known tool from the theory of cooperative games: Hart and Mas-Colell's potential function. The Potential Mechanism is a dominant strategy mechanism that always covers the incurred cost. When the cost function is subadditive, the same mechanism is also approximately efficient. Our main technical contribution shows that when the cost function is submodular, the Potential Mechanism is approximately budget balanced in three settings: supermodular valuations, symmetric cost function and general symmetric valuations, and two players with general valuations.


#20 Importance Sampling for Fair Policy Selection

Authors: Shayan Doroudi, Philip S. Thomas, Emma Brunskill

We consider the problem of off-policy policy selection in reinforcement learning: using historical data generated by running one policy to compare two or more policies. We show that approaches based on importance sampling can be unfair: they can select the worse of two policies more often than not. We then give an example showing that importance sampling is systematically unfair in a practically relevant setting; namely, it unreasonably favors shorter trajectory lengths. We then present sufficient conditions that theoretically guarantee fairness. Finally, we provide a practical importance-sampling-based estimator to help mitigate the unfairness due to varying trajectory lengths.
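A minimal sketch of the (unweighted) importance-sampling estimator under discussion, in a toy bandit-style setting; the policies and logged data below are invented, not from the paper.

```python
# Unweighted importance-sampling estimator for off-policy evaluation:
# each logged trajectory's return is reweighted by the product of
# per-step probability ratios pi_e(a) / pi_b(a).

def is_estimate(trajectories, pi_e, pi_b):
    """trajectories: list of (actions, return) pairs logged under pi_b."""
    total = 0.0
    for actions, ret in trajectories:
        weight = 1.0
        for a in actions:
            weight *= pi_e[a] / pi_b[a]
        total += weight * ret
    return total / len(trajectories)

pi_b = {0: 0.5, 1: 0.5}  # behaviour policy that generated the data
pi_e = {0: 0.8, 1: 0.2}  # evaluation policy we want to assess
logged = [([0], 1.0), ([1], 0.0)]  # action 0 yields reward 1, action 1 yields 0

print(is_estimate(logged, pi_e, pi_b))  # 0.8, the true value of pi_e here
```

Because the weight is a product of one ratio per step, longer trajectories accumulate more extreme weights, which is the source of the length-related unfairness the paper analyzes.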


#21 Inductive Certificates of Unsolvability for Domain-Independent Planning

Authors: Salomé Eriksson, Gabriele Röger, Malte Helmert

If a planning system outputs a solution for a given problem, it is simple to verify that the solution is valid. However, if a planner claims that a task is unsolvable, we currently have no choice but to trust the planner blindly. We propose a sound and complete class of certificates of unsolvability which can be verified efficiently by an independent program. To highlight their practical use, we show how these certificates can be generated for a wide range of state-of-the-art planning techniques with only polynomial overhead for the planner.


#22 Reducing Controversy by Connecting Opposing Views

Authors: Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis, Michael Mathioudakis

Controversial issues often split the population into groups with opposing views. When such issues emerge on social media, we often observe the creation of "echo chambers," i.e., situations where like-minded people reinforce each other's opinion, but do not get exposed to the views of the opposing side. In this paper we study algorithmic techniques for bridging these chambers, and thus reducing controversy. Specifically, we represent discussions as graphs and cast our objective as an edge-recommendation problem. The goal of the recommendation is to reduce the controversy score of the graph, measured by a recently developed metric based on random walks. At the same time, we take into account the acceptance probability of the recommended edges, which represents the probability that a recommended edge materializes in the graph.


#23 Toeplitz Inverse Covariance-based Clustering of Multivariate Time Series Data

Authors: David Hallac, Sagar Vare, Stephen Boyd, Jure Leskovec

Subsequence clustering of multivariate time series is a useful tool for discovering repeated patterns in temporal data. Once these patterns have been discovered, seemingly complicated datasets can be interpreted as a temporal sequence of only a small number of states, or clusters. However, discovering these patterns is challenging because it requires simultaneous segmentation and clustering of the time series. Here we propose a new method of model-based clustering, which we call Toeplitz Inverse Covariance-based Clustering (TICC). Each cluster in the TICC method is defined by a correlation network, or Markov random field (MRF), characterizing the interdependencies between different observations in a typical subsequence of that cluster. Based on this graphical representation, TICC simultaneously segments and clusters the time series data. We solve the TICC problem with a scalable algorithm that efficiently handles tens of millions of observations. We validate our approach by comparing TICC to several state-of-the-art baselines in a series of synthetic experiments, and we then demonstrate on an automobile dataset how TICC can be used to learn interpretable clusters in real-world scenarios.
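The "simultaneous segmentation and clustering" idea can be illustrated, in a heavily simplified form, by a dynamic program that trades a per-step fit cost against a penalty beta for switching clusters between consecutive steps. TICC's actual fit cost is the MRF log-likelihood of a multivariate subsequence; the scalar squared distance to a cluster center below is purely illustrative.

```python
# Simplified sketch of temporally consistent cluster assignment:
# minimise sum of per-step fit costs plus beta per switch, by DP.

def segment(series, centers, beta):
    K = len(centers)
    cost = [[(x - c) ** 2 for c in centers] for x in series]
    # dp[t][k]: best total cost of labelling series[:t+1] ending in cluster k
    dp = [cost[0][:]]
    back = []
    for t in range(1, len(series)):
        row, brow = [], []
        for k in range(K):
            prev = min(range(K), key=lambda j: dp[-1][j] + (beta if j != k else 0))
            row.append(cost[t][k] + dp[-1][prev] + (beta if prev != k else 0))
            brow.append(prev)
        dp.append(row)
        back.append(brow)
    # backtrack the optimal labelling
    k = min(range(K), key=lambda j: dp[-1][j])
    labels = [k]
    for brow in reversed(back):
        k = brow[k]
        labels.append(k)
    return labels[::-1]

series = [0.1, 0.0, 0.2, 5.1, 4.9, 5.0]
print(segment(series, centers=[0.0, 5.0], beta=0.5))  # [0, 0, 0, 1, 1, 1]
```

With a large beta the penalty dominates and the labelling collapses to a single cluster; TICC alternates this kind of assignment step with re-estimating each cluster's Toeplitz-structured inverse covariance.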


#24 A Model of Distributed Query Computation in Client-Server Scenarios on the Semantic Web

Authors: Olaf Hartig, Ian Letter, Jorge Pérez

This paper provides an overview of a model for capturing properties of client-server-based query computation setups. This model can be used to formally analyze different combinations of client and server capabilities, and compare them in terms of various fine-grain complexity measures. While the motivations and the focus of the presented work are related to querying the Semantic Web, the main concepts of the model are general enough to be applied in other contexts as well.


#25 Translation-based Recommendation: A Scalable Method for Modeling Sequential Behavior

Authors: Ruining He, Wang-Cheng Kang, Julian McAuley

Modeling the complex interactions between users and items is at the core of designing successful recommender systems. One key task consists of predicting users' personalized sequential behavior, where the challenge mainly lies in modeling 'third-order' interactions between a user, her previously visited item(s), and the next item to consume. In this paper, we propose a unified method, TransRec, to model such interactions for large-scale sequential prediction. Methodologically, we embed items into a 'transition space' where users are modeled as translation vectors operating on item sequences. Empirically, this approach outperforms the state of the art on a wide spectrum of real-world datasets.
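A minimal sketch of the translation idea: rank candidate next items by their distance to (previous item + user translation vector) in the embedding space. The 2-d embeddings and item names below are toy values, not learned ones, and the scoring rule is a simplification of the model in the paper.

```python
import math

# TransRec-style scoring sketch: a good next item lies close to
# (previous item + user translation vector) in a shared embedding space,
# so candidates are ranked by negative Euclidean distance.

def score(prev_item, user, candidate):
    target = [p + u for p, u in zip(prev_item, user)]
    return -math.dist(target, candidate)

items = {"book": (0.0, 0.0), "sequel": (1.0, 1.0), "unrelated": (4.0, -3.0)}
user = (1.0, 1.0)  # this user's personal translation vector (toy value)

ranked = sorted(items, key=lambda i: score(items["book"], user, items[i]),
                reverse=True)
print(ranked)  # ['sequel', 'book', 'unrelated']
```

In the learned model the embeddings and translation vectors are fit to observed item sequences, so that frequently observed transitions end up close under the user's translation.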