Using personalized explanations to support recommendations has been shown to increase trust and perceived quality. However, to actually obtain better recommendations, there needs to be a means for users to modify the recommendation criteria by interacting with the explanation. We present a novel technique using aspect markers that learns to generate personalized explanations of recommendations from review texts, and we show that human users significantly prefer these explanations over those produced by state-of-the-art techniques. Our work's most important innovation is that it allows users to react to a recommendation by critiquing the textual explanation: removing aspects they dislike or that are no longer relevant, or, symmetrically, adding aspects that are of interest. The system updates its user model and the resulting recommendations according to the critique. The approach rests on a novel unsupervised method for single- and multi-step critiquing with textual explanations. Empirical results show that our system achieves good performance in adapting to the preferences expressed in multi-step critiquing and generates consistent explanations.
Several methods have recently been developed for computing attributions of a neural network's prediction over the input features. However, these existing approaches for computing attributions are noisy and not robust to small perturbations of the input. This paper uses the recently identified connection between dynamical systems and residual neural networks to show that attributions computed over neural stochastic differential equations (SDEs) are less noisy, visually sharper, and quantitatively more robust. Using dynamical systems theory, we theoretically analyze the robustness of these attributions. We also experimentally demonstrate the efficacy of our approach in providing smoother, visually sharper, and quantitatively more robust attributions by computing attributions for ImageNet images using ResNet-50, WideResNet-101, and ResNeXt-101 models.
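A minimal, illustrative sketch of the underlying idea under our own assumptions: a toy residual network whose blocks inject Gaussian noise at inference time (an SDE-style forward pass) and a plain input-gradient attribution averaged over several noisy passes. The class names (NoisyResBlock, ToySDENet) and the attribution routine are hypothetical stand-ins and do not reproduce the paper's ResNet-50/WideResNet-101/ResNeXt-101 setup.

```python
# Sketch only: toy residual blocks with noise injection (SDE-style forward pass)
# and input-gradient attributions averaged over stochastic passes.
import torch
import torch.nn as nn

class NoisyResBlock(nn.Module):  # hypothetical name
    def __init__(self, dim, sigma=0.1):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.sigma = sigma

    def forward(self, x):
        # Residual (Euler) step plus Brownian-like noise: x_{t+1} = x_t + f(x_t) + sigma * eps
        return x + self.f(x) + self.sigma * torch.randn_like(x)

class ToySDENet(nn.Module):
    def __init__(self, dim=16, depth=4, n_classes=3, sigma=0.1):
        super().__init__()
        self.blocks = nn.Sequential(*[NoisyResBlock(dim, sigma) for _ in range(depth)])
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):
        return self.head(self.blocks(x))

def attribution(model, x, target, n_samples=32):
    """Average input gradients over several stochastic forward passes."""
    grads = []
    for _ in range(n_samples):
        x_ = x.clone().requires_grad_(True)
        score = model(x_)[0, target]
        score.backward()
        grads.append(x_.grad.detach())
    return torch.stack(grads).mean(dim=0)

model = ToySDENet()
x = torch.randn(1, 16)
attr = attribution(model, x, target=0)
print(attr.shape)  # torch.Size([1, 16])
```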
Location prediction is of great importance in location-based applications for building smart cities. To our knowledge, existing location prediction models focus on users' preferences for POIs, i.e., the human side of the interaction. However, modeling users' interests from historical trajectories remains limited by data sparsity. Additionally, most existing methods predict the next location from each user's individual data independently, and this sparsity makes it difficult to mine explicit mobility patterns or capture casual behavior for each user. To address these issues, we propose a novel Bi-direction Speculation and Dual-level Association method (BSDA), which considers both users' interest in POIs and POIs' appeal to users. Furthermore, we develop cross-user and cross-POI associations that alleviate data sparsity by using similar users and POIs to enrich the candidates. Experimental results on two public datasets demonstrate that BSDA achieves significant improvements over state-of-the-art methods.
Machine Learning (ML) increasingly informs the allocation of opportunities to individuals and communities in areas such as lending, education, employment, and beyond. Such decisions often impact their subjects' future characteristics and capabilities in an a priori unknown fashion. The decision-maker, therefore, faces exploration-exploitation dilemmas akin to those in multi-armed bandits. Following prior work, we model communities as arms. To capture the long-term effects of ML-based allocation decisions, we study a setting in which the reward from each arm evolves every time the decision-maker pulls that arm. We focus on reward functions that are initially increasing in the number of pulls but may become (and remain) decreasing after a certain point. We argue that an acceptable sequential allocation of opportunities must take an arm's potential for growth into account. We capture these considerations through the notion of policy regret, a much stronger notion than the often-studied external regret, and present an algorithm with provably sub-linear policy regret for sufficiently long time horizons. We empirically compare our algorithm with several baselines and find that it consistently outperforms them, in particular for long time horizons.
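For context, the standard definitions of policy regret and external regret over reward functions that may depend on the entire history of pulls (notation ours, not taken from the paper) can be written as follows; policy regret replays the whole counterfactual history of the benchmark arm, which is what makes it the stronger notion.

```latex
% Learner plays arms a_1,\dots,a_T from a set of k arms; r_t(\cdot) may depend
% on all previous pulls.
\mathrm{Regret}^{\mathrm{policy}}_T
  \;=\; \max_{a \in [k]} \sum_{t=1}^{T} r_t(\underbrace{a,\dots,a}_{t}) \;-\; \sum_{t=1}^{T} r_t(a_1,\dots,a_t),
\qquad
\mathrm{Regret}^{\mathrm{ext}}_T
  \;=\; \max_{a \in [k]} \sum_{t=1}^{T} r_t(a_1,\dots,a_{t-1},a) \;-\; \sum_{t=1}^{T} r_t(a_1,\dots,a_t).
```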
AI research is being challenged with ensuring that autonomous agents learn to behave ethically, namely in alignment with moral values. A common approach, based on Reinforcement Learning techniques, is to design environments that incentivise agents to behave ethically. However, to the best of our knowledge, current approaches do not theoretically guarantee that an agent will learn to behave ethically. Here, we make headway in this direction by proposing a novel way of designing environments in which it is formally guaranteed that an agent learns to behave ethically while pursuing its individual objectives. Our theoretical results are developed within the formal framework of Multi-Objective Reinforcement Learning to ease the handling of an agent's individual and ethical objectives. As a further contribution, we leverage our theoretical results to introduce an algorithm that automates the design of ethical environments.
Word embedding models reflect bias towards genders, ethnicities, and other social groups present in the underlying training data. Metrics such as ECT, RNSB, and WEAT quantify bias in these models based on predefined word lists representing social groups and bias-conveying concepts. How suitable these lists actually are for revealing bias, let alone the bias metrics in general, remains unclear, though. In this paper, we study how to assess the quality of bias metrics for word embedding models. In particular, we present a generic method, Bias Silhouette Analysis (BSA), that quantifies the accuracy and robustness of such a metric and of the word lists used. Given a biased and an unbiased reference embedding model, BSA systematically applies the metric to both models for several subsets of the lists. The variance and rate of convergence of each model's bias values then indicate the robustness of the word lists, whereas the distance between the models' values indicates the general accuracy of the metric with the word lists. We demonstrate the behavior of BSA on two standard embedding models for the three mentioned metrics with several word lists from existing research.
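A minimal sketch of how BSA could be organized, based only on the description above: apply a given bias metric to a biased and an unbiased embedding model for many random subsets of the word lists, then summarize the per-model variance (robustness of the lists) and the gap between the models' mean scores (accuracy of the metric). The function name and the toy metric below are hypothetical stand-ins, not the authors' implementation.

```python
# Sketch: evaluate a bias metric on list subsets for a biased and an unbiased model.
import math
import random
import statistics

def bias_silhouette(metric, biased_model, unbiased_model,
                    target_words, attribute_words,
                    subset_sizes=(4, 8, 16), trials=20, seed=0):
    rng = random.Random(seed)
    scores = {"biased": [], "unbiased": []}
    for size in subset_sizes:
        for _ in range(trials):
            t = rng.sample(target_words, min(size, len(target_words)))
            a = rng.sample(attribute_words, min(size, len(attribute_words)))
            scores["biased"].append(metric(biased_model, t, a))
            scores["unbiased"].append(metric(unbiased_model, t, a))
    variance = {k: statistics.pvariance(v) for k, v in scores.items()}   # list robustness
    gap = statistics.mean(scores["biased"]) - statistics.mean(scores["unbiased"])  # metric accuracy
    return variance, gap

# Toy usage with dummy "models" (dicts of word vectors) and a dummy metric.
def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v)) + 1e-9)

def toy_metric(model, targets, attributes):
    # Average target-attribute similarity; stands in for ECT/RNSB/WEAT.
    return statistics.mean(cosine(model[t], model[a]) for t in targets for a in attributes)

rng = random.Random(1)
words = [f"w{i}" for i in range(30)]
biased = {w: [rng.gauss(0.5, 0.1), rng.gauss(0.5, 0.1)] for w in words}
unbiased = {w: [rng.gauss(0.0, 0.1), rng.gauss(0.0, 0.1)] for w in words}
print(bias_silhouette(toy_metric, biased, unbiased, words[:15], words[15:]))
```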
Many agencies release datasets and statistics about groups of individuals that are used as input to a number of critical decision processes. To conform with privacy and confidentiality requirements, these agencies are often required to release privacy-preserving versions of the data. This paper studies the release of differentially private datasets and analyzes their impact on some critical resource allocation tasks from a fairness perspective. The paper shows that, when the decisions take differentially private data as input, the noise added to achieve privacy disproportionately impacts some groups over others. The paper analyzes the reasons for these disproportionate impacts and proposes guidelines to mitigate these effects. The proposed approaches are evaluated on critical decision problems that use differentially private census data.
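An illustrative sketch of the phenomenon, not of the paper's specific allocation problems: Laplace noise calibrated for epsilon-differential privacy is added to group counts, a fixed budget is allocated proportionally to the noisy counts, and the per-group allocation error is measured; smaller groups typically suffer larger relative distortions.

```python
# Sketch: DP noise on group counts, then proportional allocation of a budget.
import numpy as np

def laplace_counts(counts, epsilon, rng):
    # Sensitivity 1 for a count query under add/remove-one-person neighboring.
    return counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)

def proportional_allocation(counts, budget):
    shares = np.clip(counts, 0, None)
    return budget * shares / shares.sum()

rng = np.random.default_rng(0)
true_counts = np.array([50_000.0, 5_000.0, 500.0])   # large, medium, small group
budget = 1_000_000.0
ideal = proportional_allocation(true_counts, budget)

errors = np.zeros_like(true_counts)
trials = 2_000
for _ in range(trials):
    noisy = laplace_counts(true_counts, epsilon=0.1, rng=rng)
    errors += np.abs(proportional_allocation(noisy, budget) - ideal)
print("mean absolute allocation error per group:", errors / trials)
```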
Recent studies have demonstrated that deep learning models can discriminate based on protected classes like race and gender. In this work, we evaluate bias present in deepfake datasets and detection models across protected subgroups. Using facial datasets balanced by race and gender, we examine three popular deepfake detectors and find large disparities in predictive performance across races, with up to a 10.7% difference in error rate between subgroups. A closer look reveals that the widely used FaceForensics++ dataset is overwhelmingly composed of Caucasian subjects, the majority of them female Caucasians. Our investigation of the racial distribution of deepfakes reveals that the methods used to create deepfakes as positive training signals tend to produce "irregular" faces, e.g., when a person's face is swapped onto a person of a different race or gender. This causes detectors to learn spurious correlations between the foreground faces and fakeness. Moreover, when detectors are trained with the Blended Image (BI) dataset from Face X-Rays, we find that they develop systematic discrimination towards certain racial subgroups, primarily female Asians.
This paper proposes Characteristic Examples for effectively fingerprinting deep neural networks, featuring high robustness to the base model under model pruning as well as low transferability to unassociated models. This is the first work to take both robustness and transferability into consideration when generating realistic fingerprints, whereas current methods rest on assumptions that are not practical and may incur large false positive rates. To achieve a better trade-off between robustness and transferability, we propose three kinds of characteristic examples: vanilla C-examples, RC-examples, and LTRC-examples, to derive fingerprints from the original base model. To fairly characterize the trade-off between robustness and transferability, we propose the Uniqueness Score, a comprehensive metric that measures the difference between robustness and transferability and also serves as an indicator of the false alarm problem. Extensive experiments demonstrate that the proposed characteristic examples achieve superior performance compared with existing fingerprinting methods. In particular, for VGG ImageNet models, LTRC-examples yield a 4X higher Uniqueness Score than the baseline method and do not incur any false positives.
In polymatrix coordination games, each player x is a node of a graph and must select an action from her strategy set. Nodes play separate bimatrix games with their neighbors in the graph; namely, the utility of x is given by the preference she has for her action plus, for each neighbor y, a payoff which strictly depends on the mutual actions played by x and y. We propose the new class of distance polymatrix coordination games, properly generalizing polymatrix coordination games, in which the overall utility of player x further depends on the payoffs arising from the mutual actions of players v, z that are the endpoints of edges at any distance h from x.
In its most traditional setting, the main concern of optimization theory is the search for optimal solutions to instances of a given computational problem. A recent trend of research in artificial intelligence, called solution diversity, has focused on developing notions of optimality that may be more appropriate in settings where subjectivity is essential. The idea is that, instead of aiming at algorithms that output a single optimal solution, the goal is to investigate algorithms that output a small set of sufficiently good solutions that are sufficiently diverse from one another. In this way, the user has the opportunity to choose the solution that is most appropriate to the context at hand, and the richness of the solution space is made visible. When combined with techniques from parameterized complexity theory, the diversity-of-solutions paradigm offers a powerful algorithmic framework for problems of practical relevance. In this work, we investigate the impact of this combination in the field of Kemeny Rank Aggregation (KRA), a well-studied class of problems lying at the intersection of order theory and social choice theory. In particular, we show that KRA is fixed-parameter tractable with respect to parameters that naturally formalize the notions of diversity and of a sufficiently good solution. Our main results hold both in the traditional setting of aggregation over linearly ordered votes and in the more general setting where votes are partially ordered.
We present a new and rich model of school choice with flexible diversity goals and specialized seats. The model also applies to other settings such as public housing allocation with diversity objectives. Our method of expressing flexible diversity goals is also applicable to other settings in moral multi-agent decision making where competing policies need to be balanced when allocating scarce resources. For our matching model, we present a polynomial-time algorithm that satisfies desirable properties, including strategyproofness and stability, under several natural subdomains of our problem. We complement these results by providing a clear understanding of which results do not extend to the general model.
We study the classic problem of fairly allocating a set of indivisible goods among a group of agents, and focus on the notion of approximate proportionality known as PROPm. Prior work showed that there exists an allocation that satisfies this notion of fairness for instances involving up to five agents, but fell short of proving that this is true in general. We extend this result to show that a PROPm allocation is guaranteed to exist for all instances, independent of the number of agents or goods. Our proof is constructive, providing an algorithm that computes such an allocation and, unlike prior work, the running time of this algorithm is polynomial in both the number of agents and the number of goods.
We develop a new framework for designing truthful, high-revenue (combinatorial) auctions for limited supply. Our mechanism learns within an instance. It generalizes and improves over previously-studied random-sampling mechanisms. It first samples a participatory group of bidders, then samples several learning groups of bidders from the remaining pool of bidders, learns a high-revenue auction from the learning groups, and finally runs that auction on the participatory group. Previous work on random-sampling mechanisms focused primarily on unlimited supply. Limited supply poses additional significant technical challenges, since allocations of items to bidders must be feasible. We prove guarantees on the performance of our mechanism based on a market-shrinkage term and a new complexity measure we coin partition discrepancy. Partition discrepancy simultaneously measures the intrinsic complexity of the mechanism class and the uniformity of the set of bidders. We then introduce new auction classes that can be parameterized in a way that does not depend on the number of bidders participating, and prove strong guarantees for these classes. We show how our mechanism can be implemented efficiently by leveraging practically-efficient routines for solving winner determination. Finally, we show how to use structural revenue maximization to decide what auction class to use with our framework when there is a constraint on the number of learning groups.
We consider the problem of the joint selection and allocation of projects to a population of agents, e.g., students are assigned papers and shall present them to their peers. The selection can be constrained either by quotas over subcategories of projects or by the preferences of the agents themselves. We explore fairness and optimality issues and refine the analysis of the rank-maximality and popularity optimality concepts. We show that they are compatible with reasonable fairness requirements related to rank-based envy-freeness and can be adapted to select globally good projects according to the preferences of the agents.
To address the dynamic nature of real-world networks, we generalize competitive diffusion games and Voronoi games from static to temporal graphs, where edges may appear or disappear over time. This establishes a new direction of studies in the area of graph games, motivated by applications such as influence spreading. As a first step, we investigate the existence of Nash equilibria in competitive diffusion and Voronoi games on different temporal graph classes. Even when restricting our studies to temporal paths and cycles, this turns out to be a challenging undertaking, revealing significant differences between the two games in the temporal setting. Notably, both games are equivalent on static paths and cycles. Our two main technical results are (algorithmic) proofs for the existence of Nash equilibria in temporal competitive diffusion and temporal Voronoi games when the edges are restricted not to disappear over time.
We study the parameterized complexity of counting variants of Swap- and Shift-Bribery, focusing on the parameterizations by the number of swaps and the number of voters. Facing several computational hardness results, we turn to sampling and show experimentally that counting Swap-Bribery offers a new approach to the robustness analysis of elections.
In their AAMAS 2020 paper, Szufa et al. presented a "map of elections" that visualizes a set of 800 elections generated from various statistical cultures. While similar elections are grouped together on this map, there is no obvious interpretation of the elections' positions. We provide such an interpretation by introducing four canonical "extreme" elections, acting as a compass on the map. We use them to analyze both a dataset provided by Szufa et al. and a number of real-life elections. In effect, we find a new parameterization of the Mallows model, based on measuring the expected swap distance from the central preference order, and show that it is useful for capturing real-life scenarios.
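For reference, the classical Mallows model draws a vote v with probability decaying in its swap (Kendall tau) distance from a central order v*; the reparameterization described above replaces the dispersion parameter by the expected swap distance it induces. A standard statement of the model (notation ours):

```latex
% Mallows model over rankings of m alternatives with central order v^* and
% dispersion parameter \phi \in (0,1]:
P(v \mid v^*, \phi) \;=\; \frac{\phi^{\,d_{\mathrm{swap}}(v,\, v^*)}}{Z_m(\phi)},
\qquad
Z_m(\phi) \;=\; \prod_{i=1}^{m} \bigl(1 + \phi + \cdots + \phi^{\,i-1}\bigr).
```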
A common theme of decision making in multi-agent systems is to assign utilities to alternatives, which individuals seek to maximize. This rationale is questionable in coalition formation where agents are affected by other members of their coalition. Based on the assumption that agents are benevolent towards other agents they like to form coalitions with, we propose loyalty in hedonic games, a binary relation dependent on agents' utilities. Given a hedonic game, we define a loyal variant where agents' utilities are defined by taking the minimum of their utility and the utilities of agents towards which they are loyal. This process can be iterated to obtain various degrees of loyalty, terminating in a locally egalitarian variant of the original game. We investigate axioms of group stability and efficiency for different degrees of loyalty. Specifically, we consider the problem of finding coalition structures in the core and of computing best coalitions, obtaining both positive and intractability results. In particular, the limit game possesses Pareto optimal coalition structures in the core.
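A small sketch of the loyalty construction as we read it: within a coalition, an agent's loyal utility is the minimum of her own utility and the utilities of the agents she is loyal to, and iterating the construction yields higher degrees of loyalty. The choice of loyalty relation below (being loyal to coalition members one values positively) and the additively separable toy game are our illustrative assumptions, not the paper's exact definitions.

```python
# Sketch: iterated loyal utilities in a toy additively separable hedonic game.
def loyal_utilities(coalition, utility, degree=1):
    """utility[i][j]: value agent i assigns to agent j."""
    # Base utilities within the coalition.
    u = {i: sum(utility[i][j] for j in coalition if j != i) for i in coalition}
    for _ in range(degree):
        # Illustrative loyalty relation: loyal to coalition members valued positively.
        loyal_to = {i: [j for j in coalition if j != i and utility[i][j] > 0] for i in coalition}
        u = {i: min([u[i]] + [u[j] for j in loyal_to[i]]) for i in coalition}
    return u

# Toy example: agent 0 likes 1 and 2, agent 1 likes 0, agent 2 dislikes everyone.
utility = {
    0: {0: 0, 1: 2, 2: 1},
    1: {0: 3, 1: 0, 2: -1},
    2: {0: -2, 1: -1, 2: 0},
}
print(loyal_utilities({0, 1, 2}, utility, degree=1))
print(loyal_utilities({0, 1, 2}, utility, degree=3))  # iterated loyalty
```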
The Shapley value is a well-recognised method for dividing the value of joint effort in cooperative games. However, computing the Shapley value is known to be computationally hard, so stratified sample-based estimation is sometimes used. For this task, we provide two contributions to the state of the art. First, we derive a novel concentration inequality that is tailored to stratified Shapley value estimation using sample variance information. Second, by sequentially choosing samples to minimize our inequality, we develop a new and more efficient method of sampling to estimate the Shapley value. We evaluate our sampling method on a suite of test cooperative games, and our results demonstrate that it outperforms or is competitive with existing stratified sample-based estimation approaches to computing the Shapley value.
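As background (this is not the paper's concentration inequality or its adaptive sample-selection rule), a basic stratified Shapley estimator partitions the marginal contributions of a player by coalition size and averages sampled contributions within each stratum; the weighted voting game below is a toy example.

```python
# Sketch: stratified sampling estimator of a player's Shapley value.
import random

def shapley_stratified(value, players, i, samples_per_stratum=200, seed=0):
    rng = random.Random(seed)
    others = [p for p in players if p != i]
    n = len(players)
    stratum_means = []
    for size in range(n):  # stratum: coalitions S with |S| = size, i not in S
        total = 0.0
        for _ in range(samples_per_stratum):
            S = set(rng.sample(others, size))
            total += value(S | {i}) - value(S)
        stratum_means.append(total / samples_per_stratum)
    # Under the Shapley distribution, each coalition size is equally likely.
    return sum(stratum_means) / n

# Toy weighted voting game: a coalition "wins" (value 1) if its weight reaches the quota.
weights = {0: 4, 1: 3, 2: 2, 3: 1}
quota = 6
value = lambda S: 1.0 if sum(weights[p] for p in S) >= quota else 0.0
players = list(weights)
print([round(shapley_stratified(value, players, i), 3) for i in players])
```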
We study the problem of fairly allocating indivisible items to agents with different entitlements, which captures, for example, the distribution of ministries among political parties in a coalition government. Our focus is on picking sequences derived from common apportionment methods, including five traditional divisor methods and the quota method. We paint a complete picture of these methods in relation to known envy-freeness and proportionality relaxations for indivisible items as well as monotonicity properties with respect to the resource, population, and weights. In addition, we provide characterizations of picking sequences satisfying each of the fairness notions, and show that the well-studied maximum Nash welfare solution fails resource- and population-monotonicity even in the unweighted setting. Our results serve as an argument in favor of using picking sequences in weighted fair division problems.
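A sketch of how a divisor method induces a picking sequence, following the standard construction we believe is meant here (our reading, not the paper's code): the next pick always goes to the agent maximizing her weight divided by the divisor of the number of picks she has already received.

```python
# Sketch: picking sequences derived from divisor methods.
# d(t) = t + 1 gives Jefferson / D'Hondt, d(t) = t + 0.5 gives Webster / Sainte-Lague.
def picking_sequence(weights, num_items, divisor):
    picks = {a: 0 for a in weights}
    order = []
    for _ in range(num_items):
        # Next pick goes to the agent with the largest quotient (ties to fewer picks).
        a = max(weights, key=lambda x: (weights[x] / divisor(picks[x]), -picks[x]))
        order.append(a)
        picks[a] += 1
    return order

weights = {"A": 5, "B": 3, "C": 1}
print(picking_sequence(weights, 9, lambda t: t + 1))    # Jefferson / D'Hondt
print(picking_sequence(weights, 9, lambda t: t + 0.5))  # Webster / Sainte-Lague
```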
We study generalizations of stable matching in which agents may be matched fractionally; this models time-sharing assignments. We focus on the notions of ordinal stability and cardinal stability, and investigate the computational complexity of finding an ordinally or cardinally stable fractional matching that either maximizes the social welfare (i.e., the overall utilities of the agents) or the number of fully matched agents (i.e., agents whose matching values sum up to one). We complete the complexity classification of both optimization problems for both stability notions, distinguishing between the marriage (bipartite) and roommates (non-bipartite) cases and the presence or absence of ties in the preferences. In particular, we prove the surprising result that finding a cardinally stable fractional matching with maximum social welfare is NP-hard even for the marriage case without ties. This answers an open question and exemplifies a rare variant of stable marriage that remains hard for preferences without ties. We also complete the picture of the relations between the stability notions and derive structural properties.
One practical requirement in solving dynamic games is to ensure that the players play well from any decision point onward. To satisfy this requirement, existing efforts focus on equilibrium refinement, but the scalability and applicability of existing techniques are limited. In this paper, we propose Temporal-Induced Self-Play (TISP), a novel reinforcement learning-based framework to find strategies with decent performance from any decision point onward. TISP uses belief-space representation, backward induction, policy learning, and non-parametric approximation. Building upon TISP, we design a policy-gradient-based algorithm, TISP-PG. We prove that TISP-based algorithms can find approximate Perfect Bayesian Equilibrium in zero-sum one-sided stochastic Bayesian games with finite horizon. We test TISP-based algorithms in various games, including finitely repeated security games and a grid-world game. The results show that TISP-PG is more scalable than existing mathematical programming-based methods and significantly outperforms other learning-based methods.
When can cooperation arise from self-interested decisions in public goods games? And how can we help agents to act cooperatively? We examine these classical questions in a pivotal participation game, a variant of public good games, where heterogeneous agents make binary participation decisions on contributing their endowments, and the public project succeeds when it has enough contributions. We prove it is NP-complete to decide the existence of a cooperative Nash equilibrium such that the project succeeds. We demonstrate that the decision problem becomes easy if agents are homogeneous enough. We then propose two algorithms to help cooperation in the game. Our first algorithm adds an external investment to the public project, and our second algorithm uses matching funds. We show the cost to induce a cooperative Nash equilibrium is near-optimal for both algorithms. Finally, the cost of matching funds can always be smaller than the cost of adding an external investment. Intuitively, matching funds provide a greater incentive for cooperation than adding an external investment does.
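A toy sketch of the participation game described above, with a payoff form that is our own assumption (agent i pays her endowment e_i if she contributes and receives a reward r_i whenever total contributions reach the threshold); checking a Nash equilibrium then amounts to testing unilateral deviations.

```python
# Sketch: Nash-equilibrium check in a toy pivotal participation game.
def payoff(i, profile, e, r, threshold):
    contributed = sum(e[j] for j, c in enumerate(profile) if c)
    success = contributed >= threshold
    return (r[i] if success else 0.0) - (e[i] if profile[i] else 0.0)

def is_nash(profile, e, r, threshold):
    for i in range(len(profile)):
        deviation = list(profile)
        deviation[i] = not deviation[i]
        if payoff(i, deviation, e, r, threshold) > payoff(i, profile, e, r, threshold):
            return False
    return True

e, r, threshold = [2.0, 2.0, 3.0], [5.0, 5.0, 1.0], 4.0
print(is_nash([True, True, False], e, r, threshold))   # cooperative profile, project succeeds
print(is_nash([False, False, False], e, r, threshold)) # everyone free-rides, project fails
```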
We study learning dynamics in distributed production economies such as blockchain mining, peer-to-peer file sharing and crowdsourcing. These economies can be modelled as multi-product Cournot competitions or all-pay auctions (Tullock contests) when individual firms have market power, or as Fisher markets with quasi-linear utilities when every firm has negligible influence on market outcomes. In the former case, we provide a formal proof that Gradient Ascent (GA) can be Li-Yorke chaotic for a step size as small as Θ(1/n), where n is the number of firms. In stark contrast, for the Fisher market case, we derive a Proportional Response (PR) protocol that converges to market equilibrium. The positive results on the convergence of the PR dynamics are obtained in full generality, in the sense that they hold for Fisher markets with any quasi-linear utility functions. Conversely, the chaos results for the GA dynamics are established even in the simplest possible setting of two firms and one good, and they hold for a wide range of price functions with different demand elasticities. Our findings suggest that by considering multi-agent interactions from a market rather than a game-theoretic perspective, we can formally derive natural learning protocols which are stable and converge to effective outcomes rather than being chaotic.
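For background, the classical Proportional Response update for a linear Fisher market is sketched below; the paper's protocol handles quasi-linear utilities, which this simplified version does not reproduce. Each buyer re-splits her budget across goods in proportion to the utility she derived from them at the current allocation.

```python
# Sketch: Proportional Response dynamics in a linear Fisher market (unit supply).
import numpy as np

def proportional_response(u, budgets, steps=200):
    n, m = u.shape                                  # n buyers, m goods
    bids = np.outer(budgets, np.ones(m)) / m        # start by spreading budgets evenly
    for _ in range(steps):
        prices = bids.sum(axis=0)                   # p_j = sum_i b_ij
        alloc = bids / prices                       # x_ij = b_ij / p_j
        gains = u * alloc                           # utility buyer i derives from good j
        bids = budgets[:, None] * gains / gains.sum(axis=1, keepdims=True)
    return bids.sum(axis=0), alloc                  # (approximate) equilibrium prices, allocation

u = np.array([[3.0, 1.0],
              [1.0, 2.0]])
budgets = np.array([1.0, 1.0])
prices, alloc = proportional_response(u, budgets)
print(np.round(prices, 3), np.round(alloc, 3))
```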