IJCAI.2021 - AI Ethics, Trust, Fairness

Total: 9

#1 Interacting with Explanations through Critiquing

Authors: Diego Antognini, Claudiu Musat, Boi Faltings

Using personalized explanations to support recommendations has been shown to increase trust and perceived quality. However, to actually obtain better recommendations, there needs to be a means for users to modify the recommendation criteria by interacting with the explanation. We present a novel technique using aspect markers that learns to generate personalized explanations of recommendations from review texts, and we show that human users significantly prefer these explanations over those produced by state-of-the-art techniques. Our work's most important innovation is that it allows users to react to a recommendation by critiquing the textual explanation: removing aspects they dislike or that are no longer relevant, or, symmetrically, adding aspects that are of interest. The system updates its user model and the resulting recommendations according to the critique. This is based on a novel unsupervised critiquing method for single- and multi-step critiquing with textual explanations. Empirical results show that our system achieves good performance in adapting to the preferences expressed in multi-step critiquing and generates consistent explanations.
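
As a rough, hypothetical illustration of the critiquing loop (not the authors' model, which learns to generate explanations from review text using aspect markers), the Python sketch below keeps a keyphrase-style user profile, applies add/remove critiques to it, and re-ranks items; the aspect vocabulary, weights, and scoring rule are all assumptions.

```python
import numpy as np

# Hypothetical aspect vocabulary; in the paper, aspects are learned from review text.
ASPECTS = ["cleanliness", "location", "service", "price", "wifi"]

def update_profile(profile, critique):
    """Apply a critique to a keyphrase-style user profile.

    profile  : dict mapping aspect -> weight (higher = more desired)
    critique : dict with 'add' and 'remove' lists of aspects
    """
    profile = dict(profile)
    for aspect in critique.get("remove", []):
        profile[aspect] = 0.0  # the user no longer cares about this aspect
    for aspect in critique.get("add", []):
        profile[aspect] = profile.get(aspect, 0.0) + 1.0  # boost a newly requested aspect
    return profile

def rank_items(profile, item_aspects):
    """Score items by the overlap between their aspects and the user profile."""
    weights = np.array([profile.get(a, 0.0) for a in ASPECTS])
    scores = {item: float(weights @ np.array([a in aspects for a in ASPECTS], dtype=float))
              for item, aspects in item_aspects.items()}
    return sorted(scores, key=scores.get, reverse=True)

items = {"hotel_a": {"cleanliness", "price"}, "hotel_b": {"wifi", "service"}}
profile = {"wifi": 2.0, "price": 1.0}
print(rank_items(profile, items))                                   # hotel_b ranked first
profile = update_profile(profile, {"remove": ["wifi"], "add": ["cleanliness"]})
print(rank_items(profile, items))                                   # hotel_a ranked first
```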


#2 On Smoother Attributions using Neural Stochastic Differential Equations

Authors: Sumit Jha, Rickard Ewetz, Alvaro Velasquez, Susmit Jha

Several methods have recently been developed for computing attributions of a neural network's prediction over the input features. However, these existing approaches are noisy and not robust to small perturbations of the input. This paper uses the recently identified connection between dynamical systems and residual neural networks to show that the attributions computed over neural stochastic differential equations (SDEs) are less noisy, visually sharper, and quantitatively more robust. Using dynamical systems theory, we theoretically analyze the robustness of these attributions. We also experimentally demonstrate the efficacy of our approach in providing smoother, visually sharper, and quantitatively more robust attributions for ImageNet images using ResNet-50, WideResNet-101, and ResNeXt-101 models.
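
The following minimal PyTorch sketch conveys the general idea of attributing over a stochastic residual network: a toy model injects Gaussian noise into each residual update (loosely mimicking an Euler-Maruyama step of an SDE), and input gradients are averaged over several stochastic forward passes. The architecture, noise scale, and sample count are assumptions; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class NoisyResidualBlock(nn.Module):
    """Residual update perturbed with Gaussian noise, loosely mimicking one
    Euler-Maruyama step of a neural SDE."""
    def __init__(self, dim, sigma=0.1):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
        self.sigma = sigma

    def forward(self, x):
        drift = torch.tanh(self.fc(x))
        noise = self.sigma * torch.randn_like(x)
        return x + drift + noise

class ToyNeuralSDE(nn.Module):
    def __init__(self, dim=8, depth=4, sigma=0.1):
        super().__init__()
        self.blocks = nn.ModuleList([NoisyResidualBlock(dim, sigma) for _ in range(depth)])
        self.head = nn.Linear(dim, 1)

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return self.head(x)

def smoothed_attribution(model, x, n_samples=32):
    """Average plain input gradients over several stochastic forward passes;
    the injected noise acts as a smoother for the attribution map."""
    grads = []
    for _ in range(n_samples):
        x_in = x.clone().requires_grad_(True)
        model(x_in).sum().backward()
        grads.append(x_in.grad.detach())
    return torch.stack(grads).mean(dim=0)

torch.manual_seed(0)
model = ToyNeuralSDE()
x = torch.randn(1, 8)
print(smoothed_attribution(model, x))
```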


#3 Location Predicts You: Location Prediction via Bi-direction Speculation and Dual-level Association

Authors: Xixi Li, Ruimin Hu, Zheng Wang, Toshihiko Yamasaki

Location prediction is of great importance in location-based applications for the construction of the smart city. To our knowledge, existing models for location prediction focus on users' preferences for POIs from the human side. However, modeling users' interests from historical trajectories is still limited by data sparsity. Additionally, most existing methods predict the next location from each user's individual data independently, and data sparsity makes it difficult to mine explicit mobility patterns or capture the casual behavior of each user. To address these issues, we propose a novel Bi-direction Speculation and Dual-level Association method (BSDA), which considers both users' interests in POIs and POIs' appeal to users. Furthermore, we develop cross-user and cross-POI association to alleviate data sparsity by enriching the candidate set with similar users and POIs. Experimental results on two public datasets demonstrate that BSDA achieves significant improvements over state-of-the-art methods.
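
A hypothetical sketch of the cross-user and cross-POI association idea, assuming set-valued visit histories and a precomputed POI similarity list: the candidate set for a user's next POI is enriched with POIs visited by similar users and with POIs similar to those already visited. The similarity measures and data structures are illustrative assumptions, not the BSDA model.

```python
def jaccard(a, b):
    """Similarity between two visit sets."""
    return len(a & b) / max(len(a | b), 1)

def enrich_candidates(user, histories, poi_sim, top_users=2, top_pois=2):
    """Build an enriched next-POI candidate set for `user`.

    histories : dict user -> set of visited POIs
    poi_sim   : dict POI -> list of similar POIs (e.g., by category or geography)
    """
    own = histories[user]
    # Cross-user association: borrow POIs from the most similar users.
    peers = sorted((u for u in histories if u != user),
                   key=lambda u: jaccard(own, histories[u]), reverse=True)[:top_users]
    candidates = set().union(*(histories[u] for u in peers)) if peers else set()
    # Cross-POI association: add POIs similar to the ones already visited.
    for poi in own:
        candidates.update(poi_sim.get(poi, [])[:top_pois])
    return candidates - own

histories = {"u1": {"cafe", "museum"}, "u2": {"cafe", "park"}, "u3": {"museum", "theater"}}
poi_sim = {"museum": ["gallery"], "cafe": ["bakery"]}
print(enrich_candidates("u1", histories, poi_sim))  # POIs u1 has not visited yet
```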


#4 Addressing the Long-term Impact of ML Decisions via Policy Regret

Authors: David Lindner, Hoda Heidari, Andreas Krause

Machine Learning (ML) increasingly informs the allocation of opportunities to individuals and communities in areas such as lending, education, employment, and beyond. Such decisions often impact their subjects' future characteristics and capabilities in an a priori unknown fashion. The decision-maker, therefore, faces exploration-exploitation dilemmas akin to those in multi-armed bandits. Following prior work, we model communities as arms. To capture the long-term effects of ML-based allocation decisions, we study a setting in which the reward from each arm evolves every time the decision-maker pulls that arm. We focus on reward functions that are initially increasing in the number of pulls but may become (and remain) decreasing after a certain point. We argue that an acceptable sequential allocation of opportunities must take an arm's potential for growth into account. We capture these considerations through the notion of policy regret, a much stronger notion than the often-studied external regret, and present an algorithm with provably sub-linear policy regret for sufficiently long time horizons. We empirically compare our algorithm with several baselines and find that it consistently outperforms them, in particular for long time horizons.
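
The setting can be illustrated with a toy simulation (not the authors' algorithm): each arm's expected reward rises with the number of pulls up to a peak and then decays, and a simple greedy policy is compared against the best single arm in hindsight as a crude stand-in for the policy-regret benchmark. The reward shape, noise level, exploration length, and horizon are assumptions.

```python
import numpy as np

def reward(arm_peak, pulls):
    """Expected reward of an arm: increases with the number of pulls up to a
    peak and then decays; a hypothetical stand-in for the rising-then-falling
    reward curves studied in the paper."""
    return pulls * np.exp(1.0 - pulls / arm_peak) / arm_peak

def run_policy(policy, peaks, horizon, rng):
    """Simulate a pulling policy and return its cumulative (noisy) reward."""
    counts = np.zeros(len(peaks), dtype=int)
    total = 0.0
    for t in range(horizon):
        arm = policy(counts, t)
        counts[arm] += 1
        total += reward(peaks[arm], counts[arm]) + 0.01 * rng.standard_normal()
    return total

def greedy(counts, t):
    """Explore round-robin briefly, then pick the best marginal next-pull reward."""
    if t < 10:
        return t % len(peaks)
    return int(np.argmax([reward(p, c + 1) for p, c in zip(peaks, counts)]))

rng = np.random.default_rng(0)
peaks = np.array([20.0, 60.0])  # the second arm keeps growing for longer
horizon = 100

# Benchmark: the single arm with the largest cumulative reward over the horizon.
best_arm = max(range(len(peaks)),
               key=lambda a: sum(reward(peaks[a], n) for n in range(1, horizon + 1)))
benchmark = run_policy(lambda counts, t: best_arm, peaks, horizon, rng)
learner = run_policy(greedy, peaks, horizon, rng)
print("simplified policy regret of the greedy policy:", benchmark - learner)
```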


#5 Multi-Objective Reinforcement Learning for Designing Ethical Environments

Authors: Manel Rodriguez-Soto, Maite Lopez-Sanchez, Juan A. Rodriguez Aguilar

AI research faces the challenge of ensuring that autonomous agents learn to behave ethically, that is, in alignment with moral values. A common approach, based on Reinforcement Learning techniques, is to design environments that incentivise agents to behave ethically. However, to the best of our knowledge, current approaches do not theoretically guarantee that an agent will learn to behave ethically. Here, we make headway in this direction by proposing a novel way of designing environments wherein it is formally guaranteed that an agent learns to behave ethically while pursuing its individual objectives. Our theoretical results are developed within the formal framework of Multi-Objective Reinforcement Learning to ease the handling of an agent's individual and ethical objectives. As a further contribution, we leverage our theoretical results to introduce an algorithm that automates the design of ethical environments.
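
A minimal sketch, assuming a simple linear scalarisation of an individual and an ethical objective: it only illustrates how a sufficiently large ethical weight makes the norm-compliant policy optimal, and is not the paper's environment-design procedure or its formal guarantee. The policies and per-objective returns below are made up.

```python
def scalarise(individual_reward, ethical_reward, w_ethical):
    """Linear scalarisation of the two objectives into a single reward."""
    return individual_reward + w_ethical * ethical_reward

# Two candidate policies, summarised by their expected per-objective returns.
policies = {
    "shortcut":   {"individual": 1.0, "ethical": -1.0},  # fast but violates the norm
    "respectful": {"individual": 0.8, "ethical":  0.0},  # slower but norm-compliant
}

for w in (0.1, 0.5):
    best = max(policies, key=lambda p: scalarise(policies[p]["individual"],
                                                 policies[p]["ethical"], w))
    print(f"w_ethical={w}: optimal policy -> {best}")
```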


#6 Bias Silhouette Analysis: Towards Assessing the Quality of Bias Metrics for Word Embedding Models

Authors: Maximilian Spliethöver, Henning Wachsmuth

Word embedding models reflect bias towards genders, ethnicities, and other social groups present in the underlying training data. Metrics such as ECT, RNSB, and WEAT quantify bias in these models based on predefined word lists representing social groups and bias-conveying concepts. However, how suitable these lists actually are for revealing bias, let alone the suitability of the metrics themselves, remains unclear. In this paper, we study how to assess the quality of bias metrics for word embedding models. In particular, we present a generic method, Bias Silhouette Analysis (BSA), that quantifies the accuracy and robustness of such a metric and of the word lists used. Given a biased and an unbiased reference embedding model, BSA systematically applies the metric to both models for several subsets of the word lists. The variance and rate of convergence of each model's bias values then indicate the robustness of the word lists, whereas the distance between the two models' values indicates the general accuracy of the metric with those lists. We demonstrate the behavior of BSA on two standard embedding models for the three mentioned metrics with several word lists from existing research.
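
The described procedure can be sketched for a generic word-list-based bias metric as follows (a hypothetical implementation, not the authors' code): the metric is repeatedly applied to a biased and an unbiased reference model on random sublists of the target and attribute words, and the per-model variance and the gap between the two models' curves are summarised. The list fractions, number of draws, and toy demo are assumptions.

```python
import numpy as np

def bias_silhouette(metric, biased_model, unbiased_model, targets, attributes,
                    fractions=(0.4, 0.6, 0.8, 1.0), n_draws=20, rng=None):
    """Apply `metric(model, target_words, attribute_words) -> float` to both
    reference models on random sublists; return per-fraction summaries and the
    mean gap between the biased and unbiased curves."""
    rng = rng or np.random.default_rng(0)
    curves = {"biased": [], "unbiased": []}
    for frac in fractions:
        k_t, k_a = max(2, int(frac * len(targets))), max(2, int(frac * len(attributes)))
        for name, model in (("biased", biased_model), ("unbiased", unbiased_model)):
            scores = [metric(model,
                             list(rng.choice(targets, k_t, replace=False)),
                             list(rng.choice(attributes, k_a, replace=False)))
                      for _ in range(n_draws)]
            curves[name].append((frac, float(np.mean(scores)), float(np.std(scores))))
    gap = float(np.mean([b[1] - u[1] for b, u in zip(curves["biased"], curves["unbiased"])]))
    return curves, gap

# Toy demo: dictionaries of random vectors stand in for embedding models, and a
# cosine-style placeholder stands in for ECT/RNSB/WEAT.
words = [f"w{i}" for i in range(10)]
rng = np.random.default_rng(1)
biased = {w: rng.normal(size=8) + 0.5 for w in words}    # shared offset induces "bias"
unbiased = {w: rng.normal(size=8) for w in words}

def toy_metric(model, targets, attributes):
    t = np.mean([model[w] for w in targets], axis=0)
    a = np.mean([model[w] for w in attributes], axis=0)
    return float(t @ a / (np.linalg.norm(t) * np.linalg.norm(a) + 1e-9))

curves, gap = bias_silhouette(toy_metric, biased, unbiased, words[:5], words[5:])
print("mean gap between biased and unbiased curves:", round(gap, 3))
```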


#7 Decision Making with Differential Privacy under a Fairness Lens

Authors: Cuong Tran, Ferdinando Fioretto, Pascal Van Hentenryck, Zhiyan Yao

Many agencies release datasets and statistics about groups of individuals that are used as input to a number of critical decision processes. To conform with privacy and confidentiality requirements, these agencies are often required to release privacy-preserving versions of the data. This paper studies the release of differentially private datasets and analyzes their impact on some critical resource allocation tasks from a fairness perspective. The paper shows that, when decisions take differentially private data as input, the noise added to achieve privacy disproportionately impacts some groups over others. The paper analyzes the reasons for these disproportionate impacts and proposes guidelines to mitigate these effects. The proposed approaches are evaluated on critical decision problems that use differentially private census data.
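
A minimal sketch of the effect under a standard Laplace mechanism, assuming a simple proportional allocation rule: the same additive noise translates into a much larger relative allocation error for small groups. The group sizes, budget, and privacy parameter are illustrative assumptions, not the paper's decision problems.

```python
import numpy as np

def allocate(counts, budget):
    """Allocate a budget proportionally to (possibly noisy) group counts."""
    counts = np.clip(counts, 0.0, None)
    return budget * counts / counts.sum()

rng = np.random.default_rng(0)
true_counts = np.array([50_000.0, 5_000.0, 500.0])  # one small minority group
budget = 1_000_000.0
epsilon = 0.1                                        # privacy-loss parameter

# Laplace mechanism: counting queries have sensitivity 1, so the noise scale is 1/epsilon.
errors = []
for _ in range(1_000):
    noisy = true_counts + rng.laplace(scale=1.0 / epsilon, size=3)
    errors.append(np.abs(allocate(noisy, budget) - allocate(true_counts, budget)))
mean_error = np.mean(errors, axis=0)

# The relative allocation error is far larger for the smallest group, which is
# the kind of disproportionate impact the paper analyses.
print(mean_error / allocate(true_counts, budget))
```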


#8 An Examination of Fairness of AI Models for Deepfake Detection

Authors: Loc Trinh, Yan Liu

Recent studies have demonstrated that deep learning models can discriminate based on protected classes like race and gender. In this work, we evaluate the bias present in deepfake datasets and detection models across protected subgroups. Using facial datasets balanced by race and gender, we examine three popular deepfake detectors and find large disparities in predictive performance across races, with up to a 10.7% difference in error rate between subgroups. A closer look reveals that the widely used FaceForensics++ dataset is overwhelmingly composed of Caucasian subjects, with the majority being female Caucasians. Our investigation of the racial distribution of deepfakes reveals that the methods used to create deepfakes as positive training signals tend to produce "irregular" faces, for example when a person's face is swapped onto another person of a different race or gender. This causes detectors to learn spurious correlations between the foreground faces and fakeness. Moreover, when detectors are trained with the Blended Image (BI) dataset from Face X-Rays, we find that those detectors develop systematic discrimination towards certain racial subgroups, primarily female Asians.
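
The subgroup evaluation can be sketched with synthetic labels as follows; the subgroup names, error probabilities, and detector behaviour are assumptions for illustration, not the paper's measurements.

```python
import numpy as np

def subgroup_error_rates(y_true, y_pred, groups):
    """Error rate of a detector broken down by protected subgroup."""
    return {g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
            for g in np.unique(groups)}

# Toy predictions for a balanced evaluation set with four subgroups.
rng = np.random.default_rng(0)
groups = np.repeat(["M-Caucasian", "F-Caucasian", "M-Asian", "F-Asian"], 250)
y_true = rng.integers(0, 2, size=1000)                    # 1 = fake, 0 = real
flip_prob = np.where(groups == "F-Asian", 0.15, 0.05)     # hypothetical biased detector
y_pred = np.where(rng.random(1000) < flip_prob, 1 - y_true, y_true)

rates = subgroup_error_rates(y_true, y_pred, groups)
print(rates)
print("max disparity:", round(max(rates.values()) - min(rates.values()), 3))
```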


#9 Characteristic Examples: High-Robustness, Low-Transferability Fingerprinting of Neural Networks

Authors: Siyue Wang, Xiao Wang, Pin-Yu Chen, Pu Zhao, Xue Lin

This paper proposes Characteristic Examples for effectively fingerprinting deep neural networks, featuring high robustness to modifications of the base model, such as pruning, as well as low transferability to unassociated models. This is the first work to take both robustness and transferability into consideration when generating realistic fingerprints, whereas current methods rely on impractical assumptions and may incur large false positive rates. To achieve a better trade-off between robustness and transferability, we propose three kinds of characteristic examples, vanilla C-examples, RC-examples, and LTRC-examples, to derive fingerprints from the original base model. To fairly characterize this trade-off, we propose the Uniqueness Score, a comprehensive metric that measures the difference between robustness and transferability and also serves as an indicator of the false alarm problem. Extensive experiments demonstrate that the proposed characteristic examples achieve superior performance compared with existing fingerprinting methods. In particular, for VGG ImageNet models, LTRC-examples give a 4X higher Uniqueness Score than the baseline method and do not incur any false positives.
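
A hedged sketch of a uniqueness-style score, assuming that fingerprint verification is based on label-matching rates: robustness is the matching rate on pruned copies of the base model, transferability the matching rate on unassociated models, and the score is their difference. The toy threshold "models" are hypothetical, and the construction of C-, RC-, and LTRC-examples is not shown.

```python
import numpy as np

def match_rate(fingerprints, model):
    """Fraction of fingerprint examples on which `model` reproduces the base
    model's predicted labels."""
    return float(np.mean([model(x) == y for x, y in fingerprints]))

def uniqueness_score(fingerprints, pruned_variants, unrelated_models):
    """Robustness (matching on pruned copies of the base model) minus
    transferability (matching on unassociated models); higher is better."""
    robustness = np.mean([match_rate(fingerprints, m) for m in pruned_variants])
    transferability = np.mean([match_rate(fingerprints, m) for m in unrelated_models])
    return float(robustness - transferability)

# Toy demo with threshold "models" over scalar inputs.
base = lambda x: int(x > 0.0)
pruned = [lambda x: int(x > 0.05), lambda x: int(x > -0.05)]   # slightly perturbed copies
unrelated = [lambda x: int(x > 0.5), lambda x: int(x < 0.0)]   # independently trained models
fingerprints = [(x, base(x)) for x in np.linspace(-1.0, 1.0, 21)]
print("uniqueness score:", round(uniqueness_score(fingerprints, pruned, unrelated), 3))
```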