IJCAI.2022

Total: 862

#1 Detecting Out-Of-Context Objects Using Graph Contextual Reasoning Network

Authors: Manoj Acharya ; Anirban Roy ; Kaushik Koneripalli ; Susmit Jha ; Christopher Kanan ; Ajay Divakaran

This paper presents an approach for detecting out-of-context (OOC) objects in images. Given an image with a set of objects, our goal is to determine if an object is inconsistent with the contextual relations and detect the OOC object with a bounding box. In this work, we consider common contextual relations such as co-occurrence relations, the relative size of an object with respect to other objects, and the position of the object in the scene. We posit that contextual cues are useful for determining object labels for in-context objects, while inconsistent contextual cues are detrimental to determining object labels for out-of-context objects. To realize this hypothesis, we propose a graph contextual reasoning network (GCRN) to detect OOC objects. GCRN consists of two separate graphs to predict object labels based on the contextual cues in the image: 1) a representation graph to learn object features based on the neighboring objects and 2) a context graph to explicitly capture contextual cues from the neighboring objects. GCRN explicitly captures the contextual cues to improve the detection of in-context objects and identify objects that violate contextual relations. In order to evaluate our approach, we create a large-scale dataset by adding OOC object instances to the COCO images. We also evaluate on the recent OCD benchmark. Our results show that GCRN outperforms competitive baselines in detecting OOC objects and correctly detecting in-context objects. Code and data: https://nusci.csl.sri.com/project/trinity-ooc

#2 Axiomatic Foundations of Explainability

Authors: Leila Amgoud ; Jonathan Ben-Naim

Improving trust in decisions made by classification models is becoming crucial for the acceptance of automated systems, and an important way of doing so is by providing explanations for the behaviour of the models. Different explainers have been proposed in the recent literature for that purpose; however, their formal properties are under-studied. This paper theoretically investigates explainers that provide reasons behind decisions independently of instances. Its contributions are fourfold. The first is to lay the foundations of such explainers by proposing key axioms, i.e., desirable properties they should satisfy. Two of the axioms are incompatible, leading to two subsets of axioms. The second contribution consists of demonstrating that the first subset of axioms characterizes a family of explainers that return sufficient reasons, while the second characterizes a family that provides necessary reasons. This sheds light on the axioms that distinguish the two types of reasons. As a third contribution, the paper introduces various explainers of both families and fully characterizes some of them. Those explainers make use of the whole feature space. The fourth contribution is a family of explainers that generate explanations from finite datasets (subsets of the feature space). This family, seen as an abstraction of Anchors and LIME, violates some of the axioms, including one that prevents incorrect explanations.

#3 On Preferred Abductive Explanations for Decision Trees and Random Forests

Authors: Gilles Audemard ; Steve Bellart ; Louenas Bounia ; Frederic Koriche ; Jean-Marie Lagniez ; Pierre Marquis

Abductive explanations take a central place in eXplainable Artificial Intelligence (XAI) by clarifying with few features the way data instances are classified. However, instances may have exponentially many minimum-size abductive explanations, and this source of complexity holds even for "intelligible" classifiers, such as decision trees. When the number of such abductive explanations is huge, computing only one of them is often not informative enough; in particular, better explanations than the one derived may exist. As a way to circumvent this issue, we propose to leverage a model of the explainee, making his or her preferences about explanations precise, and to compute only preferred explanations. In this paper, several such models are pointed out and discussed. For each model, we present and evaluate an algorithm for computing preferred majoritary reasons, where majoritary reasons are specific abductive explanations suited to random forests. We show that in practice the preferred majoritary reasons for an instance can be far less numerous than its majoritary reasons.
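
As a concrete illustration of what an abductive explanation asserts (and why computing one is nontrivial), the sketch below greedily shrinks a feature subset into an approximate abductive explanation for a scikit-learn decision tree, estimating sufficiency by Monte Carlo completion of the free features. This is not the paper's algorithm; the helper names, the drop order, and the 0.99 sufficiency threshold are illustrative assumptions.

```python
# Approximate abductive explanation for a decision tree (illustrative sketch only).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

def estimate_sufficiency(model, x, fixed, X_pool, n_samples=500, rng=None):
    """Fraction of random completions (free features resampled from the data pool)
    that keep the model's prediction equal to its prediction on x."""
    rng = rng or np.random.default_rng(0)
    target = model.predict(x.reshape(1, -1))[0]
    idx = rng.integers(0, len(X_pool), size=n_samples)
    samples = X_pool[idx].copy()
    samples[:, fixed] = x[fixed]          # clamp the candidate explanation's features
    return np.mean(model.predict(samples) == target)

def greedy_abductive_explanation(model, x, X_pool, threshold=0.99):
    """Start from all features and drop any feature whose removal keeps the
    estimated sufficiency above `threshold`."""
    fixed = list(range(len(x)))
    for f in sorted(fixed, key=lambda f: -abs(x[f])):   # arbitrary drop order
        trial = [g for g in fixed if g != f]
        if trial and estimate_sufficiency(model, x, trial, X_pool) >= threshold:
            fixed = trial
    return fixed

X, y = load_breast_cancer(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
explanation = greedy_abductive_explanation(tree, X[0], X)
print("approximate abductive explanation uses features:", explanation)
```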

#4 Individual Fairness Guarantees for Neural Networks

Authors: Elias Benussi ; Andrea Patane' ; Matthew Wicker ; Luca Laurenti ; Marta Kwiatkowska

We consider the problem of certifying the individual fairness (IF) of feed-forward neural networks (NNs). In particular, we work with the epsilon-delta-IF formulation, which, given a NN and a similarity metric learnt from data, requires that the output difference between any pair of epsilon-similar individuals is bounded by a maximum decision tolerance delta >= 0. Working with a range of metrics, including the Mahalanobis distance, we propose a method to overapproximate the resulting optimisation problem using piecewise-linear functions to lower and upper bound the NN's non-linearities globally over the input space. We encode this computation as the solution of a Mixed-Integer Linear Programming problem and demonstrate that it can be used to compute IF guarantees on four datasets widely used for fairness benchmarking. We show how this formulation can be used to encourage models' fairness at training time by modifying the NN loss, and empirically confirm our approach yields NNs that are orders of magnitude fairer than state-of-the-art methods.
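
To make the epsilon-delta-IF condition concrete, here is a minimal sampling sketch that measures the largest output gap over randomly drawn epsilon-similar inputs under a Mahalanobis metric; unlike the paper's MILP encoding, it can only falsify (not certify) fairness. The toy network, the metric matrix M, and the function name are assumptions.

```python
# Empirical (sampling-based) check of the eps-delta-IF property; not a certificate.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
M = torch.eye(5)                     # Mahalanobis metric (identity = plain L2 here)

def audit_individual_fairness(net, X, M, eps=0.1, n_perturb=200):
    """Sample perturbations with Mahalanobis norm <= eps around each input and
    return the largest observed output gap; a gap above delta falsifies eps-delta-IF."""
    L = torch.linalg.cholesky(torch.linalg.inv(M))   # maps the unit ball to the metric ball
    worst = 0.0
    for x in X:
        d = torch.randn(n_perturb, x.numel())
        d = d / d.norm(dim=1, keepdim=True)          # random directions on the unit sphere
        d = eps * torch.rand(n_perturb, 1) * d @ L.T # random radius, metric-shaped
        gap = (net(x + d) - net(x)).abs().max().item()
        worst = max(worst, gap)
    return worst

X = torch.randn(32, 5)
gap = audit_individual_fairness(net, X, M, eps=0.1)
print(f"largest observed output gap at eps=0.1: {gap:.4f}")
```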

#5 How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations?

Authors: Alvin Chan ; Yew Soon Ong ; Clement Tan

Model robustness is vital for the reliable deployment of machine learning models in real-world applications. Recent studies have shown that data augmentation can lead models to over-rely on features in the low-frequency domain, sacrificing performance against low-frequency corruptions and highlighting a connection between frequency and robustness. Here, we take one step further and study the frequency bias of a model more directly, through the lens of its Jacobians and their implications for model robustness. To achieve this, we propose a Jacobian frequency regularization that encourages models' Jacobians to have a larger ratio of low-frequency components. Through experiments on four image datasets, we show that biasing classifiers towards low (high)-frequency components can bring performance gains against high (low)-frequency corruptions and adversarial perturbations, albeit with a trade-off in performance on low (high)-frequency corruptions. Our approach elucidates a more direct connection between the frequency bias and robustness of deep learning models.
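
A simplified sketch of the general idea (not the paper's exact regularizer): penalize the high-frequency energy of the input gradient, i.e., one row of the Jacobian, so that the Jacobian's spectrum is biased toward low frequencies. The low-pass window, the loss weighting, and the toy model are assumptions.

```python
# Frequency-biased Jacobian regularization, simplified for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def low_freq_energy_ratio(grad, cutoff=0.25):
    """Energy of the centered 2D spectrum inside a square low-frequency window,
    divided by total energy. `grad` has shape (B, C, H, W)."""
    spec = torch.fft.fftshift(torch.fft.fft2(grad), dim=(-2, -1)).abs() ** 2
    B, C, H, W = spec.shape
    h, w = int(H * cutoff), int(W * cutoff)
    low = spec[..., H // 2 - h:H // 2 + h, W // 2 - w:W // 2 + w].sum(dim=(-2, -1))
    return (low / spec.sum(dim=(-2, -1)).clamp_min(1e-12)).mean()

def training_step(model, x, y, lam=0.1):
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # gradient of the predicted-class logit w.r.t. the input (one Jacobian row)
    sel = logits.gather(1, logits.argmax(dim=1, keepdim=True)).sum()
    grad = torch.autograd.grad(sel, x, create_graph=True)[0]
    reg = 1.0 - low_freq_energy_ratio(grad)       # push gradient energy toward low freqs
    return ce + lam * reg

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 32 * 32, 10))
x, y = torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))
loss = training_step(model, x, y)
loss.backward()
print("loss:", float(loss))
```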

#6 Learn to Reverse DNNs from AI Programs Automatically

Authors: Simin Chen ; Hamed Khanpour ; Cong Liu ; Wei Yang

With the private deployment of DNNs on edge devices, the security of on-device DNNs has raised significant concern. To quantify the model leakage risk of on-device DNNs automatically, we propose NNReverse, the first learning-based method that can reverse DNNs from AI programs without domain knowledge. NNReverse trains a representation model to capture the semantics of binary code for DNN layers. By searching for the most similar function in our database, NNReverse infers the layer type of a given function's binary code. To represent assembly instruction semantics precisely, NNReverse uses a finer-grained embedding model that captures the textual and structural semantics of assembly functions.

#7 CAT: Customized Adversarial Training for Improved Robustness

Authors: Minhao Cheng ; Qi Lei ; Pin-Yu Chen ; Inderjit Dhillon ; Cho-Jui Hsieh

Adversarial training has become one of the most effective methods for improving the robustness of neural networks. However, it often suffers from poor generalization on both clean and perturbed data. Current robust training methods typically use a uniform perturbation strength for every sample to generate adversarial examples during training. However, we show that this leads to worse training and generalization error by forcing the prediction to match a one-hot label. In this paper, we therefore propose a new algorithm, named Customized Adversarial Training (CAT), which adaptively customizes the perturbation level and the corresponding label for each training sample in adversarial training. We first show theoretically that the CAT scheme improves generalization. Then, through extensive experiments, we show that the proposed algorithm achieves better clean and robust accuracy than previous adversarial training methods. The full version of this paper is available at https://arxiv.org/abs/2002.06789.
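
The following sketch is inspired by the abstract rather than taken from the paper: each training sample keeps its own perturbation budget, which grows while the model still resists an attack at that budget, and its label is smoothed in proportion to the budget. The FGSM attack, growth rule, and smoothing schedule are assumptions.

```python
# Per-sample adaptive perturbation budget + label smoothing (conceptual sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps.view(-1, 1, 1, 1) * grad.sign()).clamp(0.0, 1.0).detach()

def customized_adversarial_step(model, x, y, eps, num_classes=10,
                                eps_step=0.01, eps_max=0.3, smooth=0.5):
    x_adv = fgsm(model, x, y, eps)
    with torch.no_grad():
        still_correct = model(x_adv).argmax(dim=1) == y
        eps = torch.where(still_correct,
                          (eps + eps_step).clamp(max=eps_max), eps)   # grow budget
    # label smoothing proportional to each sample's current budget
    alpha = (smooth * eps / eps_max).view(-1, 1)
    soft = (1 - alpha) * F.one_hot(y, num_classes).float() + alpha / num_classes
    loss = torch.sum(-soft * F.log_softmax(model(x_adv), dim=1), dim=1).mean()
    return loss, eps

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))
eps = torch.full((8,), 0.03)
loss, eps = customized_adversarial_step(model, x, y, eps)
loss.backward()
```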

#8 PPT: Backdoor Attacks on Pre-trained Models via Poisoned Prompt Tuning

Authors: Wei Du ; Yichun Zhao ; Boqun Li ; Gongshen Liu ; Shilin Wang

Recently, prompt tuning has shown remarkable performance as a new learning paradigm, which freezes pre-trained language models (PLMs) and only tunes some soft prompts. A fixed PLM only needs to be loaded with different prompts to adapt to different downstream tasks. However, the prompts associated with PLMs may carry malicious behaviors, such as backdoors: a victim model can be implanted with a backdoor by using a poisoned prompt. In this paper, we propose to obtain poisoned prompts for PLMs and the corresponding downstream tasks by prompt tuning. We name this Poisoned Prompt Tuning method "PPT". The poisoned prompt creates a shortcut between a specific trigger word and the target label word in the PLM, so the attacker can manipulate the prediction of the entire model with just a small prompt. Our experiments on various text classification tasks show that PPT can achieve a 99% attack success rate with almost no accuracy sacrificed on the original task. We hope this work raises awareness of the possible security threats hidden in prompts.

#9 SoFaiR: Single Shot Fair Representation Learning

Authors: Xavier Gitiaux ; Huzefa Rangwala

To avoid discriminatory uses of their data, organizations can learn to map it into a representation that filters out information related to sensitive attributes. However, all existing methods in fair representation learning generate a fairness-information trade-off, and to achieve different points on the fairness-information plane, one must train different models. In this paper, we first demonstrate that fairness-information trade-offs are fully characterized by rate-distortion trade-offs. We then use this key result to propose SoFaiR, a single-shot fair representation learning method that generates, with one trained model, many points on the fairness-information plane. Besides its computational savings, our single-shot approach is, to the best of our knowledge, the first fair representation learning method that explains what information is affected by changes in the fairness / distortion properties of the representation. Empirically, we find on three datasets that SoFaiR achieves fairness-information trade-offs similar to those of its multi-shot counterparts.

#10 Fairness without the Sensitive Attribute via Causal Variational Autoencoder

Authors: Vincent Grari ; Sylvain Lamprier ; Marcin Detyniecki

In recent years, most fairness strategies in machine learning have focused on mitigating unwanted biases by assuming that the sensitive information is available. However, in practice this is not always the case: for privacy reasons and because of regulations such as the GDPR in the EU, many personal sensitive attributes are frequently not collected. Yet only a few prior works address the issue of mitigating bias in this difficult setting, in particular with respect to classical fairness objectives such as Demographic Parity and Equalized Odds. By leveraging recent developments in approximate inference, we propose in this paper an approach to fill this gap. To infer a proxy for the sensitive information, we introduce a new variational auto-encoding-based framework named SRCVAE that relies on knowledge of the underlying causal graph. The bias mitigation is then performed with an adversarial fairness approach. Our proposed method empirically achieves significant improvements over existing works in the field. We observe that the generated proxy's latent space correctly recovers the sensitive information, and that our approach achieves a higher accuracy while obtaining the same level of fairness on two real datasets.

#11 Taking Situation-Based Privacy Decisions: Privacy Assistants Working with Humans

Authors: Nadin Kökciyan ; Pinar Yolum

Privacy on the Web is typically managed by giving consent to individual Websites for various aspects of data usage. This paradigm requires too much human effort and thus is impractical for Internet of Things (IoT) applications where humans interact with many new devices on a daily basis. Ideally, software privacy assistants can help by making privacy decisions in different situations on behalf of the users. To realize this, we propose an agent-based model for a privacy assistant. The model identifies the contexts that a situation implies and computes the trustworthiness of these contexts. Contrary to traditional trust models that capture trust in an entity by observing a large number of interactions, our proposed model can assess the trustworthiness even if the user has not interacted with the particular device before. Moreover, our model can decide which situations are inherently ambiguous and thus can request the human to make the decision. We evaluate various aspects of the model using a real-life data set and report adjustments that are needed to serve different types of users well.

#12 Model Stealing Defense against Exploiting Information Leak through the Interpretation of Deep Neural Nets

Authors: Jeonghyun Lee ; Sungmin Han ; Sangkyun Lee

Model stealing techniques allow adversaries to create attack models that mimic the functionality of black-box machine learning models, querying only class membership or probability outcomes. Recently, interpretable AI is getting increasing attention, to enhance our understanding of AI models, provide additional information for diagnoses, or satisfy legal requirements. However, it has been recently reported that providing such additional information can make AI models more vulnerable to model stealing attacks. In this paper, we propose DeepDefense, the first defense mechanism that protects an AI model against model stealing attackers exploiting both class probabilities and interpretations. DeepDefense uses a misdirection model to hide the critical information of the original model against model stealing attacks, with minimal degradation on both the class probability and the interpretability of prediction output. DeepDefense is highly applicable for any model stealing scenario since it makes minimal assumptions about the model stealing adversary. In our experiments, DeepDefense shows significantly higher defense performance than the existing state-of-the-art defenses on various datasets and interpreters.

#13 Investigating and Explaining the Frequency Bias in Image Classification

Authors: Zhiyu Lin ; Yifei Gao ; Jitao Sang

CNNs exhibit many behaviors that differ from humans, one of which is the capability of exploiting high-frequency components. This paper discusses the frequency bias phenomenon in image classification tasks: the high-frequency components are actually exploited much less than the low- and mid-frequency components. We first investigate the frequency bias phenomenon by presenting two observations on feature discrimination and learning priority. Furthermore, we hypothesize that (1) the spectral density and (2) the class consistency directly affect the frequency bias. Specifically, our investigations verify that the spectral density of datasets mainly affects the learning priority, while the class consistency mainly affects the feature discrimination.
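
An illustrative probe in the spirit of this analysis (not the paper's own experiments): split images into low- and high-frequency bands with a radial mask in the Fourier domain and compare a classifier's accuracy on each band. The mask radius and the hypothetical `model.predict` usage are assumptions.

```python
# FFT-based low/high frequency split for probing frequency reliance.
import numpy as np

def frequency_split(images, radius_frac=0.15):
    """images: (N, H, W, C) floats. Returns (low_pass, high_pass) via a centered
    circular mask in the 2D Fourier domain, applied per channel."""
    N, H, W, C = images.shape
    yy, xx = np.mgrid[:H, :W]
    dist = np.sqrt((yy - H / 2) ** 2 + (xx - W / 2) ** 2)
    mask = (dist <= radius_frac * min(H, W))[None, :, :, None]   # low-frequency disk
    spec = np.fft.fftshift(np.fft.fft2(images, axes=(1, 2)), axes=(1, 2))
    low = np.fft.ifft2(np.fft.ifftshift(spec * mask, axes=(1, 2)), axes=(1, 2)).real
    high = np.fft.ifft2(np.fft.ifftshift(spec * ~mask, axes=(1, 2)), axes=(1, 2)).real
    return low, high

# Hypothetical usage with any classifier exposing `predict`:
#   low, high = frequency_split(test_images)
#   acc_low  = (model.predict(low).argmax(1)  == labels).mean()
#   acc_high = (model.predict(high).argmax(1) == labels).mean()
#   A large gap between acc_low and acc_high indicates a frequency bias.
images = np.random.rand(2, 32, 32, 3)
low, high = frequency_split(images)
print(np.allclose(low + high, images, atol=1e-8))   # the two bands sum back to the input
```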

#14 AttExplainer: Explain Transformer via Attention by Reinforcement Learning

Authors: Runliang Niu ; Zhepei Wei ; Yan Wang ; Qi Wang

Transformer and its variants, built on attention mechanisms, have recently achieved remarkable performance in many NLP tasks. Most existing works on Transformer explanation tend to reveal and utilize the attention matrix with human subjective intuitions in a qualitative manner. However, the huge dimensionality makes it difficult for these methods to analyze the attention matrix quantitatively. Therefore, in this paper, we propose a novel reinforcement learning (RL) based framework for Transformer explanation via the attention matrix, named AttExplainer. The RL agent learns to perform step-by-step masking operations by observing the change in attention matrices. We have adapted our method to two scenarios: perturbation-based model explanation and text adversarial attack. Experiments on three widely used text classification benchmarks validate the effectiveness of the proposed method compared to state-of-the-art baselines. Additional studies show that our method is highly transferable and consistent with human intuition. The code of this paper is available at https://github.com/niuzaisheng/AttExplainer .

#15 Counterfactual Interpolation Augmentation (CIA): A Unified Approach to Enhance Fairness and Explainability of DNN

Authors: Yao Qiang ; Chengyin Li ; Marco Brocanelli ; Dongxiao Zhu

Bias in the training data can jeopardize fairness and explainability of deep neural network prediction on test data. We propose a novel bias-tailored data augmentation approach, Counterfactual Interpolation Augmentation (CIA), attempting to debias the training data by d-separating the spurious correlation between the target variable and the sensitive attribute. CIA generates counterfactual interpolations along a path simulating the distribution transitions between the input and its counterfactual example. CIA as a pre-processing approach enjoys two advantages: First, it couples with either plain training or debiasing training to markedly increase fairness over the sensitive attribute. Second, it enhances the explainability of deep neural networks by generating attribution maps via integrating counterfactual gradients. We demonstrate the superior performance of the CIA-trained deep neural network models using qualitative and quantitative experimental results. Our code is available at: https://github.com/qiangyao1988/CIA
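
A minimal sketch of the attribution step only (CIA's counterfactual generator is not shown): integrating input gradients along a straight path from a given counterfactual to the input reduces to integrated gradients with the counterfactual as the baseline. The linear path, step count, and toy model are assumptions.

```python
# Counterfactual-gradient attribution along a linear interpolation path (sketch).
import torch
import torch.nn as nn

def counterfactual_gradient_attribution(model, x, x_cf, target, steps=32):
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    path = x_cf + alphas * (x - x_cf)            # interpolations between x_cf and x
    path.requires_grad_(True)
    score = model(path)[:, target].sum()
    grads = torch.autograd.grad(score, path)[0]
    return (x - x_cf) * grads.mean(dim=0)        # average gradient times displacement

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(8)
x_cf = torch.randn(8)                            # counterfactual assumed to be given
attr = counterfactual_gradient_attribution(model, x, x_cf, target=1)
print(attr)
```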

#16 BayCon: Model-agnostic Bayesian Counterfactual Generator

Authors: Piotr Romashov ; Martin Gjoreski ; Kacper Sokol ; Maria Vanina Martinez ; Marc Langheinrich

Generating counterfactuals to discover hypothetical predictive scenarios is the de facto standard for explaining machine learning models and their predictions. However, building a counterfactual explainer that is time-efficient, scalable, and model-agnostic, in addition to being compatible with continuous and categorical attributes, remains an open challenge. To complicate matters even more, ensuring that the contrastive instances are optimised for feature sparsity, remain close to the explained instance, and are not drawn from outside of the data manifold, is far from trivial. To address this gap we propose BayCon: a novel counterfactual generator based on probabilistic feature sampling and Bayesian optimisation. Such an approach can combine multiple objectives by employing a surrogate model to guide the counterfactual search. We demonstrate the advantages of our method through a collection of experiments based on six real-life datasets representing three regression tasks and three classification tasks.
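
A bare-bones counterfactual search in the same spirit, without the Bayesian surrogate: candidates are sampled around the instance and scored by a combined objective of prediction flip, proximity, and sparsity. The sampling scheme and objective weights are illustrative assumptions, not BayCon's.

```python
# Sampling-based counterfactual search with a proximity + sparsity objective (sketch).
import numpy as np
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier

def search_counterfactual(model, x, X_ref, n_candidates=2000,
                          w_dist=1.0, w_sparse=0.2, rng=None):
    rng = rng or np.random.default_rng(0)
    scale = X_ref.std(axis=0) + 1e-12
    orig = model.predict(x.reshape(1, -1))[0]
    # perturb a random subset of features per candidate
    cand = np.tile(x, (n_candidates, 1))
    mask = rng.random(cand.shape) < 0.3
    cand[mask] = (x + rng.normal(scale=scale, size=cand.shape))[mask]
    flipped = model.predict(cand) != orig
    if not flipped.any():
        return None
    cand = cand[flipped]
    dist = np.abs((cand - x) / scale).mean(axis=1)          # normalized proximity
    sparsity = (np.abs(cand - x) > 1e-9).mean(axis=1)       # fraction of changed features
    return cand[np.argmin(w_dist * dist + w_sparse * sparsity)]

X, y = load_wine(return_X_y=True)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
cf = search_counterfactual(clf, X[0], X)
print("found counterfactual:" if cf is not None else "no counterfactual found", cf)
```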

#17 What Does My GNN Really Capture? On Exploring Internal GNN Representations

Authors: Luca Veyrin-Forrer ; Ataollah Kamal ; Stefan Duffner ; Marc Plantevit ; Céline Robardet

Graph Neural Networks (GNNs) are very efficient at classifying graphs, but their internal functioning is opaque, which limits their field of application. Existing methods to explain GNNs focus on disclosing the relationships between input graphs and model decisions. In this article, we propose a method that goes further and isolates the internal features, hidden in the network layers, that are automatically identified by the GNN and used in the decision process. We show that this method makes it possible to identify the parts of the input graphs used by the GNN with much less bias than SOTA methods, and thus to bring confidence to the decision process.

#18 Shielding Federated Learning: Robust Aggregation with Adaptive Client Selection

Authors: Wei Wan ; Shengshan Hu ; Jianrong Lu ; Leo Yu Zhang ; Hai Jin ; Yuanyuan He

Federated learning (FL) enables multiple clients to collaboratively train an accurate global model while protecting clients' data privacy. However, FL is susceptible to Byzantine attacks from malicious participants. Although the problem has gained significant attention, existing defenses have several flaws: the server irrationally chooses malicious clients for aggregation even after they have been detected in previous rounds, and the defenses are ineffective against Sybil attacks or in the heterogeneous data setting. To overcome these issues, we propose MAB-RFL, a new method for robust aggregation in FL. By modelling client selection as an extended multi-armed bandit (MAB) problem, we propose an adaptive client selection strategy to choose honest clients that are more likely to contribute high-quality updates. We then propose two approaches to identify malicious updates from Sybil and non-Sybil attacks, based on which the reward for each client selection decision can be accurately evaluated to discourage malicious behaviors. MAB-RFL achieves a satisfactory balance between exploration and exploitation over the potentially benign clients. Extensive experimental results show that MAB-RFL outperforms existing defenses in three attack scenarios under different percentages of attackers.
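
A toy sketch of the bandit view only (not MAB-RFL itself): each client is an arm selected by a UCB score, and the reward here is a stand-in, namely the agreement of a client's update with the round's aggregate. The reward design, exploration weight, and simulated updates are assumptions.

```python
# UCB-style client selection for federated rounds (conceptual toy).
import numpy as np

class UCBClientSelector:
    def __init__(self, n_clients, c=2.0):
        self.counts = np.zeros(n_clients)
        self.values = np.zeros(n_clients)
        self.c, self.t = c, 0

    def select(self, k):
        self.t += 1
        ucb = self.values + self.c * np.sqrt(np.log(self.t + 1) / (self.counts + 1e-9))
        return np.argsort(-ucb)[:k]

    def update(self, client, reward):
        self.counts[client] += 1
        self.values[client] += (reward - self.values[client]) / self.counts[client]

rng = np.random.default_rng(0)
n_clients, dim = 20, 50
malicious = set(range(15, 20))                    # last five clients send poisoned updates
selector = UCBClientSelector(n_clients)
for rnd in range(100):
    chosen = selector.select(k=5)
    updates = np.array([(-1 if i in malicious else 1) * rng.normal(1.0, 0.3, dim)
                        for i in chosen])
    aggregate = updates.mean(axis=0)
    for i, u in zip(chosen, updates):             # reward = cosine agreement with aggregate
        cos = u @ aggregate / (np.linalg.norm(u) * np.linalg.norm(aggregate) + 1e-12)
        selector.update(i, (cos + 1) / 2)
print("estimated client quality:", np.round(selector.values, 2))
```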

#19 Anti-Forgery: Towards a Stealthy and Robust DeepFake Disruption Attack via Adversarial Perceptual-aware Perturbations

Authors: Run Wang ; Ziheng Huang ; Zhikai Chen ; Li Liu ; Jing Chen ; Lina Wang

DeepFake is becoming a real risk to society and brings potential threats to both individual privacy and political security, because DeepFaked multimedia is realistic and convincing. However, popular passive DeepFake detection is an ex-post forensics countermeasure and fails to block the spread of disinformation in advance. To address this limitation, researchers have studied proactive defense techniques that add adversarial noise to the source data to disrupt DeepFake manipulation. However, existing proactive DeepFake defenses via injected adversarial noise are not robust and can be easily bypassed by simple image reconstruction, as revealed in the recent study MagDR. In this paper, we investigate the vulnerability of existing forgery techniques and propose a novel anti-forgery technique that helps users protect their shared facial images from attackers who are capable of applying popular forgery techniques. Our proposed method generates perceptual-aware perturbations in an incessant manner, which is vastly different from prior studies that add sparse adversarial noise. Experimental results reveal that our perceptual-aware perturbations are robust to diverse image transformations, especially the competitive evasion technique MagDR based on image reconstruction. Our findings potentially open up a new research direction towards a thorough understanding and investigation of perceptual-aware adversarial attacks for protecting facial images against DeepFakes in a proactive and robust manner. Code is available at https://github.com/AbstractTeen/AntiForgery.

#20 Cluster Attack: Query-based Adversarial Attacks on Graph with Graph-Dependent Priors

Authors: Zhengyi Wang ; Zhongkai Hao ; Ziqiao Wang ; Hang Su ; Jun Zhu

While deep neural networks have achieved great success in graph analysis, recent work has shown that they are vulnerable to adversarial attacks. Compared with adversarial attacks on image classification, performing adversarial attacks on graphs is more challenging because of the discrete and non-differentiable nature of a graph's adjacency matrix. In this work, we propose Cluster Attack, a Graph Injection Attack (GIA) on node classification, which injects fake nodes into the original graph to degrade the performance of graph neural networks (GNNs) on certain victim nodes while affecting the other nodes as little as possible. We demonstrate that a GIA problem can be equivalently formulated as a graph clustering problem; thus, the discrete optimization problem over the adjacency matrix can be solved in the context of graph clustering. In particular, we propose to measure the similarity between victim nodes by a metric of Adversarial Vulnerability, which is related to how the victim nodes will be affected by the injected fake node, and to cluster the victim nodes accordingly. Our attack is performed in a practical and unnoticeable query-based black-box manner, with access to only a few nodes of the graph. Theoretical analysis and extensive experiments demonstrate the effectiveness of our method in fooling node classifiers with only a small number of queries.

#21 MetaFinger: Fingerprinting the Deep Neural Networks with Meta-training

Authors: Kang Yang ; Run Wang ; Lina Wang

As deep neural networks (DNNs) play a critical role in various fields, the models themselves are becoming an important asset that needs to be protected. To achieve this, various neural network fingerprinting methods have been proposed. However, existing methods fingerprint the decision boundary with adversarial examples, which is not robust to model modification or adversarial defenses. To fill this gap, we propose a robust fingerprinting method, MetaFinger, which fingerprints the inner decision area of the model by meta-training, rather than the decision boundary. Specifically, we first generate many shadow models with DNN augmentation as meta-data. Then we optimize some images by meta-training to ensure that only models derived from the protected model can recognize them. To demonstrate the robustness of our fingerprint approach, we evaluate our method against two types of attacks: input modification and model modification. Experiments show that our method achieves 99.34% and 97.69% query accuracy on average, surpassing existing methods by over 30% and 25% on CIFAR-10 and Tiny-ImageNet, respectively. Our code is available at https://github.com/kangyangWHU/MetaFinger.

#22 Approximately EFX Allocations for Indivisible Chores

Authors: Shengwei Zhou ; Xiaowei Wu

In this paper we study how to fairly allocate a set of m indivisible chores to a group of n agents, each of which has a general additive cost function over the items. Since an envy-free (EF) allocation is not guaranteed to exist, we consider the notion of envy-freeness up to any item (EFX). In contrast to the fruitful results regarding (approximations of) EFX allocations for goods, very little is known for the allocation of chores. Prior to our work, it was known that for chores, EFX allocations always exist for two agents, or for any number of agents with identical-ordering cost functions. For general instances, no non-trivial approximation result regarding EFX allocations was known. In this paper we make progress in this direction by showing that for three agents we can always compute a 5-approximation of an EFX allocation in polynomial time. For n >= 4 agents, our algorithm always computes an allocation that achieves an approximation ratio of 3n^2 with respect to EFX. We also study bi-valued instances, in which agents have at most two cost values on the chores, and provide polynomial-time algorithms for the computation of an EFX allocation when n = 3, and of an (n-1)-approximation of an EFX allocation when n >= 4.
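
To make the approximation notion concrete, the following helper checks whether a given chore allocation is alpha-EFX: after removing any single chore from her own bundle, no agent's remaining cost may exceed alpha times her cost for any other agent's bundle. The checker and the toy instance are illustrative, not part of the paper's algorithms.

```python
# Checker for alpha-EFX allocations of chores (additive costs).
def is_alpha_efx_for_chores(costs, allocation, alpha=1.0):
    """costs[i][c]: agent i's cost for chore c; allocation[i]: list of chores of agent i."""
    for i, bundle_i in enumerate(allocation):
        for j, bundle_j in enumerate(allocation):
            if i == j:
                continue
            cost_j = sum(costs[i][c] for c in bundle_j)       # i's cost for j's bundle
            for c in bundle_i:
                remaining = sum(costs[i][d] for d in bundle_i if d != c)
                if remaining > alpha * cost_j + 1e-12:
                    return False
    return True

# Toy instance: 3 agents, 5 chores, additive costs.
costs = [[4, 1, 2, 3, 5],
         [2, 2, 2, 4, 1],
         [3, 5, 1, 1, 2]]
allocation = [[4], [0, 1], [2, 3]]
print(is_alpha_efx_for_chores(costs, allocation, alpha=1.0))   # False: exact EFX fails
print(is_alpha_efx_for_chores(costs, allocation, alpha=5.0))   # True: 5-approximate EFX
```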

#23 Anytime Capacity Expansion in Medical Residency Match by Monte Carlo Tree Search

Authors: Kenshi Abe ; Junpei Komiyama ; Atsushi Iwasaki

This paper considers the capacity expansion problem in two-sided matching, where the policymaker is allowed to allocate some extra seats in addition to the standard seats. In medical residency match, each hospital accepts a limited number of doctors, and such capacity constraints are typically given in advance. However, these exogenous constraints can compromise the welfare of the doctors: some popular hospitals inevitably dismiss some of their favorite doctors. Meanwhile, it is often the case that hospitals also benefit from accepting a few extra doctors. To tackle the problem, we propose an anytime method in which an upper confidence tree searches the space of capacity expansions, each of which is associated with the resident-optimal stable assignment found by the deferred acceptance method. Constructing a good search tree representation significantly boosts the performance of the proposed method. Our simulations show that the proposed method identifies an almost optimal capacity expansion with a significantly smaller computational budget than exact methods based on mixed-integer programming.
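
A sketch of the deferred-acceptance subroutine that each node of the search relies on (the upper confidence tree over capacity vectors is not shown): doctors propose down their preference lists and hospitals keep their best tentative doctors up to capacity. The toy preferences are made up; the second call illustrates expanding one hospital's capacity by an extra seat.

```python
# Doctor-proposing deferred acceptance with hospital capacities (illustrative sketch).
def deferred_acceptance(doctor_prefs, hospital_prefs, capacity):
    """doctor_prefs[d]: hospitals in preference order; hospital_prefs[h]: doctors in
    preference order; capacity[h]: number of seats at hospital h."""
    rank = {h: {d: r for r, d in enumerate(prefs)} for h, prefs in hospital_prefs.items()}
    next_choice = {d: 0 for d in doctor_prefs}
    matched = {h: [] for h in hospital_prefs}
    free = list(doctor_prefs)
    while free:
        d = free.pop()
        if next_choice[d] >= len(doctor_prefs[d]):
            continue                                  # d has exhausted her list
        h = doctor_prefs[d][next_choice[d]]
        next_choice[d] += 1
        matched[h].append(d)
        matched[h].sort(key=lambda x: rank[h][x])     # best doctors first
        if len(matched[h]) > capacity[h]:
            free.append(matched[h].pop())             # reject the worst, who re-proposes
    return matched

doctor_prefs = {"d1": ["h1", "h2"], "d2": ["h1", "h2"], "d3": ["h1", "h2"]}
hospital_prefs = {"h1": ["d1", "d2", "d3"], "h2": ["d2", "d3", "d1"]}
print(deferred_acceptance(doctor_prefs, hospital_prefs, {"h1": 1, "h2": 1}))
print(deferred_acceptance(doctor_prefs, hospital_prefs, {"h1": 2, "h2": 1}))  # one extra seat
```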

#24 Socially Intelligent Genetic Agents for the Emergence of Explicit Norms

Authors: Rishabh Agrawal ; Nirav Ajmeri ; Munindar Singh

Norms help regulate a society. Norms may be explicit (represented in structured form) or implicit. We address the emergence of explicit norms by developing agents who provide and reason about explanations for norm violations in deciding sanctions and identifying alternative norms. These agents use a genetic algorithm to produce norms and reinforcement learning to learn the values of these norms. We find that applying explanations leads to norms that provide better cohesion and goal satisfaction for the agents. Our results are stable for societies with differing attitudes of generosity.

#25 An EF2X Allocation Protocol for Restricted Additive Valuations

Authors: Hannaneh Akrami ; Rojin Rezvan ; Masoud Seddighin

We study the problem of fairly allocating a set of indivisible goods to a set of n agents. The envy-freeness up to any good (EFX) criterion, which requires that no agent prefers the bundle of another agent after the removal of any single good, is known to be a remarkable analogue of envy-freeness when the resource is a set of indivisible goods. In this paper, we investigate EFX for restricted additive valuations, that is, every good has a non-negative value, and every agent is interested in only some of the goods. We introduce a natural relaxation of EFX called EFkX, which requires that no agent envies another agent after the removal of any k goods. Our main contribution is an algorithm that finds a complete (i.e., no good is discarded) EF2X allocation for restricted additive valuations. In our algorithm we devise new concepts, namely configuration and envy-elimination, that might be of independent interest. We also use our new tools to find an EFX allocation for restricted additive valuations that discards at most n/2 - 1 goods.
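
A small checker that makes the EFkX relaxation concrete for goods: agent i must not envy agent j once any k goods are removed from j's bundle, so EF1X coincides with EFX and EF2X removes two goods. The checker and the toy instance, in which EFX fails but EF2X holds, are illustrative and not part of the paper's protocol.

```python
# Checker for EFkX allocations of goods (additive valuations).
from itertools import combinations

def is_efkx(values, allocation, k):
    """values[i][g]: agent i's value for good g; allocation[i]: goods held by agent i."""
    def v(i, bundle):
        return sum(values[i][g] for g in bundle)
    for i, bundle_i in enumerate(allocation):
        for j, bundle_j in enumerate(allocation):
            if i == j or len(bundle_j) <= k:
                continue                              # removing k goods empties the bundle
            for removed in combinations(bundle_j, k):
                if v(i, bundle_i) < v(i, set(bundle_j) - set(removed)) - 1e-12:
                    return False
    return True

# Toy instance: 2 agents, 4 goods.
values = [[4, 3, 3, 3],
          [1, 5, 5, 5]]
allocation = [[0], [1, 2, 3]]
print(is_efkx(values, allocation, k=1))   # False: EFX (= EF1X) is violated
print(is_efkx(values, allocation, k=2))   # True: the same allocation is EF2X
```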