IJCAI.2024 - Game Theory and Economic Paradigms

Total: 47

#1 Optimizing Viscous Democracy

Authors: Ben Armstrong ; Shiri Alouf-Heffetz ; Nimrod Talmon

Viscous democracy is a generalization of liquid democracy, a social choice framework in which voters may transitively delegate their votes. In viscous democracy, a "viscosity" factor decreases the weight of a delegation the further it travels, reducing the chance of excessive weight flowing between ideologically misaligned voters. We demonstrate that viscous democracy often significantly improves the quality of group decision-making over liquid democracy. We first show that finding optimal delegations within a viscous setting is NP-hard. However, simulations allow us to explore the practical effects of viscosity. Across social network structures, competence distributions, and delegation mechanisms, we find that high viscosity reduces the chance of "super-voters" attaining large amounts of weight and increases the number of voters that are able to affect the outcome of elections. This, in turn, improves group accuracy as a whole. As a result, we argue that viscosity should be considered a core component of liquid democracy.
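
A minimal sketch of how viscosity-discounted weights could be computed, assuming acyclic delegations in which each voter either casts a ballot or delegates to one other voter, and assuming a delegation travelling d hops contributes alpha**d weight (alpha = 1 recovers plain liquid democracy); the function and example are illustrative, not the paper's algorithm.

```python
# Minimal sketch (not the paper's algorithm): each voter either casts a ballot
# or delegates to exactly one other voter, delegations are assumed acyclic, and
# a delegation travelling d hops contributes alpha**d weight to the casting
# voter; alpha = 1 recovers plain liquid democracy.

def casting_weights(delegations, alpha):
    """delegations: dict mapping a voter to the voter they delegate to."""
    weights = {}
    for voter in set(delegations) | set(delegations.values()):
        current, hops = voter, 0
        while current in delegations:        # follow the delegation chain
            current, hops = delegations[current], hops + 1
        weights[current] = weights.get(current, 0.0) + alpha ** hops
    return weights

# c delegates to b, b delegates to a, so a casts with weight 1 + alpha + alpha**2.
print(casting_weights({"b": "a", "c": "b"}, alpha=0.5))   # {'a': 1.75}
```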

#2 Facility Location Problems with Capacity Constraints: Two Facilities and Beyond

Authors: Gennaro Auricchio ; Zihe Wang ; Jie Zhang

In this paper, we investigate the Mechanism Design aspects of the m-Capacitated Facility Location Problem (m-CFLP) on a line. We focus on two frameworks. In the first framework, the number of facilities is arbitrary, all facilities have the same capacity, and the number of agents is equal to the total capacity of all facilities. In the second framework, we aim to place two facilities, each with a capacity of at least half of the total agents. For both of these frameworks, we propose truthful mechanisms with bounded approximation ratios with respect to the Social Cost (SC) and the Maximum Cost (MC). When m>2, the result sharply contrasts with the impossibility results known for the classic m-Facility Location Problem, where capacity constraints are not considered. Furthermore, all our mechanisms are optimal with respect to the MC and optimal or nearly optimal with respect to the SC among anonymous mechanisms. For both frameworks, we provide a lower bound on the approximation ratio that any truthful and deterministic mechanism can achieve with respect to the SC and MC.

#3 Welfare Loss in Connected Resource Allocation

Authors: Xiaohui Bei ; Alexander Lam ; Xinhang Lu ; Warut Suksompong

We study the allocation of indivisible goods that form an undirected graph and investigate the worst-case welfare loss when requiring that each agent must receive a connected subgraph. Our focus is on both egalitarian and utilitarian welfare. Specifically, we introduce the concept of egalitarian (resp., utilitarian) price of connectivity, which captures the worst-case ratio between the optimal egalitarian (resp., utilitarian) welfare among all allocations and that among the connected allocations. We provide tight or asymptotically tight bounds on the price of connectivity for various large classes of graphs when there are two agents, and for paths, stars and cycles in the general case. Many of our results are supplemented with algorithms which find connected allocations with a welfare guarantee corresponding to the price of connectivity.
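
The egalitarian price of connectivity can be made concrete with a tiny brute force over a path with two additive agents; the code below only illustrates the definition above (best egalitarian welfare over all allocations divided by the best over connected allocations) and is not the authors' algorithm.

```python
# Brute-force illustration of the egalitarian price of connectivity for two
# additive agents on a path: best egalitarian welfare over all allocations
# divided by the best over connected allocations (empty bundles count as
# connected). Illustration only, not the paper's algorithms.
from itertools import product

def egal_poc_on_path(u1, u2):
    """u1, u2: additive utilities of two agents for goods 0..m-1 on a path."""
    m = len(u1)
    best_any, best_conn = 0, 0
    for assignment in product((0, 1), repeat=m):
        bundles = [[g for g in range(m) if assignment[g] == a] for a in (0, 1)]
        egal = min(sum(u1[g] for g in bundles[0]), sum(u2[g] for g in bundles[1]))
        best_any = max(best_any, egal)
        # On a path, a bundle is connected iff its goods are consecutive.
        if all(not b or b == list(range(b[0], b[-1] + 1)) for b in bundles):
            best_conn = max(best_conn, egal)
    return best_any / best_conn if best_conn else float("inf")

print(egal_poc_on_path([1, 0, 1], [0, 2, 0]))   # 2.0 on this instance
```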

#4 Computing Optimal Equilibria in Repeated Games with Restarts

Authors: Ratip Emin Berker ; Vincent Conitzer

Infinitely repeated games can support cooperative outcomes that are not equilibria in the one-shot game. The idea is to make sure that any gains from deviating will be offset by retaliation in future rounds. However, this model of cooperation fails in anonymous settings with many strategic agents that interact in pairs. Here, a player can defect and then avoid penalization by immediately switching partners. In this paper, we focus on a specific set of equilibria that avoids this pitfall. In them, agents follow a designated sequence of actions, and restart if their opponent ever deviates. We show that the socially-optimal sequence of actions consists of an infinitely repeating goal value, preceded by a hazing period. We introduce an equivalence relation on sequences and prove that the computational problem of finding a representative from the optimal equivalence class is (weakly) NP-hard. Nevertheless, we present a pseudo-polynomial time dynamic program for this problem, as well as an integer linear program, and show they are efficient in practice. Lastly, we introduce a fully polynomial-time approximation scheme that outputs a hazing sequence with arbitrarily small approximation ratio.

#5 Evaluation of Project Performance in Participatory Budgeting

Authors: Niclas Boehmer ; Piotr Faliszewski ; Łukasz Janeczko ; Dominik Peters ; Grzegorz Pierczyński ; Šimon Schierreich ; Piotr Skowron ; Stanisław Szufa

We study ways of evaluating the performance of losing projects in participatory budgeting (PB) elections by seeking actions that would make them win. We focus on lowering their costs, obtaining additional approvals, and removing approvals for competing projects: the larger the required change, the less successful the given project. We seek efficient algorithms for computing our measures and analyze them experimentally, focusing on the GreedyAV, Phragmén, and Equal-Shares PB rules.
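
The cost-lowering measure can be illustrated with a small sketch under assumed conventions: a GreedyAV variant that considers projects by decreasing approval count and adds any project that still fits the budget, together with a search for the smallest cost reduction that gets the losing target project selected. This is a toy illustration, not the paper's implementation.

```python
# Toy sketch of the cost-lowering measure under assumed conventions: GreedyAV
# considers projects by decreasing approval count and adds any that still fits
# the budget; the measure is the smallest cost reduction that makes the target
# project get selected. Not the paper's implementation.

def greedy_av(costs, approvals, budget):
    selected, remaining = set(), budget
    for p in sorted(costs, key=lambda q: -approvals[q]):
        if costs[p] <= remaining:
            selected.add(p)
            remaining -= costs[p]
    return selected

def min_cost_reduction(costs, approvals, budget, target, step=1):
    cost = costs[target]
    while cost > 0:
        trial = dict(costs, **{target: cost})
        if target in greedy_av(trial, approvals, budget):
            return costs[target] - cost
        cost -= step
    return None   # lowering the cost alone cannot make the project win

costs = {"a": 6, "b": 5, "c": 5}
approvals = {"a": 10, "b": 8, "c": 7}
print(min_cost_reduction(costs, approvals, budget=10, target="c"))   # 1
```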

#6 Randomized Learning-Augmented Auctions with Revenue Guarantees

Authors: Ioannis Caragiannis ; Georgios Kalantzis

We consider the fundamental problem of designing a truthful single-item auction with the challenging objective of extracting a large fraction of the highest agent valuation as revenue. Following a recent trend in algorithm design, we assume that the agent valuations belong to a known interval, and a prediction for the highest valuation is available. Then, auction design aims for high consistency and robustness, meaning that, for appropriate pairs of values γ and ρ, the extracted revenue should be at least a γ-fraction of the highest valuation when the prediction is correct for the input instance, and at least a ρ-fraction when it is not. We characterize all pairs of parameters γ and ρ for which a randomized γ-consistent and ρ-robust auction exists. Furthermore, for the setting in which robustness can be a function of the prediction error, we give necessary and sufficient conditions for the existence of robust auctions and present randomized auctions that extract a revenue that is only a polylogarithmic (in terms of the prediction error) factor away from the highest agent valuation.

#7 Layered Graph Security Games

Authors: Jakub Cerny ; Chun Kai Ling ; Christian Kroer ; Garud Iyengar

Security games model strategic interactions in adversarial real-world applications. Such applications often involve extremely large but highly structured strategy sets (e.g., selecting a distribution over all patrol routes in a given graph). In this paper, we represent each player's strategy space using a layered graph whose paths represent an exponentially large strategy space. Our formulation captures not only classic pursuit-evasion games, but also other security games, such as those modeling anti-terrorism and logistical interdiction. We study two-player zero-sum games under two distinct utility models: linear and binary utilities. We show that under linear utilities, a Nash equilibrium can be computed in polynomial time, while binary utilities may lead to situations where even computing a best response is computationally intractable. To address this, we propose a practical algorithm based on incremental strategy generation and mixed integer linear programs. We show through extensive experiments that our algorithm efficiently computes epsilon-equilibria for many games of interest. We find that target values and graph structure often have a larger influence on running times than the size of the graph per se.

#8 Parameterized Analysis of Bribery in Challenge the Champ Tournaments

Authors: Juhi Chaudhary ; Hendrik Molter ; Meirav Zehavi

Challenge the champ tournaments are one of the simplest forms of competition, where an (initially selected) champ is repeatedly challenged by other players. If a player beats the champ, then that player is considered the new (current) champ. Each player in the competition challenges the current champ once in a fixed order. The champ of the last round is considered the winner of the tournament. We investigate a setting where players can be bribed to lower their winning probability against the initial champ. The goal is to maximize the probability of the initial champ winning the tournament by bribing the other players, while not exceeding a given budget for the bribes. In previous work it was shown that the problem can be solved in pseudo-polynomial time, and that it is in XP when parameterized by the number of players. We show that the problem is weakly NP-hard and W[1]-hard when parameterized by the number of players. On the algorithmic side, we show that the problem is fixed-parameter tractable when parameterized either by the number of different bribe values or by the number of different probability values. To this end, we establish several results that are of independent interest. In particular, we show that the product knapsack problem is W[1]-hard when parameterized by the number of items in the knapsack, and that constructive bribery for cup tournaments is W[1]-hard when parameterized by the number of players. Furthermore, we present a novel way of designing mixed integer linear programs, ensuring optimal solutions where all variables are integers.
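
To make the pseudo-polynomial flavour concrete, here is a toy product-knapsack dynamic program under a simplified model: the initial champ wins the tournament only by beating every challenger, each challenger offers a few (integer bribe cost, resulting champ win probability) options, and the product of win probabilities is maximized within a budget. The model and code are illustrative assumptions, not the paper's algorithm.

```python
# Toy product-knapsack dynamic program under a simplified model: the initial
# champ wins only by beating every challenger, each challenger offers
# (integer bribe cost, resulting champ win probability) options, and we
# maximize the product of win probabilities within the bribe budget.

def best_champ_probability(options, budget):
    """options: one list of (cost, probability) pairs per challenger."""
    dp = [1.0] + [-1.0] * budget       # dp[b]: best product spending exactly b
    for challenger in options:
        new = [-1.0] * (budget + 1)
        for spent, prob in enumerate(dp):
            if prob < 0:               # spend level not reachable
                continue
            for cost, p in challenger:
                if spent + cost <= budget:
                    new[spent + cost] = max(new[spent + cost], prob * p)
        dp = new
    return max(dp)

challengers = [
    [(0, 0.5), (3, 0.9)],   # paying 3 raises the champ's win chance to 0.9
    [(0, 0.4), (2, 0.8)],
]
print(best_champ_probability(challengers, budget=4))   # 0.4 (bribe the second)
```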

#9 On the Pursuit of EFX for Chores: Non-existence and Approximations

Authors: Vasilis Christoforidis ; Christodoulos Santorinaios

We study the problem of fairly allocating a set of chores to a group of agents. The existence of envy-free up to any item (EFX) allocations is a long-standing open question for both goods and chores. We resolve this question by providing a negative answer for the latter, presenting a simple construction that admits no EFX solutions for allocating six items to three agents equipped with superadditive cost functions, thus proving a separation result between goods and bads. In fact, we uncover a deeper insight, showing that the instance admits no approximately EFX allocation for any bounded approximation factor. Moreover, we show that deciding whether an EFX allocation exists is NP-complete. On the positive side, we establish the existence of EFX allocations under general monotone cost functions when the number of items is at most n + 2, where n is the number of agents. We then shift our attention to additive cost functions. We employ a general framework in order to improve the approximation guarantees in the well-studied case of three additive agents, and provide several conditional approximation bounds that leverage ordinal information.
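
For additive cost functions, the EFX condition for chores can be checked by brute force on tiny instances (removing any single chore from an agent's own bundle must eliminate their envy); the sketch below only illustrates the definition, and note that the paper's non-existence construction uses superadditive, not additive, costs.

```python
# Brute-force EFX-for-chores checker for additive costs (illustration of the
# definition only; the paper's non-existence construction uses superadditive
# costs). EFX here: removing any single chore from an agent's own bundle must
# remove their envy towards every other bundle.
from itertools import product

def is_efx(bundles, costs):
    for i, own in enumerate(bundles):
        for j, other in enumerate(bundles):
            if i == j:
                continue
            other_cost = sum(costs[i][c] for c in other)
            own_cost = sum(costs[i][c] for c in own)
            if any(own_cost - costs[i][e] > other_cost for e in own):
                return False
    return True

def exists_efx(costs):
    n, m = len(costs), len(costs[0])
    for assignment in product(range(n), repeat=m):
        bundles = [[c for c in range(m) if assignment[c] == a] for a in range(n)]
        if is_efx(bundles, costs):
            return bundles
    return None

print(exists_efx([[3, 1, 2], [1, 3, 2]]))   # e.g. [[1], [0, 2]] for two agents
```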

#10 Online Learning of Partitions in Additively Separable Hedonic Games

Authors: Saar Cohen ; Noa Agmon

Coalition formation involves partitioning agents into disjoint coalitions based on their preferences over other agents. In reality, agents may lack enough information to assess their preferences before interacting with others. This motivates us to initiate the research on coalition formation from the viewpoint of online learning. At each round, a possibly different subset of a given set of agents arrives, which a learner then partitions into coalitions. Only afterwards are the agents' preferences, which possibly change over time, revealed. The learner's goal is to optimize the social cost by minimizing his (static or dynamic) regret. We show that even the static regret is hard to approximate, and a constant approximation in polynomial time is unattainable. Yet, for a fractional relaxation of our problem, we devise an algorithm that simultaneously achieves the optimal static and dynamic regret. We then present a rounding scheme with an optimal dynamic regret, which converts our algorithm's output into a solution for our original problem.

#11 Couples Can Be Tractable: New Algorithms and Hardness Results for the Hospitals/Residents Problem with Couples

Authors: Gergely Csáji ; David Manlove ; Iain McBride ; James Trimble

In this paper we study the Hospitals/Residents problem with Couples (HRC), where a solution is a stable matching or a report that none exists. We present a novel polynomial-time algorithm that can find a near-feasible stable matching (adjusting the hospitals' capacities by at most 1) in an HRC instance where the couples' preferences are sub-responsive (i.e., if one member switches to a better hospital, then the couple also improves, provided the new pair is also acceptable) and sub-complete (i.e., each pair of hospitals that are individually acceptable to both members is jointly acceptable for the couple) by reducing it to an instance of the Stable Fixtures problem. We also present a polynomial-time algorithm for HRC in a sub-responsive, sub-complete instance that is a Dual Market, or where all couples are one of several possible types. Our polynomial-time solvability results greatly expand the class of known tractable instances of HRC. We complement our algorithms with several hardness results. We show that HRC with sub-responsive and sub-complete couples is NP-hard, even with other strong restrictions. We also show that HRC with a Dual Market is NP-hard under several simultaneous restrictions.

#12 Popular and Dominant Matchings with Uncertain and Multimodal Preferences

Author: Gergely Csáji

We study the Popular Matching (PM) problem in multiple models, where the preferences of the agents in the instance may change or may be unknown or uncertain. In particular, we study an Uncertainty model, where each agent has a possible set of preference lists, a Multilayer model, where there are layers of preference profiles, and a Robust popularity model, where any agent may move some other agents up or down some places in their preference list. Our goal is always to find a matching that is popular in any possible preference profile. We study both one-sided (only one class of the agents has preferences) and two-sided bipartite markets. In the one-sided model, we show that all our problems can be solved in polynomial time by utilizing the structure of popular matchings, and we also obtain several structural results. With two-sided preferences, we show that all three above models lead to NP-hard questions for popular matchings. By using the connection between dominant matchings and stable matchings, we show that in the robust and uncertainty models, a certainly dominant matching in all possible preference profiles can be found in polynomial time, whereas in the multilayer model, the problem remains NP-hard for dominant matchings too. We also answer an open question about d-robust stable matchings.

#13 Aggregation of Continuous Preferences in One Dimension

Authors: Alberto Del Pia ; Dušan Knop ; Alexandra Lassota ; Krzysztof Sornat ; Nimrod Talmon

We develop a general, formal model of social choice in which voters have continuous preferences over a one-dimensional space. Our model is parameterized by different restrictions that we introduce regarding the way voter preferences change over time, as well as by the optimization criteria (corresponding to a normative continuum of fairness definitions) desired from an aggregation method that outputs a continuous, one-dimensional curve given such inputs. We discuss the applicability of the model to different real-world situations and, as a first step towards an analysis of the different model realizations, we concentrate on identifying those cases that are computationally tractable.

#14 Comparing Ways of Obtaining Candidate Orderings from Approval Ballots

Authors: Théo Delemazure ; Chris Dong ; Dominik Peters ; Magdalena Tydrichova

To understand and summarize approval preferences and other binary evaluation data, it is useful to order the items on an axis which explains the data. In a political election using approval voting, this could be an ideological left-right axis such that each voter approves adjacent candidates, an analogue of single-peakedness. In a perfect axis, every approval set would be an interval, which is usually not possible, and so we need to choose an axis that gets closest to this ideal. The literature has developed algorithms for optimizing several objective functions (e.g., minimize the number of added approvals needed to get a perfect axis), but provides little help with choosing among different objectives. In this paper, we take a social choice approach and compare 5 different axis selection rules axiomatically, by studying the properties they satisfy. We establish some impossibility theorems, and characterize (within the class of scoring rules) the rule that chooses the axes that maximize the number of votes that form intervals, using the axioms of ballot monotonicity and resistance to cloning. Finally, we study the behavior of the rules on data from French election surveys, on the votes of justices of the US Supreme Court, and on synthetic data.
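
The objective behind the characterized rule, choosing axes that maximize the number of ballots forming intervals, can be illustrated with a small brute force over all candidate orderings; the code below illustrates that objective only and is not the authors' implementation.

```python
# Brute-force illustration of the objective behind the characterized rule:
# among all candidate orderings (axes), keep those maximizing the number of
# approval ballots whose approved candidates are consecutive on the axis.
from itertools import permutations

def interval_votes(axis, ballots):
    pos = {c: i for i, c in enumerate(axis)}
    count = 0
    for ballot in ballots:
        positions = sorted(pos[c] for c in ballot)
        if not positions or positions[-1] - positions[0] == len(positions) - 1:
            count += 1
    return count

def best_axes(candidates, ballots):
    best = max(interval_votes(axis, ballots) for axis in permutations(candidates))
    return [axis for axis in permutations(candidates)
            if interval_votes(axis, ballots) == best]

ballots = [{"a", "b"}, {"a", "b"}, {"b", "c"}]
print(best_axes("abc", ballots))   # [('a', 'b', 'c'), ('c', 'b', 'a')]
```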

#15 Selecting the Most Conflicting Pair of Candidates

Authors: Théo Delemazure ; Łukasz Janeczko ; Andrzej Kaczmarczyk ; Stanisław Szufa

We study committee elections from a perspective of finding the most conflicting candidates, that is, candidates that imply the largest amount of conflict, as per voter preferences. By proposing basic axioms to capture this objective, we show that none of the prominent multiwinner voting rules meet them. Consequently, we design committee voting rules compliant with our desiderata, introducing conflictual voting rules. A subsequent deepened analysis sheds more light on how they operate. Our investigation identifies various aspects of conflict, for which we come up with relevant axioms and quantitative measures, which may be of independent interest. We support our theoretical study with experiments on both real-life and synthetic data.

#16 Truthful Interval Covering

Authors: Argyrios Deligkas ; Aris Filos-Ratsikas ; Alexandros A. Voudouris

We initiate the study of a novel problem in mechanism design without money, which we term Truthful Interval Covering (TIC). An instance of TIC consists of a set of agents each associated with an individual interval on a line, and the objective is to decide where to place a covering interval to minimize the total social or egalitarian cost of the agents, which is determined by the intersection of this interval with their individual ones. This fundamental problem can model situations of provisioning a public good, such as the use of power generators to prevent or mitigate load shedding in developing countries. In the strategic version of the problem, the agents wish to minimize their individual costs, and might misreport the position and/or length of their intervals to achieve that. Our goal is to design truthful mechanisms to prevent such strategic misreports and achieve good approximations to the best possible social or egalitarian cost. We consider the fundamental setting of known intervals with equal lengths and provide tight bounds on the approximation ratios achieved by truthful deterministic mechanisms. For the social cost, we also design a randomized truthful mechanism that outperforms all possible deterministic ones. Finally, we highlight a plethora of natural extensions of our model for future work, as well as some natural limitations of those settings.
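
As an executable sketch of the objectives, the code below assumes that an agent's cost is the length of its own interval left uncovered by the covering interval; the abstract only states that the cost is determined by the intersection, so this particular cost function is an assumption made for illustration.

```python
# Executable sketch of the objectives under an assumed cost function: an
# agent's cost is the length of its own interval left uncovered by the
# covering interval (the abstract only says the cost is determined by the
# intersection, so this particular choice is an assumption).

def agent_cost(agent, cover):
    (a, b), (c, d) = agent, cover
    overlap = max(0.0, min(b, d) - max(a, c))
    return (b - a) - overlap

def social_cost(agents, cover):
    return sum(agent_cost(ag, cover) for ag in agents)

def egalitarian_cost(agents, cover):
    return max(agent_cost(ag, cover) for ag in agents)

agents = [(0.0, 1.0), (0.5, 1.5), (2.0, 3.0)]
cover = (0.5, 1.5)                       # covering interval of length 1
print(social_cost(agents, cover), egalitarian_cost(agents, cover))   # 1.5 1.0
```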

#17 Individual Rationality in Topological Distance Games Is Surprisingly Hard

Authors: Argyrios Deligkas ; Eduard Eiben ; Dušan Knop ; Šimon Schierreich

In the recently introduced topological distance games, strategic agents need to be assigned to a subset of vertices of a topology. In the assignment, the utility of an agent depends both on the agent's inherent utilities for other agents and on its distance from them on the topology. We study the computational complexity of finding individually rational outcomes; this notion is widely assumed to be the very minimal stability requirement and requires that the utility of every agent in a solution is non-negative. We perform a comprehensive study of the problem's complexity, and we prove that even in very basic cases, deciding whether an individually rational solution exists is intractable. To achieve at least some tractability, one needs to combine multiple restrictions on the input instance, including the number of agents, the topology, and the influence of distant agents on the utility.
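
Individual rationality is easy to state in code once a concrete utility form is fixed; the sketch below assumes utilities of the form "intrinsic utility divided by hop distance", summed over the other agents, which is only one possible instantiation of the distance-dependence described above.

```python
# Individual-rationality check under one assumed utility form: an agent's
# utility is the sum over other agents of its intrinsic utility for them
# divided by the hop distance between their vertices. Agents are assumed to
# occupy distinct vertices, so distances are positive.
from collections import deque

def bfs_dist(adj, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def individually_rational(adj, placement, intrinsic):
    """placement: agent -> vertex; intrinsic: (a, b) -> utility of a for b."""
    for a, va in placement.items():
        dist = bfs_dist(adj, va)
        utility = sum(intrinsic[(a, b)] / dist[vb]
                      for b, vb in placement.items() if b != a)
        if utility < 0:
            return False
    return True

adj = {1: [2], 2: [1, 3], 3: [2]}                 # the path 1 - 2 - 3
placement = {"x": 1, "y": 3}
intrinsic = {("x", "y"): -1.0, ("y", "x"): 3.0}
print(individually_rational(adj, placement, intrinsic))   # -1/2 < 0, so False
```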

#18 Metric Distortion with Elicited Pairwise Comparisons

Authors: Soroush Ebadian ; Daniel Halpern ; Evi Micha

In many social choice applications, information about individuals' preferences can only be elicited using a limited number of pairwise comparisons. In these cases, the task is twofold: we must first choose the queries, and then aggregate the responses to choose an outcome. We study the problem of designing algorithms for this setting. To compare the effectiveness of different outcomes, we use the metric distortion framework. In addition, we consider various constraints on the query algorithms, namely, placing restrictions on how the choice of the next query may depend on previous answers. Our main contributions are nearly optimal algorithms for all settings considered.

#19 Weighted EF1 and PO Allocations with Few Types of Agents or Chores

Authors: Jugal Garg ; Aniket Murhekar ; John Qin

We investigate the existence of fair and efficient allocations of indivisible chores to asymmetric agents who have unequal entitlements or weights. We consider the fairness notion of weighted envy-freeness up to one chore (wEF1) and the efficiency notion of Pareto-optimality (PO). The existence of EF1 and PO allocations of chores to symmetric agents is a major open problem in discrete fair division, and positive results are known only for certain structured instances. In this paper, we study this problem for the more general setting of asymmetric agents and show that an allocation that is wEF1 and PO exists and can be computed in polynomial time for instances with (i) three types of agents, where agents of the same type have identical preferences but can have different weights, or (ii) two types of chores. For symmetric agents, our results establish that EF1 and PO allocations exist for three types of agents, and they also generalize known results for three agents, two types of agents, and two types of chores. Our algorithms use a weighted picking sequence algorithm as a subroutine; we expect this idea and our analysis to be of independent interest.
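
As a generic illustration of the kind of weighted picking sequence mentioned above (not the paper's subroutine), the sketch below lets agents take turns in a D'Hondt-style order proportional to their weights, with each picker taking its individually cheapest remaining chore; the turn rule and tie-breaking are assumptions.

```python
# Generic sketch of a weighted picking sequence for chores (not the paper's
# subroutine): agents take turns in a D'Hondt-style order proportional to
# their weights, and the picker takes its individually cheapest remaining
# chore, so higher-weight agents end up carrying proportionally more chores.

def weighted_picking(costs, weights):
    """costs[i][c]: agent i's cost for chore c; weights: entitlements."""
    n, m = len(costs), len(costs[0])
    remaining = set(range(m))
    picks = [0] * n
    bundles = [[] for _ in range(n)]
    while remaining:
        i = min(range(n), key=lambda a: (picks[a] + 1) / weights[a])   # whose turn
        chore = min(remaining, key=lambda c: costs[i][c])              # cheapest for picker
        bundles[i].append(chore)
        remaining.remove(chore)
        picks[i] += 1
    return bundles

print(weighted_picking([[2, 1, 3, 4], [3, 2, 1, 2]], weights=[2, 1]))   # [[1, 0, 3], [2]]
```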

#20 Getting More by Knowing Less: Bayesian Incentive Compatible Mechanisms for Fair Division

Authors: Vasilis Gkatzelis ; Alexandros Psomas ; Xizhi Tan ; Paritosh Verma

We study fair resource allocation with strategic agents. It is well-known that, across multiple fundamental problems in this domain, truthfulness and fairness are incompatible. For example, when allocating indivisible goods, no truthful and deterministic mechanism can guarantee envy-freeness up to one item (EF1), even for two agents with additive valuations. Similarly, in cake-cutting, no truthful and deterministic mechanism always outputs a proportional allocation, even for two agents with piecewise constant valuations. Our work stems from the observation that, in the context of fair division, truthfulness is used as a synonym for Dominant Strategy Incentive Compatibility (DSIC), requiring that an agent prefers reporting the truth, no matter what other agents report. In this paper, we instead focus on Bayesian Incentive Compatible (BIC) mechanisms, requiring that agents are better off reporting the truth in expectation over other agents' reports. We prove that, when agents know a bit less about each other, a lot more is possible: using BIC mechanisms we can achieve fairness notions that are unattainable by DSIC mechanisms in both the fundamental problems of allocation of indivisible goods and cake-cutting. We prove that this is the case even for an arbitrary number of agents, as long as the agents' priors about each other's types satisfy a neutrality condition. Notably, for the case of indivisible goods, we significantly strengthen the state-of-the-art negative result for efficient DSIC mechanisms, while also highlighting the limitations of BIC mechanisms, by showing that a very general class of welfare objectives is incompatible with Bayesian Incentive Compatibility. Combined, these results give a near-complete picture of the power and limitations of BIC and DSIC mechanisms for the problem of allocating indivisible goods.

#21 Determining Winners in Elections with Absent Votes

Authors: Qishen Han ; Amelie Marian ; Lirong Xia

An important question in elections is determining whether a candidate can be a winner when some votes are absent. We study this determining-winner-with-absent-votes (WAV) problem for elections with top-truncated ballots. We show that the WAV problem is NP-complete for single transferable vote, Maximin, and Copeland, and we identify a special case of positional scoring rules for which the problem can be computed in polynomial time. Our results for top-truncated rankings differ from those for full rankings: the hardness results for full rankings still hold when the number of candidates or the number of missing votes is bounded, whereas we show that in our setting the problem can be solved in polynomial time in either case.
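
A small illustration of why positional scoring rules can make WAV tractable with top-truncated ballots, under the common convention that unranked candidates score zero: an absent voter helps a target candidate most by ranking only that candidate, so it suffices to add that many singleton ballots and re-check the winner. This is an illustrative observation, not necessarily the special case identified in the paper.

```python
# Illustration (not necessarily the special case in the paper): for a
# positional scoring rule with top-truncated ballots, under the common
# convention that unranked candidates score zero, an absent voter helps a
# target candidate most by ranking only that candidate. WAV then reduces to
# adding that many singleton ballots and re-checking the winner.

def scores(ballots, scoring_vector, candidates):
    s = {c: 0 for c in candidates}
    for ballot in ballots:                    # ballot = top-truncated ranking
        for rank, c in enumerate(ballot):
            s[c] += scoring_vector[rank]
    return s

def can_win_with_absent_votes(ballots, scoring_vector, candidates, target, k):
    boosted = list(ballots) + [[target]] * k  # absentees rank the target alone
    s = scores(boosted, scoring_vector, candidates)
    return s[target] == max(s.values())

ballots = [["a", "b"], ["a"], ["b", "c"]]
print(can_win_with_absent_votes(ballots, [2, 1], "abc", target="c", k=2))   # True
```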

#22 Fair Distribution of Delivery Orders

Authors: Hadi Hosseini ; Shivika Narang ; Tomasz Wąs

We initiate the study of fair distribution of delivery tasks among a set of agents wherein delivery jobs are placed along the vertices of a graph. Our goal is to fairly distribute delivery costs (modeled as a submodular function) among a fixed set of agents while satisfying some desirable notions of economic efficiency. We adapt well-established fairness concepts, such as envy-freeness up to one item (EF1) and minimax share (MMS), to our setting and show that fairness is often incompatible with the efficiency notion of social optimality. Yet, we characterize instances that admit fair and socially optimal solutions by exploiting graph structures. We further show that achieving fairness along with Pareto optimality is computationally intractable. Nonetheless, we design an XP algorithm (parameterized by the number of agents) for finding MMS and Pareto optimal solutions on every tree instance, and show that the same algorithm can be modified to find efficient solutions along with EF1, when such solutions exist. We complement these results by theoretically and experimentally analyzing the price of fairness.

#23 Sampling Winners in Ranked Choice Voting

Authors: Matthew Iceland ; Anson Kahng ; Joseph Saber

Ranked choice voting (RCV) is a voting rule that iteratively eliminates the least popular candidates until there is a single winner with a majority of all remaining votes. In this work, we explore three central questions about predicting the outcome of RCV on an election given a uniform sample of votes. First, in theory, how poorly can RCV sampling predict RCV outcomes? Second, can we use insights from the recently proposed map of elections to better predict RCV outcomes? Third, is RCV the best rule to use on a sample to predict the outcome of RCV in real-world elections? We find that although RCV can do quite poorly in the worst case, and it may be better to use other rules to predict RCV winners on synthetic data from the map of elections, RCV generally predicts itself well on real-world data, further contributing to its appeal as a theoretically flawed but practicable voting process. We further supplement our work by exploring the effect of the margin of victory (MoV) on sampling accuracy.
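
A minimal instant-runoff implementation together with a uniform-sample prediction makes the questions above concrete; this is an illustration only, with arbitrary tie-breaking and no claim about the paper's experimental setup.

```python
# Minimal instant-runoff (RCV) winner plus a uniform-sample prediction; ties
# are broken arbitrarily and exhausted ballots are dropped. Illustration only.
import random
from collections import Counter

def rcv_winner(ballots):
    eliminated = set()
    while True:
        tallies = Counter(next((c for c in ballot if c not in eliminated), None)
                          for ballot in ballots)
        tallies.pop(None, None)                        # exhausted ballots
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total or len(tallies) == 1:
            return leader
        eliminated.add(min(tallies, key=tallies.get))  # drop the least popular

def sampled_prediction(ballots, sample_size, seed=0):
    random.seed(seed)
    return rcv_winner(random.sample(ballots, sample_size))

ballots = [["a", "b", "c"]] * 4 + [["b", "c", "a"]] * 3 + [["c", "b", "a"]] * 2
print(rcv_winner(ballots), sampled_prediction(ballots, sample_size=5))
```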

#24 Equilibria in Two-Stage Facility Location with Atomic Clients

Authors: Simon Krogmann ; Pascal Lenzner ; Alexander Skopalik ; Marc Uetz ; Marnix C. Vos

We consider competitive facility location as a two-stage multi-agent system with two types of clients. For a given host graph with weighted clients on the vertices, facility agents first strategically select vertices for opening their facilities. Then, the clients strategically select which of the opened facilities in their neighborhood to patronize. Facilities want to attract as much client weight as possible, while clients want to minimize the congestion at their chosen facility. All recently studied versions of this model assume that clients can split their weight strategically. We instead consider clients with unsplittable weights, but allow mixed strategies, so clients may randomize over which facility to patronize. Besides modeling a natural client behavior, this subtle change has drastic consequences; e.g., for a given facility placement, qualitatively different client equilibria are possible. As our main result, we show that pure subgame perfect equilibria always exist if all client weights are identical. For this, we use a novel potential function argument, employing a hierarchical classification of the clients and sophisticated rounding in each step. In contrast, for non-identical clients, we show that deciding the existence of even approximately stable states is computationally intractable. On the positive side, we give a tight bound of 2 on the price of anarchy, which implies high social welfare of equilibria, if they exist.
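
For unsplittable clients of identical (unit) weight and pure strategies, the second-stage stability check is simple: no client should be able to strictly reduce the congestion it experiences by switching to another opened facility in its neighborhood. The sketch below illustrates this check only; the paper also treats mixed client strategies and weighted clients.

```python
# Stability check for the second stage with unsplittable, unit-weight clients
# and pure strategies: no client should be able to strictly reduce the
# congestion it experiences by switching to another opened facility in its
# neighborhood. (The paper also treats mixed strategies and weighted clients.)
from collections import Counter

def is_client_equilibrium(choices, neighbourhoods, opened):
    """choices: client -> patronized facility; neighbourhoods: client -> reachable facilities."""
    load = Counter(choices.values())
    for client, facility in choices.items():
        for alternative in neighbourhoods[client]:
            if alternative in opened and alternative != facility:
                if load[alternative] + 1 < load[facility]:   # switching helps strictly
                    return False
    return True

opened = {"f1", "f2"}
neighbourhoods = {1: {"f1", "f2"}, 2: {"f1", "f2"}, 3: {"f1"}}
print(is_client_equilibrium({1: "f1", 2: "f1", 3: "f1"}, neighbourhoods, opened))  # False
print(is_client_equilibrium({1: "f2", 2: "f1", 3: "f1"}, neighbourhoods, opened))  # True
```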

#25 The Distortion of Threshold Approval Matching

Authors: Mohamad Latifian ; Alexandros A. Voudouris

We study matching settings in which a set of agents have private utilities over a set of items. Each agent reports a partition of the items into approval sets of different threshold utility levels. Given only this limited information as input, the goal is to compute an assignment of the items to the agents (subject to cardinality constraints depending on the application) that (approximately) maximizes the social welfare (the total utility of the agents for their assigned items). We first consider the well-known, simple one-sided matching problem in which each of a set of agents is to be assigned exactly one item. We show tight bounds on the distortion of deterministic and randomized matching algorithms as functions of the number of threshold utility levels. We further show that our distortion bounds extend to a more general setting in which there are multiple copies of the items, each agent can be assigned a number of items (even copies of the same one) up to a capacity, and the utility of an agent for an item depends on the number of its copies that the agent is given.
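
A tiny brute-force example of instance-wise distortion under an assumed single-threshold proxy (an approved item is valued at the threshold, a non-approved item at zero): a matching maximizing the reported proxy welfare is compared against the true optimum. The setup is a simplification for illustration, not the paper's mechanism.

```python
# Brute-force, instance-wise distortion under an assumed single-threshold
# proxy: an approved item is valued at the threshold, a non-approved item at
# zero; the matching maximizing the reported proxy welfare is compared with
# the true optimum. A simplification for illustration only.
from itertools import permutations

def max_welfare(values):
    """values[i][j]: agent i's value for item j (one item per agent)."""
    n = len(values)
    return max(sum(values[i][perm[i]] for i in range(n))
               for perm in permutations(range(n)))

true_utils = [[1.0, 0.51], [0.51, 0.1]]
threshold = 0.5
proxy = [[threshold if u >= threshold else 0.0 for u in row] for row in true_utils]

proxy_choice = max(permutations(range(2)),
                   key=lambda perm: sum(proxy[i][perm[i]] for i in range(2)))
achieved = sum(true_utils[i][proxy_choice[i]] for i in range(2))
print(max_welfare(true_utils) / achieved)   # about 1.08 on this instance
```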