We consider a two-player resource allocation polytope game, in which the strategy of a player is restricted by the strategy of the other player through common coupled constraints. For such a game, we formally introduce the notions of the independent optimal strategy profile, which is the profile obtained when each player plays optimally in the absence of the other player, and the common contiguous set, which is the set of top nodes in the preference orderings of both players that are exhaustively invested on in the independent optimal strategy profile. We show that the game has a unique PSNE if and only if the independent optimal strategies of the players do not conflict, and either the common contiguous set consists of at most one node or all the nodes in the common contiguous set are invested on by only one player in the independent optimal strategy profile. We further derive a socially optimal strategy profile, and show that the price of anarchy cannot be bounded by a common universal constant. We therefore present an efficient algorithm to compute the price of anarchy and the price of stability for a given instance of the game. Under reasonable conditions, we show that the price of stability is 1. We also encounter a paradox in this game: higher budgets may lead to worse outcomes.
Feedback centralities form one of the key classes of centrality measures. They assess the importance of a vertex recursively, based on the importance of its neighbours. Feedback centralities include the Eigenvector Centrality, as well as its variants, such as the Katz Centrality or PageRank, and are used in various AI applications, such as ranking the importance of websites on the Internet and identifying the most influential users in the Twitter social network. In this paper, we study the theoretical underpinnings of feedback centralities. Specifically, we propose a novel axiomatization of the Eigenvector Centrality and the Katz Centrality based on six simple requirements. Our approach highlights the similarities and differences between the two centralities, which may help in choosing the right centrality for a specific application.
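As a point of reference for the two measures being axiomatized, the sketch below computes the Eigenvector Centrality (principal eigenvector of the adjacency matrix, via power iteration) and the Katz Centrality (x = (I - aA)^{-1} b·1) on a small undirected graph; the example graph, the attenuation factor a and the constant b are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Adjacency matrix of a small illustrative undirected graph (4 nodes):
# edges 0-1, 0-2, 1-2, 2-3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Eigenvector centrality: principal eigenvector of A, via power iteration.
x = np.ones(len(A))
for _ in range(1000):
    x = A @ x
    x /= np.linalg.norm(x)
print("Eigenvector centrality:", np.round(x, 3))

# Katz centrality: x = (I - a*A)^{-1} * b * 1, with the attenuation factor a
# chosen below 1/lambda_max so that the underlying series converges.
lam_max = max(abs(np.linalg.eigvals(A)))
a, b = 0.5 / lam_max, 1.0
katz = np.linalg.solve(np.eye(len(A)) - a * A, b * np.ones(len(A)))
print("Katz centrality:", np.round(katz, 3))
```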
Balancing fairness and efficiency in resource allocation is a classical economic and computational problem. The price of fairness measures the worst-case loss of economic efficiency when using an inefficient but fair allocation rule; for indivisible goods, in many settings this price is unacceptably high. One such setting is kidney exchange, where needy patients swap willing but incompatible kidney donors. In this work, we close an open problem regarding the theoretical price of fairness in modern kidney exchanges. We then propose a general hybrid fairness rule that balances a strict lexicographic preference ordering over classes of agents against a utilitarian objective that maximizes economic efficiency. We develop a utility function for this rule that favors disadvantaged groups lexicographically, but switches to a utilitarian objective if the cost to overall efficiency becomes too high. The rule has a single parameter, which is proportional to a bound on the price of fairness and can be adjusted by policymakers. We apply this rule to real data from a large kidney exchange and show that our hybrid rule produces more reliable outcomes than other fairness rules.
We propose a simple model of interaction for resource-conscious agents. The resources involved are expressed in fragments of Linear Logic. We investigate a few problems relevant to cooperative games, such as deciding whether a group of agents can form a coalition and act together in a way that satisfies all of them. In terms of solution concepts, we study the computational aspects of the core of a game. The main contributions are a formal link with the existing literature, and complexity results for several classes of models.
We consider a market setting in which buyers are individuals of a population whose relationships are represented by an underlying social graph. Given the buyers' valuations for the items being sold, an outcome consists of a pricing of the objects and an allocation of bundles to the buyers. An outcome is social envy-free if no buyer strictly prefers the bundles of her neighbors in the social graph. We focus on the revenue maximization problem in multi-unit markets, in which there are multiple copies of the same item being sold and each buyer is assigned a set of identical items. We consider the four cases arising from different buyers' valuations, i.e., single-minded or general, and from different forms of pricing, that is, item- or bundle-pricing. For all these cases we show the hardness of the revenue maximization problem and give corresponding approximation results. All our approximation bounds are optimal or nearly optimal. Moreover, we provide an optimal allocation algorithm for general valuations with item-pricing, under the assumption that the social graph has bounded treewidth. Finally, we determine optimal bounds on the corresponding price of envy-freeness, that is, on the worst-case ratio between the maximum revenue achievable without envy-freeness constraints and the one obtainable in the presence of social relationships. Some of our results close open hardness questions, or improve known results, in the literature concerning the classical setting without sociality.
We show that the problem of finding an approximate Nash equilibrium with polynomial precision is PPAD-hard even for two-player sparse win-lose games (i.e., games with {0,1}-entries such that each row and column of the two n×n payoff matrices has at most O(log n) ones). The proof is mainly based on a new class of prototype games called Chasing Games, which we believe is of independent interest in understanding the complexity of Nash equilibrium.
We introduce a formal model of iterative judgment aggregation, enabling the analysis of scenarios in which agents repeatedly update their individual positions on a set of issues, before a final decision is made by applying an aggregation rule to these individual positions. Focusing on two popular aggregation rules, the premise-based rule and the plurality rule, we study under what circumstances convergence to an equilibrium can be guaranteed. We also analyse the quality, in social terms, of the final decisions obtained. Our results not only shed light on the parameters that determine whether iteration converges and is socially beneficial, but they also clarify important differences between iterative judgment aggregation and the related framework of iterative voting.
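To make the two rules concrete, the sketch below applies them to the classic two-premise agenda with conclusion c = p AND q; the three-agent profile is an illustrative assumption. The premise-based rule takes a majority on each premise and derives the conclusion, while the plurality rule returns the most frequently reported individual judgment set.

```python
from collections import Counter

# Illustrative agenda (assumed): premises p and q, conclusion c = p AND q.
# Each ballot is a complete, consistent judgment set (p, q, c).
ballots = [(1, 1, 1), (1, 0, 0), (0, 1, 0)]   # the classic doctrinal-paradox profile

# Premise-based rule: majority on each premise, then derive the conclusion.
p = sum(b[0] for b in ballots) > len(ballots) / 2
q = sum(b[1] for b in ballots) > len(ballots) / 2
premise_based = (int(p), int(q), int(p and q))

# Plurality rule: the most frequently reported judgment set
# (ties broken here by first occurrence).
plurality = Counter(ballots).most_common(1)[0][0]

print("premise-based:", premise_based)  # (1, 1, 1): accepts c, although a direct
                                        # majority on c alone would reject it
print("plurality:", plurality)
```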
We study the class of distance-based centralities, which consists of centrality measures that depend solely on distances to the other nodes in the graph. This class encompasses a number of centrality measures, including the classical Degree and Closeness Centralities, as well as their extensions: the Harmonic, Reach and Decay Centralities. We axiomatize the class of distance-based centralities and study what conditions are imposed by the axioms proposed in the literature. Building upon our analysis, we propose the class of additive distance-based centralities and pinpoint properties which, combined with the axiomatic characterization of the whole class, uniquely characterize a number of centralities from the literature.
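For reference, the sketch below computes three members of this class (Closeness, Harmonic and Decay Centrality) directly from BFS shortest-path distances on a small graph; the graph and the decay parameter delta are illustrative assumptions.

```python
from collections import deque

def distances(adj, s):
    """BFS shortest-path distances from s in an unweighted graph."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Small illustrative connected graph (assumption, not from the paper).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
delta = 0.5  # decay parameter (assumed)

for v in adj:
    d = distances(adj, v)
    others = [d[u] for u in adj if u != v]
    closeness = 1.0 / sum(others)              # reciprocal of total distance
    harmonic = sum(1.0 / x for x in others)    # sum of reciprocal distances
    decay = sum(delta ** x for x in others)    # exponentially discounted reach
    print(v, round(closeness, 3), round(harmonic, 3), round(decay, 3))
```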
Apart from the principles and methodologies inherited from Economics and Game Theory, studies in Algorithmic Mechanism Design typically employ the worst-case analysis and approximation schemes of Theoretical Computer Science. For instance, the approximation ratio, which is the canonical measure of how well an incentive-compatible mechanism approximately optimizes the objective, is defined in the worst-case sense: it compares the performance of the optimal mechanism against the performance of a truthful mechanism over all possible inputs. In this paper, we take an average-case analysis approach and tackle one of the primary motivating problems in Algorithmic Mechanism Design -- the scheduling problem [Nisan and Ronen 1999]. A version of this problem that includes a verification component was studied by [Koutsoupias 2014], where it was shown that the problem has a tight approximation ratio bound of (n+1)/2 for the single-task setting, with n the number of machines. We show, however, that when the machines' costs for executing the task are independently and identically distributed, the average-case approximation ratio of the mechanism given in [Koutsoupias 2014] is upper bounded by a constant. This positive result asymptotically separates the average-case ratio from the worst-case ratio, and indicates that the optimal mechanism for the problem actually works well on average, although in the worst case its expected cost is Theta(n) times the optimal cost.
A paper by Deng and Conitzer in AAAI'17 introduces disarmament games, in which players alternately commit not to play certain pure strategies. However, in practice, disarmament usually does not consist in removing a strategy, but rather in removing a resource (which simultaneously rules out all the strategies in which that resource is used). In this paper, we introduce a model of disarmament games in which resources, rather than strategies, are removed. We prove NP-completeness of several formulations of the problem of achieving desirable outcomes via disarmament. We then study the case where resources can be fractionally removed, and prove a result analogous to the folk theorem that all desirable outcomes can be achieved. We show that we can approximately achieve any desirable outcome in a polynomial number of rounds, though determining whether a given outcome can be obtained in a given number of rounds remains NP-complete.
This paper focuses on two commonly used path assignment policies for agents traversing a congested network: self-interested routing and system-optimum routing. Under the self-interested routing policy, each agent selects a path that optimizes its own utility, while under system-optimum routing, agents are assigned paths with the goal of maximizing system performance. This paper considers a scenario where a centralized network manager wishes to optimize utilities over all agents, i.e., to implement a system-optimum routing policy. In many real-life scenarios, however, the system manager is unable to influence the route assignment of all agents due to limited influence on route choice decisions. Motivated by such scenarios, a computationally tractable method is presented that computes the minimal number of agents that the system manager needs to influence (compliant agents) in order to achieve system-optimal performance. Moreover, this methodology can also determine whether a given set of compliant agents is sufficient to achieve system optimum, and compute the optimal route assignment for the compliant agents to do so. Experimental results show that in several large-scale, realistic traffic networks, optimal flow can be achieved with the share of compliant agents ranging from as low as 13% up to 54%.
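The gap between the two policies is easiest to see on Pigou's classic two-link example; the sketch below (an illustrative textbook instance, not one of the paper's networks) contrasts the self-interested (user-equilibrium) cost with the system-optimum cost.

```python
# Pigou's two-link network (illustrative assumption): one unit of traffic,
# link 1 has constant latency 1, link 2 has latency equal to its own flow x.
latency1 = lambda x: 1.0
latency2 = lambda x: x

# Self-interested routing: every agent prefers link 2, since its latency never
# exceeds 1, so the user-equilibrium average cost is latency2(1) = 1.
ue_cost = latency2(1.0)

# System-optimum routing: choose the split x on link 2 minimizing total cost.
total_cost = lambda x: (1 - x) * latency1(1 - x) + x * latency2(x)
so_x = min((i / 1000 for i in range(1001)), key=total_cost)

print("user equilibrium cost:", ue_cost)                   # 1.0
print("system optimum cost:", round(total_cost(so_x), 3))  # 0.75, at x = 0.5
```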
We cast the problem of combinatorial auction design in a Bayesian framework in order to incorporate prior information into the auction process and minimize the number of rounds to convergence. We first develop a generative model of agent valuations and market prices such that clearing prices become maximum a posteriori estimates given observed agent valuations. This generative model then forms the basis of an auction process which alternates between refining estimates of agent valuations and computing candidate clearing prices. We provide an implementation of the auction using assumed density filtering to estimate valuations and expectation maximization to compute prices. An empirical evaluation over a range of valuation domains demonstrates that our Bayesian auction mechanism is highly competitive against the combinatorial clock auction in terms of rounds to convergence, even under the most favorable choices of price increment for this baseline.
A wealth of algorithms centered around (integer) linear programming have been proposed to compute equilibrium strategies in security games with discrete states and actions. However, in practice many domains possess continuous state and action spaces. In this paper, we consider a continuous space security game model with infinite-size action sets for players and present a novel deep learning based approach to extend the existing toolkit for solving security games. Specifically, we present (i) OptGradFP, a novel and general algorithm that searches for the optimal defender strategy in a parameterized continuous search space, and can also be used to learn policies over multiple game states simultaneously; (ii) OptGradFP-NN, a convolutional neural network based implementation of OptGradFP for continuous space security games. We demonstrate the potential to predict good defender strategies via experiments and analysis of OptGradFP and OptGradFP-NN on discrete and continuous game settings.
Traditional security games concern the optimal randomized allocation of human patrollers, who can directly catch attackers or interdict attacks. Motivated by the emerging application of utilizing mobile sensors (e.g., UAVs) for patrolling, in this paper we propose the novel Sensor-Empowered security Game (SEG) model, which captures the joint allocation of human patrollers and mobile sensors. Sensors differ from patrollers in that they cannot directly interdict attacks, but they can notify nearby patrollers (if any). Moreover, SEGs incorporate mobile sensors' natural functionality of strategic signaling. On the technical side, we first prove that solving SEGs is NP-hard even in zero-sum cases. We then develop a scalable algorithm, SEGer, based on the branch-and-price framework with two key novelties: (1) a novel MILP formulation for the slave problem; (2) an efficient relaxation of the problem for pruning. To further accelerate SEGer, we design a faster combinatorial algorithm for the slave problem, which is provably a constant-approximation in zero-sum cases and serves as a useful heuristic for general-sum SEGs. Our experiments demonstrate the significant benefit of utilizing mobile sensors.
When aggregating preferences of agents via voting, two desirable goals are to incentivize agents to participate in the voting process and then identify outcomes that are Pareto efficient. We consider participation as formalized by Brandl, Brandt, and Hofbauer (2015) based on the stochastic dominance (SD) relation. We formulate a new rule called RMEC (Rank Maximal Equal Contribution) that is polynomial-time computable, ex post efficient and satisfies the strongest notion of participation. It also satisfies many other desirable fairness properties. The rule suggests a general approach to achieving very strong participation, ex post efficiency and fairness.
We present a new autoencoder-type architecture that is trainable in an unsupervised mode, supports both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of the real and the generated data distributions from the prior distribution in the latent space. We show that the direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of quality comparable to some recently proposed, more complex architectures.
In a seminal paper, McAfee (1992) presented a truthful mechanism for double auctions, attaining asymptotically-optimal gain-from-trade without any prior information on the valuations of the traders. McAfee's mechanism handles single-parametric agents, allowing each seller to sell a single unit and each buyer to buy a single unit. This paper presents a double-auction mechanism that handles multi-parametric agents and allows multiple units per trader, as long as the valuation functions of all traders have decreasing marginal returns. The mechanism is prior-free, ex-post individually-rational, dominant-strategy truthful and strongly-budget-balanced. Its gain-from-trade approaches the optimum when the market size is sufficiently large.
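For context, the sketch below implements the classical single-unit McAfee trade-reduction mechanism that this paper generalizes, in its usual presentation: find the efficient number of trades k, try to price all k trades at the midpoint suggested by the (k+1)-st buyer-seller pair, and otherwise drop the k-th trade. The bids in the usage line are an illustrative assumption.

```python
def mcafee(buyer_bids, seller_asks):
    """Sketch of McAfee's (1992) single-unit trade-reduction double auction.
    Returns the number of trades and the prices paid by buyers / received by
    sellers (None if no trade takes place)."""
    b = sorted(buyer_bids, reverse=True)   # buyers, highest value first
    s = sorted(seller_asks)                # sellers, lowest cost first
    k = 0                                  # k = number of efficient trades
    while k < min(len(b), len(s)) and b[k] >= s[k]:
        k += 1
    if k == 0:
        return 0, None, None
    if k < len(b) and k < len(s):
        p = (b[k] + s[k]) / 2.0            # price suggested by the (k+1)-st pair
        if s[k - 1] <= p <= b[k - 1]:
            # all k efficient pairs trade at the single price p (budget-balanced)
            return k, p, p
    # trade reduction: only k-1 pairs trade; buyers pay b_k, sellers receive s_k
    return k - 1, b[k - 1], s[k - 1]

print(mcafee([10, 8, 6, 2], [1, 3, 5, 9]))   # -> (3, 5.5, 5.5)
```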
We revisit the well-studied problem of constructing strategyproof approximation mechanisms for facility location games, but offer a fundamentally new perspective by considering risk averse designers. Specifically, we are interested in the tradeoff between a randomized strategyproof mechanism's approximation ratio, and its variance (which has long served as a proxy for risk). When there is just one facility, we observe that the social cost objective is trivial, and derive the optimal tradeoff with respect to the maximum cost objective. When there are multiple facilities, the main challenge is the social cost objective, and we establish a surprising impossibility result: under mild assumptions, no smooth approximation-variance tradeoff exists. We also discuss the implications of our work for computational mechanism design at large.
In large e-commerce websites, sellers have been observed to engage in fraudulent behaviour, faking historical transactions in order to receive favourable treatment from the platforms, specifically through the allocation of additional buyer impressions, which results in higher revenue for them but not for the system as a whole. This emergent phenomenon has attracted considerable attention, with previous approaches focusing on detecting illicit practices and punishing the miscreants. In this paper, we employ the principles of reinforcement mechanism design, a framework that combines the fundamental goals of classical mechanism design, i.e., the consideration of agents' incentives and their alignment with the objectives of the designer, with deep reinforcement learning for optimizing performance based on these incentives. In particular, we first set up a deep-learning framework for predicting the sellers' rationality, based on real data from any allocation algorithm. We use data from one of the largest e-commerce platforms worldwide and train a neural network model to predict the extent to which sellers will engage in fraudulent behaviour. Using this rationality model, we employ an algorithm based on deep reinforcement learning to optimize the objectives and compare its performance against several natural heuristics, including the platform's implementation and incentive-based mechanisms from the related literature.
We consider ordinal approximation algorithms for a broad class of utility maximization problems for multi-agent systems. In these problems, agents have utilities for connecting to each other, and the goal is to compute a maximum-utility solution subject to a set of constraints. We represent these as a class of graph optimization problems, including matching, spanning tree problems, TSP, maximum weight planar subgraph, and many others. We study these problems in the ordinal setting: latent numerical utilities exist, but we only have access to ordinal preference information, i.e., every agent specifies an ordering over the other agents by preference. We prove that for the large class of graph problems we identify, ordinal information is enough to compute solutions which are close to optimal, thus demonstrating that there is no need to know the underlying numerical utilities. For example, for problems in this class with bounded degree b, a simple ordinal greedy algorithm always produces a (b+1)-approximation; we also quantify how the quality of the ordinal approximation depends on the sparsity of the resulting graphs. In particular, our results imply that ordinal information is enough to obtain a 2-approximation for Maximum Spanning Tree, a 4-approximation for Max Weight Planar Subgraph, a 2-approximation for Max-TSP, and a 2-approximation for various Matching problems.
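One plausible reading of the ordinal greedy idea, specialized to matching, is sketched below: repeatedly match a pair of remaining agents who rank each other first among the remaining agents. The concrete preference lists, and the assumption that such a mutual-top pair always exists (which holds when the ordinal preferences come from consistent symmetric utilities), are illustrative choices rather than details taken from the paper.

```python
def ordinal_greedy_matching(prefs):
    """Illustrative ordinal greedy matching heuristic: match a mutual-top pair
    among the remaining agents, remove both, and repeat."""
    remaining = set(prefs)
    matching = []
    while len(remaining) > 1:
        # each remaining agent's most preferred remaining partner
        top = {a: next(b for b in prefs[a] if b in remaining)
               for a in remaining}
        # a mutual-top pair exists when the ordinal preferences are induced
        # by consistent symmetric utilities
        u, v = next((a, b) for a, b in top.items() if top.get(b) == a)
        matching.append((u, v))
        remaining -= {u, v}
    return matching

prefs = {1: [2, 3, 4], 2: [1, 4, 3], 3: [1, 2, 4], 4: [2, 3, 1]}
print(ordinal_greedy_matching(prefs))   # e.g. [(1, 2), (3, 4)]
```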
It is well known that the Gale-Shapley algorithm is not truthful for all agents. Previous studies in this area concentrate on manipulations using incomplete preference lists by a single woman or by the set of all women; little is known about manipulations by a subset of women. In this paper, we consider manipulations by any subset of women with arbitrary preferences. We show that a strong Nash equilibrium of the induced manipulation game always exists among the manipulators, and that the equilibrium outcome is unique and Pareto-dominant. In addition, the set of matchings achievable by manipulations has a lattice structure. Finally, we also examine super-strong Nash equilibria.
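For readers who want the baseline algorithm in front of them, here is a compact sketch of the men-proposing Gale-Shapley (deferred acceptance) procedure on which the manipulation game is built; the tiny preference profile at the bottom is an illustrative assumption.

```python
def gale_shapley(men_prefs, women_prefs):
    """Men-proposing deferred acceptance. Preferences map each agent to an
    ordered list of acceptable partners, most preferred first."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    free = list(men_prefs)                 # men still seeking a partner
    next_proposal = {m: 0 for m in men_prefs}
    match = {}                             # woman -> man
    while free:
        m = free.pop()
        if next_proposal[m] >= len(men_prefs[m]):
            continue                       # m has exhausted his list
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in match:
            match[w] = m                   # w tentatively accepts m
        elif rank[w].get(m, len(rank[w])) < rank[w][match[w]]:
            free.append(match[w])          # w trades up; her old partner is free
            match[w] = m
        else:
            free.append(m)                 # w rejects m; he proposes again later
    return match

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(gale_shapley(men, women))   # {'w1': 'm2', 'w2': 'm1'}
```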
We define a class of zero-sum games with combinatorial structure, where the best response problem of one player is to maximize a submodular function. For example, this class includes security games played on networks, as well as the problem of robustly optimizing a submodular function over the worst case from a set of scenarios. The challenge in computing equilibria is that both players' strategy spaces can be exponentially large. Accordingly, previous algorithms have worst-case exponential runtime and indeed fail to scale up on practical instances. We provide a pseudopolynomial-time algorithm which obtains a guaranteed (1 - 1/e)^2-approximate mixed strategy for the maximizing player. Our algorithm only requires access to a weakened version of a best response oracle for the minimizing player which runs in polynomial time. Experimental results for network security games and a robust budget allocation problem confirm that our algorithm delivers near-optimal solutions and scales to much larger instances than was previously possible.
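As background for the maximizing player's side, the sketch below shows the standard greedy algorithm for monotone submodular maximization under a cardinality constraint, which attains the classic (1 - 1/e) guarantee; the toy coverage function and budget are assumptions for illustration, not the paper's actual best-response oracle.

```python
def greedy_submodular(f, ground_set, k):
    """Standard greedy for maximizing a monotone submodular function f under a
    cardinality constraint k: repeatedly add the element with the largest
    marginal gain."""
    S = set()
    for _ in range(k):
        best = max((e for e in ground_set if e not in S),
                   key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S

# Toy coverage function (assumed example): f(S) = number of targets covered.
coverage = {"a": {1, 2}, "b": {2, 3}, "c": {4}, "d": {1, 4}}
f = lambda S: len(set().union(*(coverage[e] for e in S)) if S else set())
print(greedy_submodular(f, coverage.keys(), 2))
```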
Leadership games provide a powerful paradigm for modeling many real-world settings. Most of the literature focuses on games with a single follower who acts optimistically, breaking ties in favour of the leader. Unfortunately, in real-world applications this is unlikely to hold. In this paper, we look for efficiently solvable games with multiple followers who play either optimistically or pessimistically, i.e., breaking ties in favour of or against the leader. We study the computational complexity of finding or approximating an optimistic or pessimistic leader-follower equilibrium in specific classes of succinct games (polymatrix-like), which are equivalent to 2-player Bayesian games with uncertainty over the follower, with either interdependent or independent types. Furthermore, we provide an exact algorithm to find a pessimistic equilibrium for those game classes. Finally, we show that in general polymatrix games the computation is harder even when players are forced to play pure strategies.
We develop a model of multiwinner elections that combines performance-based measures of the quality of the committee (such as, e.g., Borda scores of the committee members) with diversity constraints. Specifically, we assume that the candidates have certain attributes (such as being male or female, junior or senior, etc.) and the goal is to elect a committee that, on the one hand, achieves as high a score as possible with respect to a given performance measure, and that, on the other hand, meets certain requirements (e.g., of the form "at least 30% of the committee members are junior candidates and at least 40% are females"). We analyze the computational complexity of computing winning committees in this model, obtaining polynomial-time algorithms (exact and approximate) and NP-hardness results. We focus on several natural classes of voting rules and diversity constraints.
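A brute-force sketch of the model on a toy instance may help: committees of size k are scored by the sum of the candidates' (pre-computed) scores and filtered by lower-bound diversity constraints. The candidate scores, attributes and constraints below are illustrative assumptions, not data from the paper.

```python
from itertools import combinations

# Illustrative data (assumed): per-candidate Borda-style scores and attributes.
scores = {"a": 30, "b": 28, "c": 25, "d": 20, "e": 18, "f": 10}
attrs = {"a": {"senior", "male"}, "b": {"senior", "male"},
         "c": {"junior", "female"}, "d": {"senior", "female"},
         "e": {"junior", "male"}, "f": {"junior", "female"}}
k = 3
# each constraint: (attribute, minimum number of committee members with it)
constraints = [("junior", 1), ("female", 2)]

def feasible(committee):
    return all(sum(a in attrs[c] for c in committee) >= lb
               for a, lb in constraints)

# exhaustive search over feasible committees, maximizing the total score
best = max((C for C in combinations(scores, k) if feasible(C)),
           key=lambda C: sum(scores[c] for c in C))
print(best, sum(scores[c] for c in best))
```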
In this paper, we propose a fractional preference model for the facility location game with two facilities that serve a similar purpose on a line, where each agent has both location information and a fractional preference indicating how much he prefers each facility. The preference for each facility lies in [0, 1], and the preferences for the two facilities sum to 1. The utility of an agent is the total length L minus the agent's total cost, where the cost with respect to each facility is the product of the fractional preference and the distance between the agent and that facility. We first show a lower bound of Ω(n^1/3) for the objective of minimizing the total cost; hence, we use the utility function to analyze the agents' satisfaction. Our objective is to place two facilities on [0, L] to maximize either the social utility or the minimum utility. For each objective function, we propose deterministic strategy-proof mechanisms. For the objective of maximizing the social utility, we present an optimal deterministic strategy-proof mechanism in the case where agents can only misreport their locations. In the case where agents can only misreport their preferences, we present a 2-approximation deterministic strategy-proof mechanism. Finally, we present a 4-approximation deterministic strategy-proof mechanism and a randomized strategy-proof mechanism with an approximation ratio of 2 when agents can misreport both their preferences and locations; we also give a lower bound of 1.06. For the objective of maximizing the minimum utility, we give a lower bound of 1.5 and present a 2-approximation deterministic strategy-proof mechanism when agents can misreport both their preferences and locations.
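A small sketch of the utility computation described above, on an assumed instance (the line length, agent locations, fractional preferences and facility positions are illustrative):

```python
# Line of length L; each agent is (location, (preference for facility 1,
# preference for facility 2)), with the two preferences summing to 1.
L = 10.0
agents = [(1.0, (0.7, 0.3)), (4.0, (0.5, 0.5)), (9.0, (0.2, 0.8))]
y1, y2 = 2.0, 8.0   # candidate facility positions on [0, L]

def utility(x, p, y1, y2, L):
    # cost = preference-weighted distances to the two facilities;
    # utility = total length L minus that cost
    cost = p[0] * abs(x - y1) + p[1] * abs(x - y2)
    return L - cost

utils = [utility(x, p, y1, y2, L) for x, p in agents]
print("utilities:", [round(u, 2) for u in utils])
print("social utility:", round(sum(utils), 2))
print("minimum utility:", round(min(utils), 2))
```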