Norms describe the social architecture of a society and govern the interactions of its member agents. It may be appropriate for an agent to deviate from a norm; such a deviation can indicate that a specialized norm applies in a specific context. Existing approaches for norm emergence assume simplified interactions in which deviations are negatively sanctioned. We investigate via simulation the benefits of enriched interactions in which deviating agents share selected elements of their contexts. We find that, as a result, (1) the norms are learned better with fewer sanctions, indicating improved social cohesion; and (2) the agents are better able to satisfy their individual goals. These results are robust across societies of varying sizes and characteristics, reflecting pragmatic, considerate, and selfish agents.
In rational synthesis, we consider a multi-agent system in which some of the agents are controllable and some are not. All agents have objectives, and the goal is to synthesize strategies for the controllable agents so that their objectives are satisfied, assuming rationality of the uncontrollable agents. Previous work on rational synthesis considers objectives in LTL, namely ones that describe ongoing behaviors, and in Objective-LTL, which allows a ranking of LTL formulas. In this paper, we extend rational synthesis to LTL[F] -- an extension of LTL by quality operators. The satisfaction value of an LTL[F] formula is a real value in [0,1], where the higher the value, the higher the quality with which the computation satisfies the specification. The extension significantly strengthens the framework of rational synthesis and enables a study of its game-theoretic and social-choice-theoretic aspects. In particular, we study the price of stability and the price of anarchy of the rational-synthesis game and use them to explain the cooperative and non-cooperative settings of rational synthesis. Our algorithms make use of strategy logic and decision procedures for it. Thus, we are able to handle the richer quantitative setting using existing tools. In particular, we show that the cooperative and non-cooperative versions of quantitative rational synthesis are 2EXPTIME-complete and in 3EXPTIME, respectively -- no harder than the complexity known for their Boolean analogues.
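As a hedged sketch of the quality operators (LTL[F] is parameterized by a set F of quality functions; the weighted-average operator below is one common member, not necessarily the exact set used in the paper), satisfaction values are combined arithmetically rather than Boolean-wise:

\[ [\![ \varphi_1 \oplus_\lambda \varphi_2 ]\!](\pi) \;=\; \lambda \cdot [\![\varphi_1]\!](\pi) \;+\; (1-\lambda) \cdot [\![\varphi_2]\!](\pi), \qquad \lambda \in [0,1], \]

so with \(\lambda = 3/4\) a computation satisfies the combined specification to degree 0.75 by fully satisfying the first conjunct and missing the second entirely.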
In fair division problems with indivisible goods, it is well known that one cannot have any guarantees for the classic fairness notions of envy-freeness and proportionality. As a result, several relaxations have been introduced, most of them in quite recent works. We focus on four such notions, namely envy-freeness up to one good (EF1), envy-freeness up to any good (EFX), maximin share fairness (MMS), and pairwise maximin share fairness (PMMS). Since obtaining these relaxations also turns out to be problematic in several scenarios, approximate versions of them have been considered as well. In this work, we investigate further the connections between the four notions mentioned above and their approximate versions. We establish several tight or almost tight results concerning the approximation quality that any of these notions guarantees for the others, providing an almost complete picture of this landscape. Some of our findings reveal interesting and surprising consequences regarding the power of these notions, e.g., PMMS and EFX provide the same worst-case guarantee for MMS, despite PMMS being a strictly stronger notion than EFX. We believe such implications provide further insight into the quality of approximately fair solutions.
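For reference, a hedged sketch of the four notions under additive valuations \(v_i\) and allocation \((A_1,\dots,A_n)\) (standard phrasings; the paper's exact statements may differ in edge cases such as empty bundles):

\[ \text{EF1: } \forall i,j\ \exists g \in A_j:\ v_i(A_i) \ge v_i(A_j \setminus \{g\}); \qquad \text{EFX: } \forall i,j\ \forall g \in A_j:\ v_i(A_i) \ge v_i(A_j \setminus \{g\}); \]
\[ \text{MMS: } \forall i:\ v_i(A_i) \ge \max_{(B_1,\dots,B_n)} \min_k v_i(B_k); \qquad \text{PMMS: } \forall i,j:\ v_i(A_i) \ge \max_{(B_1,B_2) \text{ partitions } A_i \cup A_j} \min\{v_i(B_1), v_i(B_2)\}. \]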
Opinion diffusion is studied on social graphs where agents hold binary opinions and where social pressure leads them to conform to the opinion manifested by their neighbors. Within this setting, questions related to whether a minority/majority can spread the opinion it supports to all the other agents are considered. It is shown that, regardless of the graph at hand, there always exists a group formed by half of the agents that can annihilate the opposite opinion. The influence power of minorities, instead, depends on certain features of the underlying graphs, which are NP-hard to identify. Deciding whether the two opinions can coexist in some stable configuration is NP-hard, too.
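A minimal simulation sketch of such majority dynamics (the synchronous update and the tie-breaking rule are assumptions; the paper's exact dynamics may differ):

    def majority_step(neighbors, opinion):
        """One synchronous round: each agent adopts the opinion held by the
        majority of its neighbors; ties keep the current opinion."""
        new = {}
        for v, nbrs in neighbors.items():
            ones = sum(opinion[u] for u in nbrs)
            if 2 * ones > len(nbrs):
                new[v] = 1
            elif 2 * ones < len(nbrs):
                new[v] = 0
            else:
                new[v] = opinion[v]
        return new

    def run_until_stable(neighbors, opinion, max_rounds=1000):
        for _ in range(max_rounds):
            new = majority_step(neighbors, opinion)
            if new == opinion:
                return new  # stable configuration: one opinion wins or both coexist
            opinion = new
        return opinion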
We introduce and study the class of egalitarian variants of committee scoring rules, where instead of summing up the scores that voters assign to committees---as is done in the utilitarian variants---the score of a committee is taken to be the lowest score assigned to it by any voter. We focus on five rules, which are egalitarian analogues of SNTV, the k-Borda rule, the Chamberlin--Courant rule, the Bloc rule, and the Pessimist rule. We establish their computational complexity, provide their initial axiomatic study, and perform experiments to represent the action of these rules graphically.
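As a sketch of the egalitarian aggregation (voter_score is a placeholder for any per-voter committee scoring function, e.g., the Borda-based Chamberlin--Courant score; brute force is shown only to make the definition concrete):

    from itertools import combinations

    def egalitarian_score(committee, voters, voter_score):
        # Egalitarian variant: the committee's score is the lowest score
        # assigned to it by any voter (instead of the utilitarian sum).
        return min(voter_score(v, committee) for v in voters)

    def best_committee(candidates, k, voters, voter_score):
        # Exhaustive search over all size-k committees; exponential in k,
        # for illustration of the definition only.
        return max(combinations(candidates, k),
                   key=lambda c: egalitarian_score(c, voters, voter_score))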
We study the problem of fairly dividing a heterogeneous resource, commonly known as cake cutting and chore division, in the presence of strategic agents. While a number of results in this setting have been established in previous works, they rely crucially on the free disposal assumption, meaning that the mechanism is allowed to throw away part of the resource at no cost. In the present work, we remove this assumption and focus on mechanisms that always allocate the entire resource. We exhibit a truthful envy-free mechanism for cake cutting and chore division for two agents with piecewise uniform valuations, and we complement our result by showing that such a mechanism does not exist when certain additional assumptions are made. Moreover, we give truthful mechanisms for multiple agents with restricted classes of valuations.
We consider decision situations in which a set of points of view (voters, criteria) is used to sort a set of candidates into ordered categories (Good/Bad). Candidates are judged good when approved by a sufficient set of points of view; this corresponds to NonCompensatory Sorting. To be accountable, such approval sorting should provide guarantees about the decision process and about decisions concerning specific candidates. We formalize accountability using a feasibility problem expressed as a Boolean satisfiability formulation. We illustrate different forms of accountability when a committee decides with approval sorting and study the information that should be disclosed by the committee.
We develop a logic-based technique to analyse finite interactions in multi-agent systems. We introduce a semantics for Alternating-time Temporal Logic (for both perfect and imperfect recall) and its branching-time fragments in which paths are finite instead of infinite. We study the validities of these logics and present optimal algorithms for their model-checking problems in the perfect-recall case.
In multi-agent temporal planning, individual agents cannot know a priori when other agents will execute their actions and so treat those actions as uncertain. Only when others communicate the results of their actions is that uncertainty resolved. If a full communication protocol is specified ahead of time, then delay controllability can be used to assess the feasibility of the temporal plan. However, agents often have flexibility in choosing when to communicate the results of their action. In this paper, we address the question of how to choose communication protocols that guarantee the feasibility of the original temporal plan subject to some cost associated with that communication. To do so, we introduce a means of extracting delay controllability conflicts and show how we can use these conflicts to more efficiently guide our search. We then present three conflict-directed search algorithms and explore the theoretical and empirical trade-offs between the different approaches.
We consider the problem of fairly allocating indivisible goods among agents under cardinality constraints and additive valuations. In this setting, we are given a partition of the entire set of goods---i.e., the goods are categorized---and a limit is specified on the number of goods that can be allocated from each category to any agent. The objective here is to find a fair allocation in which the subset of goods assigned to any agent satisfies the given cardinality constraints. This problem naturally captures a number of resource-allocation applications, and is a generalization of the well-studied unconstrained fair division problem. The two central notions of fairness, in the context of fair division of indivisible goods, are envy-freeness up to one good (EF1) and the (approximate) maximin share guarantee (MMS). We show that the existence and algorithmic guarantees established for these solution concepts in the unconstrained setting can essentially be achieved under cardinality constraints. Furthermore, focusing on the case wherein all the agents have the same additive valuation, we establish that EF1 allocations exist even under matroid constraints.
Gerrymandering is the process by which parties manipulate the boundaries of electoral districts in order to maximize the number of districts they can win. Demographic trends show an increasingly strong correlation between residence and party affiliation; one party's supporters congregate in cities, while the other's stay in more rural areas. We investigate both theoretically and empirically the effect of this trend on a party's ability to gerrymander in a two-party model ("urban party" and "rural party"). Along the way, we propose a definition of the gerrymandering power of a party, and an algorithmic approach for near-optimal gerrymandering in large instances. Our results suggest that beyond a fairly small concentration of the urban party's voters, the gerrymandering power of a party depends almost entirely on the level of concentration, and not on the party's share of the population. As partisan separation grows, the gerrymandering powers of both parties converge, so that each party can gerrymander to get only slightly more than what its voting share warrants, ultimately bringing about a more representative outcome. Moreover, there seems to be an asymmetry between the gerrymandering powers of the parties, with the rural party being more capable of gerrymandering.
Combinatorial auctions are used to allocate resources in domains where bidders have complex preferences over bundles of goods. However, the behavior of bidders under different payment rules is not well understood, and there has been limited success in finding Bayes-Nash equilibria of such auctions due to the computational difficulties involved. In this paper, we introduce non-decreasing payment rules. Under such a rule, the payment of a bidder cannot decrease when he increases his bid, which is a natural and desirable property. VCG-nearest, the payment rule most commonly used in practice, violates this property and can thus be manipulated in surprising ways. In contrast, we show that many other payment rules are non-decreasing. We also show that a non-decreasing payment rule imposes a structure on the auction game that enables us to search for an approximate Bayes-Nash equilibrium much more efficiently than in the general case. Finally, we introduce the utility planes BNE algorithm, which exploits this structure and outperforms a state-of-the-art algorithm by multiple orders of magnitude.
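A hedged formalization of the property (notation ours): a payment rule \(p\) is non-decreasing if raising one's own bid, with the others' bids fixed, can never lower one's payment,

\[ b_i' \ge b_i \;\Longrightarrow\; p_i(b_i', b_{-i}) \;\ge\; p_i(b_i, b_{-i}), \]

where \(b_i' \ge b_i\) is meant pointwise over bundles.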
Randomized voting rules are gaining increasing attention in computational and non-computational social choice. A particularly interesting class of such rules is that of maximal lottery (ML) schemes, which were proposed by Peter Fishburn in 1984 and have been repeatedly recommended for practical use. However, the subtle differences between different ML schemes are often ignored. Two canonical subsets of ML schemes are C1-ML schemes (which only depend on unweighted majority comparisons) and C2-ML schemes (which only depend on weighted majority comparisons). We prove that C2-ML schemes are the only Pareto efficient---but also among the most manipulable---ML schemes. Furthermore, we evaluate the frequency of manipulable preference profiles and the degree of randomization of ML schemes via extensive computer simulations. In general, ML schemes are rarely manipulable and often do not randomize at all, especially when there are only a few alternatives. For up to 21 alternatives, the average support size of ML schemes lies below 4 under reasonable assumptions. The average degree of randomization (in terms of Shannon entropy) of C2-ML schemes is significantly lower than that of C1-ML schemes.
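Concretely, a maximal lottery is an optimal mixed strategy of the symmetric zero-sum game whose payoff matrix is the majority-margin matrix, computable by linear programming. A minimal sketch (scipy assumed; applying it to the sign matrix of the margins gives a C1-style scheme, to the margins themselves a C2-style scheme):

    import numpy as np
    from scipy.optimize import linprog

    def maximal_lottery(M):
        """M: skew-symmetric majority-margin matrix, M[x, y] = margin of x over y.
        Returns a maximal lottery p, i.e., an optimal mixed strategy of the
        symmetric zero-sum game induced by M (whose value is 0)."""
        m = M.shape[0]
        # Variables: p_0..p_{m-1} and the game value v; maximize v.
        c = np.zeros(m + 1)
        c[-1] = -1.0
        # For every opponent pure strategy y: v - sum_x p_x * M[x, y] <= 0.
        A_ub = np.hstack([-M.T, np.ones((m, 1))])
        b_ub = np.zeros(m)
        # Probabilities sum to one.
        A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
        b_eq = np.array([1.0])
        bounds = [(0, None)] * m + [(None, None)]  # p >= 0, v free
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[:m]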
We propose an algorithm for constructing efficient patrolling strategies in the Internet environment, where the protected targets are nodes connected to the network and the patrollers are software agents capable of detecting/preventing undesirable activities on the nodes. The algorithm is based on a novel compositional principle designed for a special class of strategies, and it can quickly construct (sub)optimal solutions even if the number of targets reaches hundreds of millions.
Combinatorial auctions (CAs) are used to allocate multiple items among bidders with complex valuations. Since the value space grows exponentially in the number of items, it is impossible for bidders to report their full value function even in medium-sized settings. Prior work has shown that current designs often fail to elicit the most relevant values of the bidders, thus leading to inefficiencies. We address this problem by introducing a machine-learning-based elicitation algorithm to identify which values to query from the bidders. Based on this elicitation paradigm, we design a new CA mechanism we call PVM, in which payments are determined so that bidders’ incentives are aligned with allocative efficiency. We validate PVM experimentally in several spectrum auction domains, and we show that it achieves high allocative efficiency even when only a few values are elicited from the bidders.
In a liquid democracy, voters can either vote directly or delegate their vote to another voter of their choice. We consider ordinal elections, and study a model of liquid democracy in which voters specify partial orders and use several delegates to refine them. This flexibility, however, comes at a price, as individual rationality (in the form of transitive preferences) can no longer be guaranteed. We discuss ways to detect and overcome such complications. Based on the framework of distance rationalization, we introduce novel variants of voting rules that are tailored to the liquid democracy context.
Multiwinner voting rules are used to select a small representative subset of candidates or items from a larger set, given the preferences of voters. However, if candidates have sensitive attributes such as gender or ethnicity (when selecting a committee), or specified types such as political leaning (when selecting a subset of news items), an algorithm that chooses a subset by optimizing a multiwinner voting rule may be unbalanced in its selection -- it may under- or over-represent a particular gender or political orientation in the examples above. We introduce an algorithmic framework for multiwinner voting problems when there is an additional requirement that the selected subset should be ``fair'' with respect to a given set of attributes. Our framework provides the flexibility to (1) specify fairness with respect to multiple, non-disjoint attributes (e.g., ethnicity and gender) and (2) specify a score function. We study the computational complexity of this constrained multiwinner voting problem for monotone and submodular score functions and present several approximation algorithms and matching hardness-of-approximation results for various attribute-group structures and types of score functions. We also present simulations suggesting that adding fairness constraints may not affect the scores significantly when compared to the unconstrained case.
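In a hedged formalization (notation ours), with candidate groups \(P_1,\dots,P_g \subseteq C\) induced by the attributes and bounds \(\ell_j, u_j\), the constrained problem reads

\[ \max_{S \subseteq C,\ |S| = k} \ \mathrm{score}(S) \quad \text{subject to} \quad \ell_j \le |S \cap P_j| \le u_j \ \text{for all } j, \]

where the groups may overlap and \(\mathrm{score}\) is a monotone or submodular function (e.g., derived from a multiwinner voting rule).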
We consider the problem of computing a mixed-strategy Nash equilibrium (MSNE) in resource graph games (RGGs), a compact representation for games with an exponential number of strategies. In an RGG, each player's pure strategy is a subset of resources, represented by a binary vector, and her pure strategy set is represented compactly using a set of linear inequality constraints. Given the pure strategies of the players, each player's utility depends on the resource graph and the numbers of times the neighboring resources are used. RGGs are general enough to capture a wide variety of games studied in the literature, including congestion games and security games. In this paper, we provide the first Fully Polynomial-Time Approximation Scheme (FPTAS) for computing an MSNE in any symmetric multilinear RGG whose constraint moralized resource graph (a graph formed from the moralized resource graph and the constraints defining the strategy polytope) has bounded treewidth. Our FPTAS can be generalized to compute an optimal MSNE, and to games with a constant number of player types. As a consequence, our FPTAS provides new approximation results for security games, network congestion games, and bilinear games.
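In a hedged sketch of the representation (notation ours): player \(i\)'s pure strategies are the binary points of a polytope over the resource set \(R\),

\[ S_i \;=\; \{\, s \in \{0,1\}^{R} \;:\; A_i\, s \le b_i \,\}, \]

and her utility decomposes into per-resource payoffs, each depending only on how often the resource's neighbors in the resource graph are used.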
Collaboration between heterogeneous agents typically requires the ability to communicate meaningfully. This can be challenging in open environments where participants may use different languages. Previous work proposed a technique to infer alignments between different vocabularies that uses only information about the tasks being executed, without any external resource. Until now, this approach has only been evaluated with artificially created data. We adapt this technique to protocols written by humans in natural language, which we extract from instructional webpages. In doing so, we show how to take into account challenges that arise when working with natural language labels. The quality of the alignments obtained with our technique is evaluated in terms of their effectiveness in enabling successful collaborations, using a translation dictionary as a baseline. We show that our technique outperforms the dictionary when used for interaction.
In a Periodic Double Auction (PDA), there are multiple discrete trading periods for a single type of good. PDAs are commonly used in real-world energy markets to trade energy in specific time slots in order to balance demand on the power grid. Strategically, bidding in a PDA is complicated because the bidder must predict and plan for future auctions that may influence the bidding strategy for the current auction. We present a general bidding strategy for PDAs based on forecasting clearing prices and using Monte Carlo Tree Search (MCTS) to plan a bidding strategy across multiple time periods. In addition, we present a fast heuristic strategy that can be used either as a standalone method or as an initial set of bids to seed the MCTS policy. We evaluate our bidding strategies using a PDA simulator based on the wholesale market implemented in the Power Trading Agent Competition (PowerTAC), and we demonstrate that our strategies outperform state-of-the-art bidding strategies designed for that competition.
We seek to understand when heterogeneity in agent preferences yields improved outcomes in terms of overall cost. The hope that it might is grounded in the common belief that diversity is advantageous in many multi-agent settings. We investigate this in the context of routing. Our main result is a sharp characterization of the network settings in which diversity always helps, versus those in which it is sometimes harmful. Specifically, we consider routing games, where diversity arises in the way that agents trade off two criteria (such as time and money, or, in the case of stochastic delays, the expectation and variance of delay). Our main contributions are: 1) a participant-oriented measure of cost in the presence of agent diversity; 2) a full characterization of those network topologies for which diversity always helps, for all latency functions and demands.
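A hedged formalization of the trade-off (notation ours): an agent with trade-off parameter \(\alpha \ge 0\) evaluates a path \(P\) by

\[ c_\alpha(P) \;=\; \sum_{e \in P} \big( \ell_e(x_e) + \alpha\, \tau_e(x_e) \big), \]

where \(\ell_e\) and \(\tau_e\) are the two criteria on edge \(e\) (e.g., time and money, or expectation and variance of delay), \(x_e\) is the flow on \(e\), and diversity means heterogeneous values of \(\alpha\) across the population.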
The Schulze method is a voting rule widely used in practice that enjoys many positive axiomatic properties. While it is computable in polynomial time, its straightforward implementation does not scale well to large elections. In this paper, we develop a highly optimised algorithm for computing the Schulze method with Pregel, a framework for massively parallel computation of graph problems, and demonstrate its applicability to large preference data sets. In addition, our theoretical analysis shows that the Schulze method is indeed particularly well-suited for parallel computation, in stark contrast to the related ranked pairs method. More precisely, we show that winner determination under the Schulze method is NL-complete, whereas this problem is P-complete for the ranked pairs method.
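The computational core of the Schulze method is a widest-path (max--min) computation on the pairwise-preference graph; a minimal sequential sketch for reference (the Pregel algorithm in the paper parallelizes this step):

    def schulze_strengths(pref):
        """pref[a][b]: number of voters preferring a to b.
        Returns p[a][b], the strength of the strongest path from a to b
        (Floyd--Warshall variant over the max-min path algebra)."""
        cands = range(len(pref))
        p = [[pref[a][b] if pref[a][b] > pref[b][a] else 0
              for b in cands] for a in cands]
        for c in cands:
            for a in cands:
                if a == c:
                    continue
                for b in cands:
                    if b == c or b == a:
                        continue
                    p[a][b] = max(p[a][b], min(p[a][c], p[c][b]))
        return p

    def schulze_winners(pref):
        # A candidate wins if no other candidate beats it in path strength.
        p = schulze_strengths(pref)
        n = len(pref)
        return [a for a in range(n)
                if all(p[a][b] >= p[b][a] for b in range(n) if b != a)]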
We consider a multi-stage facility reallocation problem on the real line, where a facility is moved between stages based on the locations reported by n agents. The aim of the reallocation mechanism is to minimize the social cost, i.e., the sum over all stages of the total distance between the facility and the agents, plus the cost incurred for moving the facility. We study this problem in both the offline setting, where the mechanism has full knowledge of the agent locations in all future stages, and the online setting, where the mechanism does not know these future locations and must decide the location of the facility on a stage-per-stage basis. For both cases, we derive the optimal mechanism, and for the online setting we show that its competitive ratio is (n+2)/(n+1). As neither of these mechanisms turns out to be strategyproof, we propose another strategyproof mechanism with a competitive ratio of (n+3)/(n+1) for odd n and (n+4)/n for even n, which we conjecture to be the best possible. We also consider a generalization with multiple facilities and weighted agents, for which we show that the optimum can be computed in polynomial time for a fixed number of facilities.
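In a hedged formalization (notation ours), with reported locations \(a_i^t\), facility positions \(x_1,\dots,x_T\) chosen by the mechanism, and initial position \(x_0\), the social cost over \(T\) stages is

\[ \mathrm{SC} \;=\; \sum_{t=1}^{T} \Big( \sum_{i=1}^{n} |x_t - a_i^t| \;+\; |x_t - x_{t-1}| \Big), \]

where the second term is the moving cost (the paper may weight this term differently).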
The Procedural Reasoning System (PRS) is arguably the first implementation of the Belief--Desire--Intention (BDI) approach to agent programming. PRS remains extremely influential, directly or indirectly inspiring the development of subsequent BDI agent programming languages. However, perhaps surprisingly given its centrality in the BDI paradigm, PRS lacks a formal operational semantics, making it difficult to determine its expressive power relative to other agent programming languages. This paper takes a first step towards closing this gap, by giving a formal semantics for a significant fragment of PRS. We prove key properties of the semantics relating to PRS-specific programming constructs, and show that even the fragment of PRS we consider is strictly more expressive than the plan constructs found in typical BDI languages.
The general task of finding an assignment of agents to activities under certain stability and rationality constraints has led to the introduction of two prominent problems in the area of computational social choice: the Group Activity Selection Problem (GASP) and the Stable Invitation Problem (SIP). Here we introduce and study the Comprehensive Activity Selection Problem (CAS), which naturally generalizes both of these problems. In particular, we apply the parameterized complexity paradigm, which has already been successfully employed for SIP and GASP. While previous work has focused strongly on parameters such as solution size or the number of activities, here we focus on parameters that capture the complexity of agent-to-agent interactions. Our results include a comprehensive complexity map for CAS under various restrictions on the number of activities in combination with restrictions on the complexity of agent interactions.