Voting Rules As Error-Correcting Codes
Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence

Ariel D. Procaccia, Carnegie Mellon University
Nisarg Shah, Carnegie Mellon University
Yair Zick, Carnegie Mellon University

Abstract

We present the first model of optimal voting under adversarial noise. From this viewpoint, voting rules are seen as error-correcting codes: their goal is to correct errors in the input rankings and recover a ranking that is close to the ground truth. We derive worst-case bounds on the relation between the average accuracy of the input votes, and the accuracy of the output ranking. Empirical results from real data show that our approach produces significantly more accurate rankings than alternative approaches.

Introduction

Social choice theory develops and analyzes methods for aggregating the opinions of individuals into a collective decision. The prevalent approach is motivated by situations in which opinions are subjective, such as political elections, and focuses on the design of voting rules that satisfy normative properties (Arrow 1951). An alternative approach, which was proposed by the marquis de Condorcet in the 18th century, had confounded scholars for centuries (due to Condorcet's ambiguous writing) until it was finally elucidated by Young (1988). The underlying assumption is that the alternatives can be objectively compared according to their true quality. In particular, it is typically assumed that there is a ground truth ranking of the alternatives. Votes can be seen as noisy estimates of the ground truth, drawn from a specific noise model. For example, Condorcet proposed a noise model where, roughly speaking, each voter (hereinafter, agent) compares every pair of alternatives, and orders them correctly (according to the ground truth) with probability p > 1/2; today an equivalent model is attributed to Mallows (1957).
Here, it is natural to employ a voting rule that always returns a ranking that is most likely to coincide with the ground truth, that is, the voting rule should be a maximum likelihood estimator (MLE). Although Condorcet could have hardly foreseen this, his MLE approach is eminently applicable to crowdsourcing and human computation systems, which often employ voting to aggregate noisy estimates; EteRNA (Lee et al. 2014) is a wonderful example, as explained by Procaccia et al. (2012). Consequently, the study of voting rules as MLEs has been gaining steam in the last decade (Conitzer and Sandholm 2005; Conitzer et al. 2009; Elkind et al. 2010; Xia et al. 2010; Xia and Conitzer 2011; Lu and Boutilier 2011; Procaccia et al. 2012; Azari Soufiani et al. 2012; 2013; 2014; Mao et al. 2013; Caragiannis et al. 2013; 2014).

Copyright (c) 2015, Association for the Advancement of Artificial Intelligence. All rights reserved.

Despite its conceptual appeal, a major shortcoming of the MLE approach is that the MLE voting rule is specific to a noise model, and that noise model (even if it exists for a given setting) may be difficult to pin down (Mao, Procaccia, and Chen 2013). Caragiannis et al. (2013; 2014) have addressed this problem by relaxing the MLE constraint: they only ask that the probability of the voting rule returning the ground truth go to one as the number of votes goes to infinity. This allows them to design voting rules that elicit the ground truth in a wide range of noise models; however, they may potentially require an infinite amount of information.

Our approach. In this paper, we propose a fundamentally different approach to aggregating noisy votes. We assume the noise to be adversarial instead of probabilistic, and wish to design voting rules that do well under worst-case assumptions. From this viewpoint, our approach is closely related to the extensive literature on error-correcting codes. One can think of the votes as a repetition code: each vote is a transmitted noisy version of a message (the ground truth).
How many errors can be corrected using this code? In more detail, let d be a distance metric on the space of rankings. As an example, the well-known Kendall tau (KT) distance between two rankings measures the number of pairs of alternatives on which the two rankings disagree. Suppose that we receive n votes over the set of alternatives {a, b, c, d}, for an even n, and we know that the average KT distance between the votes and the ground truth is at most 1/2. Can we always recover the ground truth? No: in the worst case, exactly n/2 agents swap the two highest-ranked alternatives and the rest report the ground truth. In this case, we observe two distinct rankings (each n/2 times) that only disagree on the order of the top two alternatives. Both rankings have an average distance of 1/2 from the input votes, making it impossible to determine which of them is the ground truth. Let us, therefore, cast a larger net. Inspired by list decoding of error-correcting codes (see, e.g., Guruswami 2005), our main research question is:
Fix a distance metric d. Suppose that we are given n noisy rankings, and that the average distance between these rankings and the ground truth is at most t. We wish to recover a ranking that is guaranteed to be at distance at most k from the ground truth. How small can k be, as a function of n and t?

Our results. We observe that for any metric d, one can always recover a ranking that is at distance at most 2t from the ground truth, i.e., k ≤ 2t. Under an extremely mild assumption on the distance metric, we complement this result by proving a lower bound of (roughly) k ≥ t. Next, we consider the four most popular distance metrics used in the social choice literature, and prove a tight lower bound of (roughly) k ≥ 2t for each metric. This lower bound is our main theoretical result; the construction makes unexpected use of Fermat's Polygonal Number Theorem. The worst-case optimal voting rule in our framework is defined with respect to a known upper bound t on the average distance between the given rankings and the ground truth. However, we show that the voting rule which returns the ranking minimizing the total distance from the given rankings (which has strong theoretical support in the literature) serves as an approximation to our worst-case optimal rule, irrespective of the value of t. We leverage this observation to provide theoretical performance guarantees for our rule in cases where the error bound t given to the rule is an underestimate or overestimate of the tightest upper bound. Finally, we test our worst-case optimal voting rules against many well-established voting rules, on two real-world datasets (Mao, Procaccia, and Chen 2013), and show that the worst-case optimal rules exhibit superior performance as long as the given error bound t is a reasonable overestimate of the tightest upper bound.

Related work.
Our work is related to the extensive literature on error-correcting codes that use permutations (see, e.g., Barg and Mazumdar 2010, and the references therein), but differs in one crucial aspect. In designing error-correcting codes, the focus is on two choices: (i) the codewords, a subset of rankings which represent the possible ground truths, and (ii) the code, which converts every codeword into the message to be sent. These choices are optimized to achieve the best tradeoff between the number of errors corrected and the rate of the code (efficiency), while allowing unique identification of the ground truth. In contrast, our setting has fixed choices: (i) every ranking is a possible ground truth, and (ii) in coding theory terms, our setting constrains us to the repetition code. Both restrictions (inevitable in our setting) lead to significant inefficiencies, as well as the impossibility of unique identification of the ground truth (as illustrated in the introduction). Our research question is reminiscent of coding theory settings where a bound on adversarial noise is given, and a code is chosen with the bound on the noise as an input to maximize efficiency (see, e.g., Haeupler 2014). List decoding (see, e.g., Guruswami 2005) relaxes classic error correction by guaranteeing that the number of possible messages does not exceed a small quota; then, the decoder simply lists all possible messages. The motivation is that one can simply scan the list and find the correct message, as all other messages on the list are likely to be gibberish. In the voting context, one cannot simply disregard some potential ground truths as nonsensical; we therefore select a ranking that is close to every possible ground truth. A bit further afield, Procaccia et al. (2007) study a probabilistic noisy voting setting, and quantify the robustness of voting rules to random errors.
Their results focus on the probability that the outcome would change under a random transposition of two adjacent alternatives in a single vote from a submitted profile, in the worst case over profiles. Their work is different from ours in many ways, but perhaps most importantly, they are interested in how frequently common voting rules make mistakes, whereas we are interested in the guarantees of optimal voting rules that avoid mistakes.

Preliminaries

Let A be the set of alternatives, with |A| = m. Let L(A) be the set of rankings over A. A vote σ is a ranking in L(A), and a profile π ∈ L(A)^n is a collection of n rankings. A voting rule f : L(A)^n → L(A) maps every profile to a ranking.[1] We assume that there exists an underlying ground truth ranking σ* ∈ L(A) of the alternatives, and the votes are noisy estimates of σ*. We use a distance metric d over L(A) to measure errors; the error of a vote σ with respect to σ* is d(σ, σ*), and the average error of a profile π with respect to σ* is d(π, σ*) = (1/n) Σ_{σ ∈ π} d(σ, σ*). We consider four popular distance metrics over rankings in this paper. The Kendall tau (KT) distance, denoted d_KT, measures the number of pairs of alternatives over which two rankings disagree. Equivalently, it is also the minimum number of swaps of adjacent alternatives required to convert one ranking into another. The (Spearman's) Footrule (FR) distance, denoted d_FR, measures the total displacement of all alternatives between two rankings, i.e., the sum of the absolute differences between their positions in the two rankings. The Maximum Displacement (MD) distance, denoted d_MD, measures the maximum of the displacements of all alternatives between two rankings. The Cayley (CY) distance, denoted d_CY, measures the minimum number of swaps (not necessarily of adjacent alternatives) required to convert one ranking into another.
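The four metrics just defined are easy to compute directly from their definitions. The following is a minimal sketch (function names are mine), where a ranking is represented as a tuple of alternatives from best to worst:

```python
def positions(r):
    """Map each alternative to its position (0 = top) in ranking r."""
    return {a: i for i, a in enumerate(r)}

def kendall_tau(r1, r2):
    """KT distance: number of pairs of alternatives on which r1 and r2 disagree."""
    p2 = positions(r2)
    # a precedes b in r1; count the pair if their order is flipped in r2
    return sum(1 for i, a in enumerate(r1) for b in r1[i+1:] if p2[a] > p2[b])

def footrule(r1, r2):
    """FR distance: total displacement of all alternatives."""
    p1, p2 = positions(r1), positions(r2)
    return sum(abs(p1[a] - p2[a]) for a in r1)

def max_displacement(r1, r2):
    """MD distance: largest displacement of any single alternative."""
    p1, p2 = positions(r1), positions(r2)
    return max(abs(p1[a] - p2[a]) for a in r1)

def cayley(r1, r2):
    """CY distance: minimum number of arbitrary swaps turning r1 into r2,
    which equals m minus the number of cycles of the relating permutation."""
    p2 = positions(r2)
    perm = [p2[a] for a in r1]
    seen, cycles = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return len(perm) - cycles
```

For m = 3, a ranking and its reverse realize the maximum distances 3, 4, 2, and 2 under KT, FR, MD, and CY respectively, matching the example given later in the paper.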
All four metrics described above are neutral: a distance metric is called neutral if the distance between two rankings is independent of the labels of the alternatives; in other words, choosing a relabeling of the alternatives and applying it to two rankings keeps the distance between them invariant.

[1] Such rules are known as social welfare functions, which differ from social choice functions that choose a single winning alternative.

Worst-Case Optimal Rules

Suppose we are given a profile π of n noisy rankings that are estimates of an underlying true ranking σ*. In the absence of any additional information, any ranking could potentially be the true ranking. However, because essentially all crowdsourcing methods draw their power from the often-observed
fact that individual opinions are accurate on average, we can plausibly assume that while some agents may make many mistakes, the average error is fairly small. An upper bound on the average error may be inferred by observing the collected votes, or from historical data (but see the next section for the case where this bound is inaccurate). Formally, suppose we are guaranteed that the average distance between the votes in π and the ground truth σ* is at most t according to a metric d, i.e., d(π, σ*) ≤ t. With this guarantee, the set of possible ground truths is given by the ball of radius t around π:

B_t^d(π) = {σ ∈ L(A) : d(π, σ) ≤ t}.

Note that σ* ∈ B_t^d(π) given our assumption; hence, B_t^d(π) ≠ ∅. We wish to find a ranking that is as close to the ground truth as possible. Since our approach is worst case in nature, our goal is to find the ranking that minimizes the maximum distance from the possible ground truths in B_t^d(π). For a set of rankings S ⊆ L(A), let its minimax ranking, denoted MINIMAX_d(S), be defined as follows:[2]

MINIMAX_d(S) = argmin_{σ ∈ L(A)} max_{σ' ∈ S} d(σ, σ').

Let the minimax distance of S, denoted k_d(S), be the maximum distance of MINIMAX_d(S) from the rankings in S according to d. Thus, given a profile π and the guarantee that d(π, σ*) ≤ t, the worst-case optimal voting rule OPT^d returns the minimax ranking of the set of possible ground truths B_t^d(π). That is, for all profiles π ∈ L(A)^n and t > 0,

OPT^d(t, π) = MINIMAX_d(B_t^d(π)).

Furthermore, the output ranking is guaranteed to be at distance at most k_d(B_t^d(π)) from the ground truth. We overload notation, and denote k_d(t, π) = k_d(B_t^d(π)), and k_d(t) = max_{π ∈ L(A)^n} k_d(t, π). While k_d is explicitly a function of t, it is also implicitly a function of n. Hereinafter, we omit the subscript d whenever the metric is clear from context. Let us illustrate our terminology with a simple example.

Example 1. Let A = {a, b, c}. We are given a profile π consisting of 5 votes:

π = {2 × (a ≻ b ≻ c), a ≻ c ≻ b, b ≻ a ≻ c, c ≻ a ≻ b}.
The maximum distances between rankings in L(A) allowed by d_KT, d_FR, d_MD, and d_CY are 3, 4, 2, and 2, respectively; let us assume that the average error limit is half the maximum distance for all four metrics.[3] Consider the Kendall tau distance with t = 1.5. The average distances of all 6 rankings from π are given below.

d_KT(π, a ≻ b ≻ c) = 0.8    d_KT(π, a ≻ c ≻ b) = 1.0
d_KT(π, b ≻ a ≻ c) = 1.4    d_KT(π, b ≻ c ≻ a) = 2.0
d_KT(π, c ≻ a ≻ b) = 1.6    d_KT(π, c ≻ b ≻ a) = 2.2

Thus, the set of possible ground truths is B_{1.5}^{d_KT}(π) = {a ≻ b ≻ c, a ≻ c ≻ b, b ≻ a ≻ c}. This set has a unique minimax ranking OPT^{d_KT}(1.5, π) = a ≻ b ≻ c, which gives k_{d_KT}(1.5, π) = 1. Table 1 lists the sets of possible ground truths and their minimax rankings[4] under different distance metrics.

Table 1: Application of the optimal voting rules on π.

Voting rule                              | Possible ground truths B_t^d(π)       | Output ranking
OPT^{d_KT}(1.5, π), OPT^{d_CY}(1, π)     | {a ≻ b ≻ c, a ≻ c ≻ b, b ≻ a ≻ c}     | a ≻ b ≻ c
OPT^{d_FR}(2, π), OPT^{d_MD}(1, π)       | {a ≻ b ≻ c, a ≻ c ≻ b}                | a ≻ b ≻ c, a ≻ c ≻ b

[2] We use MINIMAX_d(S) to denote a single ranking. Ties among multiple minimizers can be broken arbitrarily; our results are independent of the tie-breaking scheme.
[3] Scaling by the maximum distance is not a good way of comparing distance metrics; we do so for the sake of illustration only.

Note that even with identical (scaled) error bounds, different distance metrics lead to different sets of possible ground truths as well as different optimal rankings. This demonstrates that the choice of the distance metric is significant.

Upper Bound

Given a distance metric d, a profile π, and the guarantee that d(π, σ*) ≤ t, we can bound k(t, π) using the diameter of the set of possible ground truths B_t(π). For a set of rankings S ⊆ L(A), denote its diameter by D(S) = max_{σ, σ' ∈ S} d(σ, σ').

Lemma 1. (1/2) · D(B_t(π)) ≤ k(t, π) ≤ D(B_t(π)) ≤ 2t.

Proof. Let σ̂ = MINIMAX(B_t(π)). For rankings σ, σ' ∈ B_t(π), we have d(σ̂, σ) ≤ k(t, π) and d(σ̂, σ') ≤ k(t, π) by the definition of σ̂. By the triangle inequality, d(σ, σ') ≤ 2 k(t, π) for all σ, σ' ∈ B_t(π). Thus, D(B_t(π)) ≤ 2 k(t, π).
Next, the maximum distance of any σ ∈ B_t(π) from all rankings in B_t(π) is at most D(B_t(π)). Hence, the minimax distance k(t, π) = k(B_t(π)) cannot be greater than D(B_t(π)). Finally, let π = {σ_1, ..., σ_n}. For rankings σ, σ' ∈ B_t(π), the triangle inequality implies d(σ, σ') ≤ d(σ, σ_i) + d(σ_i, σ') for every i ∈ {1, ..., n}. Averaging over these inequalities, we get d(σ, σ') ≤ t + t = 2t for all σ, σ' ∈ B_t(π). Thus, we have D(B_t(π)) ≤ 2t, as required.

[4] Multiple rankings indicate a tie that can be broken arbitrarily.

Lemma 1 implies that k(t) = max_{π ∈ L(A)^n} k(t, π) ≤ 2t for all distance metrics and t > 0. In words:

Theorem 1. Given n noisy rankings at an average distance of at most t from an unknown true ranking σ* according to a distance metric d, it is always possible to find a ranking at distance at most 2t from σ* according to d.

Importantly, the bound of Theorem 1 is independent of the number of votes n. Most statistical models of social choice restrict profiles in two ways: (i) the average error should be low, because the probability of generating high-error votes is typically low, and (ii) the errors should be distributed almost evenly (in different directions from the ground truth),
which is why aggregating the votes works well. These assumptions are mainly helpful when n is large; that is, performance may be poor for small n (see, e.g., Caragiannis et al. 2013). In contrast, our model restricts profiles only by making the first assumption (explicitly), allowing voting rules to perform well as long as the votes are accurate on average, independently of the number of votes n. We also remark that Theorem 1 admits a simple proof, but the bound is nontrivial: while the average error of the profile is at most t (hence, the profile contains a ranking with error at most t), it is generally impossible to pinpoint a single ranking within the profile that has low error (say, at most t) with respect to the ground truth in the worst case (i.e., with respect to every possible ground truth in B_t(π)).

Lower Bounds

The upper bound of 2t (Theorem 1) is intuitively loose; we cannot expect it to be tight for every distance metric. However, we can prove a generic lower bound of (roughly speaking) t for a wide class of distance metrics. Formally, let d*(r) denote the greatest feasible distance under distance metric d that is less than or equal to r. k(t) is the minimax distance under some profile, and hence must be a feasible distance under d. Thus, we prove a lower bound of d*(t), the best possible bound of this form up to t.

Theorem 2. For a neutral distance metric d, k(t) ≥ d*(t).

Proof. For a ranking σ ∈ L(A) and r ≥ 0, let B_r(σ) denote the set of rankings at distance at most r from σ. Neutrality of the distance metric d implies |B_r(σ)| = |B_r(σ')| for all σ, σ' ∈ L(A) and r ≥ 0. In particular, d*(t) being a feasible distance under d implies that for every σ ∈ L(A), there exists some ranking at distance exactly d*(t) from σ. Fix σ* ∈ L(A). Consider the profile π consisting of n instances of σ*. It holds that B_t(π) = B_t(σ*). We want to show that the minimax distance satisfies k(B_t(σ*)) ≥ d*(t).
Suppose for contradiction that there exists some σ' ∈ L(A) such that all rankings in B_t(σ*) are at distance at most t' from σ', i.e., B_t(σ*) ⊆ B_{t'}(σ'), with t' < d*(t). Since there exists some ranking at distance d*(t) > t' from σ', we have B_t(σ*) ⊆ B_{t'}(σ') ⊊ B_t(σ'), which is a contradiction because |B_t(σ*)| = |B_t(σ')|. Therefore, k(t) ≥ k(t, π) ≥ d*(t).

The bound of Theorem 2 holds for all n, m > 0 and all t ∈ [0, D], where D is the maximum possible distance under d. It can be checked easily that the bound is tight given the neutrality assumption, which is an extremely mild (and in fact highly desirable) assumption for distance metrics over rankings. Note that since k(t) is a valid distance under d, the upper bound of 2t from Theorem 1 actually implies a possibly better upper bound of d*(2t). Thus, Theorems 1 and 2 establish d*(t) ≤ k(t) ≤ d*(2t) for a wide range of distance metrics d. For the four special distance metrics considered in this paper, we close this gap by establishing a tight lower bound of d*(2t), for a wide range of values of n and t.

Theorem 3. If d ∈ {d_KT, d_FR, d_MD, d_CY}, and the maximum distance allowed by the metric is D ∈ Θ(m^α), then there exists T ∈ Θ(m^α) such that:
1. For all t ≤ T and even n, we have k(t) ≥ d*(2t).
2. For all L, all t ≤ T with {2t} ∉ (1/L, 1 − 1/L), and odd n ∈ Θ(L · D), we have k(t) ≥ d*(2t).
Here, {x} = x − ⌊x⌋ denotes the fractional part of x ∈ R.

Theorem 3 is our main theoretical result. Here we only provide a proof sketch for the Kendall tau distance, which gives some of the key insights. The full proof for Kendall tau, as well as for the other distance metrics, can be found in the full version of the paper.[5]

Proof sketch for Kendall tau. Let d be the Kendall tau distance; thus, D = (m choose 2) and α = 2. First, we prove the case of even n. For a ranking τ ∈ L(A), let τ_rev be its reverse. Assume t = (1/2) · (m choose 2), and fix a ranking σ* ∈ L(A). Every ranking must agree with exactly one of σ* and σ*_rev on a given pair of alternatives. Hence, every ρ ∈ L(A) satisfies d(ρ, σ*) + d(ρ, σ*_rev) = (m choose 2).
Consider the profile π consisting of n/2 instances of σ* and n/2 instances of σ*_rev. Then, the average distance of every ranking from the rankings in π is exactly t, i.e., B_t(π) = L(A). It is easy to check that k(L(A)) = (m choose 2) = 2t = d*(2t), because every ranking has its reverse ranking in L(A) at distance exactly 2t.

Now, let us extend the proof to t ≤ (m/6)^2. If t < 0.5, then d*(2t) = 0, which is a trivial lower bound. Hence, assume t ≥ 0.5; thus, d*(2t) = ⌊2t⌋. We use Fermat's Polygonal Number Theorem (see, e.g., Heath 2012). A special case of this remarkable theorem states that every natural number can be expressed as the sum of at most three triangular numbers, i.e., numbers of the form (k choose 2). Let ⌊2t⌋ = Σ_{i=1}^{3} (m_i choose 2). It is easy to check that 0 ≤ m_i ≤ 2√t for all i ∈ {1, 2, 3}. Hence, Σ_{i=1}^{3} m_i ≤ 6√t ≤ m. Partition the set of alternatives A into four disjoint groups A_1, A_2, A_3, and A_4 such that |A_i| = m_i for i ∈ {1, 2, 3}, and |A_4| = m − Σ_{i=1}^{3} m_i. Let σ_{A_4} be an arbitrary ranking of the alternatives in A_4; consider the partial order P_A = A_1 ≻ A_2 ≻ A_3 ≻ σ_{A_4} over the alternatives in A. Note that a ranking ρ is an extension of P_A iff it ranks all alternatives in A_i before any alternative in A_{i+1} for i ∈ {1, 2, 3}, and ranks the alternatives in A_4 according to σ_{A_4}. Choose arbitrary σ_{A_i} ∈ L(A_i) for i ∈ {1, 2, 3}, and define

σ¹ = σ_{A_1} ≻ σ_{A_2} ≻ σ_{A_3} ≻ σ_{A_4},
σ² = σ_{A_1}^rev ≻ σ_{A_2}^rev ≻ σ_{A_3}^rev ≻ σ_{A_4}.

Note that both σ¹ and σ² are extensions of P_A. Once again, take the profile π consisting of n/2 instances of σ¹ and n/2 instances of σ². It is easy to check that a ranking disagrees with exactly one of σ¹ and σ² on every pair of alternatives that belong to the same group in {A_1, A_2, A_3}. Hence, every ranking ρ ∈ L(A) satisfies

d(ρ, σ¹) + d(ρ, σ²) ≥ Σ_{i=1}^{3} (m_i choose 2) = ⌊2t⌋.   (1)

[5] The full version is accessible from arielpro/papers.html
Clearly, equality is achieved in Equation (1) if and only if ρ is an extension of P_A. Thus, every extension of P_A has an average distance of ⌊2t⌋/2 ≤ t from π. Every ranking ρ that is not an extension of P_A achieves a strict inequality in Equation (1); thus, d(ρ, π) ≥ (⌊2t⌋ + 1)/2 > t. Hence, B_t(π) is the set of extensions of P_A. Given a ranking ρ ∈ L(A), consider the ranking in B_t(π) that reverses the partial orders over A_1, A_2, and A_3 induced by ρ. The distance of this ranking from ρ is at least Σ_{i=1}^{3} (m_i choose 2) = ⌊2t⌋, implying k(B_t(π)) ≥ ⌊2t⌋. (In fact, it can be checked that k(B_t(π)) = D(B_t(π)) = ⌊2t⌋.)

While we assumed that n is even, the proof can be extended to odd values of n by having one more instance of σ¹ than σ². The key idea is that with large n, the distance from the additional vote has little effect on the average distance of a ranking from the profile. Thus, B_t(π) is preserved, and the proof follows.

The impossibility result of Theorem 3 is weaker for odd values of n (in particular, covering more values of t requires larger n), which is reminiscent of the fact that repetition (error-correcting) codes achieve greater efficiency with an odd number of repetitions; this is not merely a coincidence. Indeed, an extra repetition allows differentiating between tied possibilities for the ground truth; likewise, an extra vote in the profile prevents us from constructing a symmetric profile that admits a diverse set of possible ground truths (see the full version of the paper for details).

Approximations for Unknown Average Error

In the previous sections we derived the optimal rules when the upper bound t on the average error is given to us. In practice, the given bound may be inaccurate. We know that using an estimate t̂ that is still an upper bound (t̂ ≥ t) yields a ranking at distance at most 2t̂ from the ground truth in the worst case. What happens if it turns out that t̂ < t?
We show that the output ranking is still at distance at most 4t from the ground truth in the worst case.

Theorem 4. For a distance metric d, a profile π consisting of n noisy rankings at an average distance of at most t from the true ranking σ*, and t̂ < t, we have d(OPT^d(t̂, π), σ*) ≤ 4t.

To prove the theorem, we make a detour through minisum rules. For a distance metric d, let MINISUM^d be the voting rule that always returns the ranking minimizing the sum of distances (equivalently, the average distance) from the rankings in the given profile according to d. Two popular minisum rules are the Kemeny rule for the Kendall tau distance (MINISUM^{d_KT}) and the minisum rule for the footrule distance (MINISUM^{d_FR}), which approximates the Kemeny rule (Dwork et al. 2001). For a distance metric d (dropped from the superscripts), let d(π, σ*) ≤ t. We claim that the minisum ranking MINISUM(π) is at distance at most min(2t, 2k(t, π)) from σ*. This is true because the minisum ranking and the true ranking are both in B_t(π), and Lemma 1 shows that the diameter of this set is at most min(2t, 2k(t, π)). Returning to the theorem, if we provide an underestimate t̂ of the true worst-case average error t, then using Lemma 1,

d(MINIMAX(B_t̂(π)), MINISUM(π)) ≤ 2t̂ ≤ 2t,
d(MINISUM(π), σ*) ≤ D(B_t(π)) ≤ 2t.

By the triangle inequality, d(MINIMAX(B_t̂(π)), σ*) ≤ 4t.

Experimental Results

We compare our worst-case optimal voting rules OPT^d against a plethora of voting rules used in the literature: plurality, Borda count, veto, the Kemeny rule, single transferable vote (STV), Copeland's rule, Bucklin's rule, the maximin rule, Slater's rule, Tideman's rule, and the modal ranking rule (for definitions see, e.g., Caragiannis et al. 2014). Our performance measure is the distance of the output ranking from the actual ground truth. In contrast, for a given d, OPT^d is designed to optimize the worst-case distance to any possible ground truth. Hence, crucially, OPT^d is not guaranteed to outperform other rules in our experiments.
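For small m, the minisum rule described above can be computed by direct enumeration. The following is a minimal sketch (function names are mine); with the Kendall tau distance it computes the Kemeny ranking:

```python
from itertools import permutations

def kt(r1, r2):
    """Kendall tau distance between two rankings (tuples, best to worst)."""
    pos2 = {a: i for i, a in enumerate(r2)}
    return sum(1 for i, a in enumerate(r1) for b in r1[i+1:] if pos2[a] > pos2[b])

def minisum(profile, dist=kt):
    """Return a ranking minimizing the sum (equivalently, the average) of
    distances to the votes; with dist=kt this is the Kemeny rule."""
    alts = sorted(profile[0])
    return min(permutations(alts),
               key=lambda s: sum(dist(v, s) for v in profile))

# The example profile from earlier in the paper; its Kemeny ranking is a > b > c.
profile = [('a', 'b', 'c')] * 2 + [('a', 'c', 'b'), ('b', 'a', 'c'), ('c', 'a', 'b')]
print(minisum(profile))  # ('a', 'b', 'c')
```

Because the minisum ranking attains the minimum average distance, it lies in every nonempty ball B_t̂(π), which is exactly the property the proof of Theorem 4 exploits.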
We use two real-world datasets containing ranked preferences in domains where ground truth rankings exist. Mao, Procaccia, and Chen (2013) collected these datasets (dots and puzzle) via Amazon Mechanical Turk. For dataset dots (resp., puzzle), human workers were asked to rank four images that contain different numbers of dots (resp., different states of an 8-puzzle) according to the number of dots (resp., the distances of the states from the goal state). Each dataset has four different noise levels (i.e., levels of task difficulty), represented using a single noise parameter: for dots (resp., puzzle), higher noise corresponds to ranking images with a smaller difference between their numbers of dots (resp., ranking states that are all farther away from the goal state). Each dataset has 40 profiles with approximately 20 votes each, for each of the 4 noise levels. Points in our graphs are averaged over the 40 profiles in a single noise level of a dataset.

First, as a sanity check, we verified (Figure 1) that the noise parameter in the datasets positively correlates with our notion of noise, the average error in the profile, denoted t* (averaged over all profiles in a noise level). Strikingly, the results from the two datasets are almost identical!

[Figure 1: Positive correlation of t* with the noise parameter. Panels: (a) dots, (b) puzzle; one curve per distance metric (KT, FR, MD, CY).]

Next, we compare OPT^d and MINISUM^d against the voting rules listed above, with distance d as the measure of error. We use the average error in a profile as the bound t given to OPT^d, i.e., we compute OPT^d(t*, π) on profile π where t* = d(π, σ*). While this is somewhat optimistic, note that t* may not be the (optimal) value of t that achieves the lowest error. Also, the experiments below show that a reasonable estimate of t* also suffices.
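The robustness experiment below feeds OPT^d an estimate t̂ rather than the true average error t*, normalized by the minimum average distance (MAD) of any ranking from the profile. A brute-force sketch of that normalization under the Kendall tau distance (names are mine; the mapping t̂ = MAD + r · (t* − MAD) is the inverse of the definition of r used in this section):

```python
from itertools import permutations

def kt(r1, r2):
    """Kendall tau distance between two rankings (tuples, best to worst)."""
    pos2 = {a: i for i, a in enumerate(r2)}
    return sum(1 for i, a in enumerate(r1) for b in r1[i+1:] if pos2[a] > pos2[b])

def avg_dist(profile, s, dist=kt):
    """Average distance of ranking s from the votes in the profile."""
    return sum(dist(v, s) for v in profile) / len(profile)

def estimate_from_r(profile, true_ranking, r, dist=kt):
    """Map the normalized parameter r to an error-bound estimate t_hat:
    r = 0 gives t_hat = MAD (the smallest bound admitting any ground truth),
    r = 1 gives t_hat = t*, the true average error of the profile."""
    alts = sorted(profile[0])
    mad = min(avg_dist(profile, s, dist) for s in permutations(alts))
    t_star = avg_dist(profile, true_ranking, dist)
    return mad + r * (t_star - mad)
```

Values r > 1 thus produce overestimates of t* (valid upper bounds), and values r < 1 produce underestimates, matching the two regimes studied below.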
[Figure 2: Performance of different voting rules (panels (a) dots and (b) puzzle), and of OPT with a varying estimate t̂ (panels (c) dots and (d) puzzle).]

Figures 2(a) and 2(b) show the results for the dots and puzzle datasets, respectively, under the Kendall tau distance. It can be seen that OPT^{d_KT} (solid red line) significantly outperforms all other voting rules. The three other distance metrics considered in this paper generate similar results; the corresponding graphs appear in the full version.

Finally, we test OPT^d in the more demanding setting where only an estimate t̂ of t* is provided. To synchronize the results across different profiles, we use r = (t̂ − MAD)/(t* − MAD), where MAD is the minimum average distance of any ranking from the votes in a profile, that is, the average distance of the ranking returned by MINISUM^d from the input votes. For all profiles, r = 0 implies t̂ = MAD (the smallest value that admits a possible ground truth), and r = 1 implies t̂ = t* (the true average error). In our experiments we use r ∈ [0, 2]; here, t̂ is an overestimate of t* for r ∈ (1, 2] (a valid upper bound on t*), but an underestimate of t* for r ∈ [0, 1) (an invalid upper bound on t*). Figures 2(c) and 2(d) show the results for the dots and puzzle datasets, respectively, for a representative noise level (level 3 in the previous experiments) and the Kendall tau distance. We can see that OPT^{d_KT} (solid red line) outperforms all other voting rules as long as t̂ is a reasonable overestimate of t* (r ∈ [1, 2]), but may or may not outperform them if t̂ is an underestimate of t*. Again, the other distance metrics generate similar results (see the full version for the details).

Discussion

Uniformly accurate votes. Motivated by crowdsourcing settings, we considered the case where the average error in the input votes is guaranteed to be low. Instead, suppose we know that every vote in the input profile π is at distance at most t from the ground truth σ*, i.e., max_{σ ∈ π} d(σ, σ*) ≤ t.
If t is small, this is a stronger assumption, because it means that there are no outliers, which is implausible in crowdsourcing settings but plausible if the input votes are expert opinions. In this setting, it is immediate that any vote in the given profile is at distance at most d*(t) from the ground truth. Moreover, the proof of Theorem 2 goes through, so this bound is tight in the worst case; however, returning a ranking from the profile is not optimal for every profile.

Randomization. We did not consider randomized rules, which may return a distribution over rankings. If we take the error of a randomized rule to be the expected distance of the returned ranking from the ground truth, it is easy to obtain an upper bound of t. Again, the proof of Theorem 2 can be extended to yield an almost matching lower bound of d*(t). While randomized rules provide better guarantees, they are often impractical: low error is only guaranteed when rankings are repeatedly selected from the output distribution of the randomized rule on the same profile; however, most social choice settings see only a single outcome realized.[6]

Complexity. A potential drawback of the proposed approach is its computational complexity. For example, consider the Kendall tau distance. When t is small enough, only the Kemeny ranking is a possible ground truth, and OPT^{d_KT}, or any finite approximation thereof, must return the Kemeny ranking if it is unique. The NP-hardness of computing the Kemeny ranking (Bartholdi, Tovey, and Trick 1989) therefore suggests that computing or approximating OPT^{d_KT} is NP-hard. That said, fixed-parameter tractable algorithms, integer programming solutions, and other heuristics are known to provide good performance for these types of computational problems in practice (see, e.g., Betzler et al. 2011; 2014).
More importantly, the crowdsourcing settings that motivate our work inherently restrict the number of alternatives to a relatively small constant: a human would find it difficult to effectively rank more than, say, 10 alternatives. With a constant number of alternatives, we can simply enumerate all possible rankings in polynomial time, making each and every computational problem considered in this paper tractable. In fact, this is what we did in our experiments. Therefore, we do not view computational complexity as an insurmountable obstacle.

[6] Exceptions include cases where randomization is used for circumventing impossibilities (Procaccia 2010; Conitzer 2008; Boutilier et al. 2012).
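The enumeration approach just described can be sketched as follows: a brute-force implementation of OPT^d for small m under the Kendall tau distance (names are mine). On the example profile from earlier in the paper, with t = 1.5, it returns the ranking a ≻ b ≻ c with minimax distance 1.

```python
from itertools import permutations

def kt(r1, r2):
    """Kendall tau distance between two rankings (tuples, best to worst)."""
    pos2 = {a: i for i, a in enumerate(r2)}
    return sum(1 for i, a in enumerate(r1) for b in r1[i+1:] if pos2[a] > pos2[b])

def opt(t, profile, dist=kt):
    """Worst-case optimal rule OPT^d by full enumeration of L(A).
    Returns the minimax ranking and its minimax distance."""
    rankings = list(permutations(sorted(profile[0])))
    avg = lambda s: sum(dist(v, s) for v in profile) / len(profile)
    # Ball of possible ground truths B_t(pi): rankings within average distance t.
    ball = [s for s in rankings if avg(s) <= t]
    # Minimax ranking: minimize the worst-case distance to a possible ground truth.
    best = min(rankings, key=lambda s: max(dist(s, g) for g in ball))
    return best, max(dist(best, g) for g in ball)

profile = [('a', 'b', 'c')] * 2 + [('a', 'c', 'b'), ('b', 'a', 'c'), ('c', 'a', 'b')]
ranking, k = opt(1.5, profile)  # ranking == ('a', 'b', 'c'), k == 1
```

Both loops range over all m! rankings, which is exactly why this approach is practical only for the small m typical of crowdsourcing tasks.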
Acknowledgements

Procaccia and Shah were partially supported by the NSF under grants IIS and CCF.

References

Arrow, K. 1951. Social Choice and Individual Values. John Wiley and Sons.
Azari Soufiani, H.; Parkes, D. C.; and Xia, L. 2012. Random utility theory for social choice. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS).
Azari Soufiani, H.; Parkes, D. C.; and Xia, L. 2013. Preference elicitation for general random utility models. In Proceedings of the 29th Annual Conference on Uncertainty in Artificial Intelligence (UAI).
Azari Soufiani, H.; Parkes, D. C.; and Xia, L. 2014. Computing parametric ranking models via rank-breaking. In Proceedings of the 31st International Conference on Machine Learning (ICML).
Barg, A., and Mazumdar, A. 2010. Codes in permutations and error correction for rank modulation. IEEE Transactions on Information Theory 56(7).
Bartholdi, J.; Tovey, C. A.; and Trick, M. A. 1989. Voting schemes for which it can be difficult to tell who won the election. Social Choice and Welfare 6.
Betzler, N.; Guo, J.; Komusiewicz, C.; and Niedermeier, R. 2011. Average parameterization and partial kernelization for computing medians. Journal of Computer and System Sciences 77(4).
Betzler, N.; Bredereck, R.; and Niedermeier, R. 2014. Theoretical and empirical evaluation of data reduction for exact Kemeny rank aggregation. Autonomous Agents and Multi-Agent Systems 28(5).
Boutilier, C.; Caragiannis, I.; Haber, S.; Lu, T.; Procaccia, A. D.; and Sheffet, O. 2012. Optimal social choice functions: A utilitarian view. In Proceedings of the 13th ACM Conference on Economics and Computation (EC).
Caragiannis, I.; Procaccia, A. D.; and Shah, N. 2013. When do noisy votes reveal the truth? In Proceedings of the 14th ACM Conference on Economics and Computation (EC).
Caragiannis, I.; Procaccia, A. D.; and Shah, N. 2014. Modal ranking: A uniquely robust voting rule. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI).
Conitzer, V., and Sandholm, T. 2005. Communication complexity of common voting rules. In Proceedings of the 6th ACM Conference on Economics and Computation (EC).
Conitzer, V.; Rognlie, M.; and Xia, L. 2009. Preference functions that score rankings and maximum likelihood estimation. In Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI).
Conitzer, V. 2008. Anonymity-proof voting rules. In Proceedings of the 4th International Workshop on Internet and Network Economics (WINE).
Dwork, C.; Kumar, R.; Naor, M.; and Sivakumar, D. 2001. Rank aggregation methods for the web. In Proceedings of the 10th International World Wide Web Conference (WWW).
Elkind, E.; Faliszewski, P.; and Slinko, A. 2010. Good rationalizations of voting rules. In Proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI).
Guruswami, V. 2004. List Decoding of Error-Correcting Codes. Springer.
Haeupler, B. 2014. Interactive channel capacity revisited. In Proceedings of the 55th Symposium on Foundations of Computer Science (FOCS). Forthcoming.
Heath, T. L. 2012. Diophantus of Alexandria: A study in the history of Greek algebra. CUP Archive.
Lee, J.; Kladwang, W.; Lee, M.; Cantu, D.; Azizyan, M.; Kim, H.; Limpaecher, A.; Yoon, S.; Treuille, A.; and Das, R. 2014. RNA design rules from a massive open laboratory. Proceedings of the National Academy of Sciences. Forthcoming.
Lu, T., and Boutilier, C. 2011. Robust approximation and incremental elicitation in voting protocols. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI).
Mallows, C. L. 1957. Non-null ranking models. Biometrika 44:114-130.
Mao, A.; Procaccia, A. D.; and Chen, Y. 2013. Better human computation through principled voting. In Proceedings of the 27th AAAI Conference on Artificial Intelligence (AAAI).
Procaccia, A. D.; Reddi, S. J.; and Shah, N. 2012. A maximum likelihood approach for selecting sets of alternatives. In Proceedings of the 28th Annual Conference on Uncertainty in Artificial Intelligence (UAI).
Procaccia, A. D.; Rosenschein, J. S.; and Kaminka, G. A. 2007. On the robustness of preference aggregation in noisy environments. In Proceedings of the 6th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS).
Procaccia, A. D. 2010. Can approximation circumvent Gibbard-Satterthwaite? In Proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI).
Xia, L., and Conitzer, V. 2011. A maximum likelihood approach towards aggregating partial orders. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI).
Xia, L.; Conitzer, V.; and Lang, J. 2010. Aggregating preferences in multi-issue domains by using maximum likelihood estimators. In Proceedings of the 9th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS).
Young, H. P. 1988. Condorcet's theory of voting. The American Political Science Review 82(4):1231-1244.