What is voting theory?

Voting theory, also called social choice theory, is the study of the design and evaluation of democratic voting methods (that's the activists' word; game theorists call them "voting mechanisms", engineers call them "electoral algorithms", and political scientists say "electoral formulas"). In other words, for a given list of candidates and voters, a voting method specifies a set of valid ways to fill out a ballot, and, given a valid ballot from each voter, produces an outcome.

(An "electoral system" includes a voting method, but also other implementation details, such as how the candidates and voters are validated, how often elections happen and for what offices, etc. "Voting system" is an ambiguous term that can refer to a full electoral system, just to the voting method, or even to the machinery for counting votes.)

Most voting theory limits itself to studying "democratic" voting methods. That typically has both empirical and normative implications. Empirically, "democratic" means:

  • There are many voters
  • There can be more than two candidates

In order to be considered "democratic", voting methods generally should meet various normative criteria as well. There are many possible such criteria, and on many of them theorists do not agree; but in general they do agree on this minimal set:

  • Anonymity: permuting the ballots does not change the probability of any election outcome.
  • Neutrality: permuting the candidates on all ballots does not change the probability of any election outcome.
  • Unanimity: If voters universally vote a preference for a given outcome over all others, that outcome is selected. (This is a weak criterion, and is implied by many other stronger ones; but those stronger ones are often disputed, while this one rarely is.)
  • Methods typically do not directly involve money changing hands or other enduring state-changes for individual voters. (There can be exceptions to this, but there are good reasons to want to understand "moneyless" elections.)

Why is voting theory important for rationalists?

First off, because democratic processes in the real world are important loci of power. That means that it's useful to understand the dynamics of the voting methods used in such real-world elections.

Second, because these real-world democratic processes have all been created and/or evolved in the past, and so there are likely to be opportunities to replace, reform, or add to them in the future. If you want to make political change of any kind over a medium-to-long time horizon, these systemic reforms should probably be part of your agenda. The fact is that FPTP, the voting method we use in most of the English-speaking world, is absolutely horrible, and there is reason to believe that reforming it would substantially (though not of course completely) alleviate much political dysfunction and suffering.

Third, because understanding social choice theory helps clarify ideas about how it's possible and/or desirable to resolve value disputes between multiple agents. For instance, if you believe that superintelligences should perform a "values handshake" when meeting, replacing each of their individual value functions by some common one so as to avoid the dead weight loss of a conflict, then social choice theory suggests both questions and answers about what that might look like. (Note that the ethical and practical importance of such considerations is not at all limited to "post-singularity" examples like that one.)

In fact, on that third point: my own ideas of ethics and of fun theory are deeply informed by my decades of interest in voting theory. To simplify into a few words my complex thoughts on this, I believe that voting theory elucidates "ethical incompleteness" (that is, that it's possible to put world-states into ethical preference order partially but not fully) and that this incompleteness is a good thing because it leaves room for fun even in an ethically unsurpassed world.

What are the branches of voting theory?

Generally, you can divide voting methods up into "single-winner" and "multi-winner". Single-winner methods are useful for electing offices like president, governor, and mayor. Multi-winner methods are useful for dividing up some finite, but to some extent divisible, resource, such as voting power in a legislature, between various options. Multi-winner methods can be further subdivided into seat-based (where a set of similar "seats" are assigned one winner each) or weighted (where each candidate can be given a different fraction of the voting power).

What are the basics of single-winner voting theory?

(Note: Some readers may wish to skip to the summary below, or to read the later section on multi-winner theory and proportional representation first. Either is valid.)

Some of the earliest known work in voting theory was by Ramon Llull before his death in 1315, but most of that was lost until recently. Perhaps a better place to start would be in the French Academy in the late 1700s; this allows us to couch it as a debate (American Chopper meme?) between Jean-Charles de Borda and Nicolas de Condorcet.

Condorcet: "Plurality (or 'FPTP', for First Past the Post) elections, where each voter votes for just one candidate and the candidate with the most votes wins, are often spoiled by vote-splitting."
Borda: "Better to have voters rank candidates, give candidates points for favorable rankings, and choose a winner based on points." (Borda Count)
Condorcet: "Ranking candidates, rather than voting for just one, is good. But your point system is subject to strategy. Everyone will rate some candidate they believe can't win in second place, to avoid giving points to a serious rival to their favorite. So somebody could win precisely because nobody takes them seriously!"
Borda: "My method is made for honest men!"
Condorcet: "Instead, you should use the rankings to see who would have a majority in every possible pairwise contest. If somebody wins all such contests, obviously they should be the overall winner."
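To make the two proposals concrete, here's a sketch (with a hypothetical five-voter election of my own construction) that computes both the Borda tally and the pairwise beats-all winner from the same ranked ballots; note that the two methods can disagree even with honest votes:

```python
# Hypothetical election: 3 voters rank A>B>C, 2 rank B>C>A.
ballots = [["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 2
candidates = ["A", "B", "C"]

# Borda count: with m candidates, a ballot gives m-1 points to its
# first choice, m-2 to its second, and so on down to 0.
borda = {c: 0 for c in candidates}
for ballot in ballots:
    for points, c in enumerate(reversed(ballot)):
        borda[c] += points

# Condorcet's proposal: check every pairwise contest.
def beats(x, y):
    """True if a majority of ballots rank x above y."""
    return sum(b.index(x) < b.index(y) for b in ballots) * 2 > len(ballots)

condorcet_winner = next(
    (c for c in candidates
     if all(beats(c, d) for d in candidates if d != c)),
    None)

print(borda)             # Borda totals: A=6, B=7 (Borda winner), C=2
print(condorcet_winner)  # pairwise beats-all winner: A
```

Here A wins every pairwise contest 3-2, but B's consistent second-place rankings give it the higher Borda total.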

In my view, Borda was the clear loser there. And most voting theorists today agree with me. The one exception is the mathematician Donald Saari, enamored with the mathematical symmetry of the Borda count. This is totally worth mentioning because his last name is a great source of puns.

But Condorcet soon realized there was a problem with his proposal too: it's possible for A to beat B pairwise, and B to beat C, while C still beats A. That is, pairwise victories can be cyclical rather than transitive. In naturally occurring elections this is rare; but if there's a decision between A and B, the voters who favor B might have the power to artificially create a "poison pill" amendment C which can beat A and then lose to B.

How would a Condorcet cycle occur? Imagine the following election:

1: A>B>C

1: B>C>A

1: C>A>B

(This notation means that there's 1 voter of each of three types, and that the first voter prefers A over B over C.) In this election, A beats B by 2 to 1, and similarly B beats C and C beats A.
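A quick sketch confirming the cycle in this three-ballot example:

```python
# One voter of each type, as in the example above.
ballots = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]
candidates = ["A", "B", "C"]

def prefer(x, y):
    """Number of ballots ranking x above y."""
    return sum(b.index(x) < b.index(y) for b in ballots)

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"{x} beats {y}, {prefer(x, y)} to {prefer(y, x)}")

# No candidate wins all of their pairwise contests, so there is
# no Condorcet winner in this election.
has_winner = any(
    all(prefer(c, d) > prefer(d, c) for d in candidates if d != c)
    for c in candidates)
print("Condorcet winner exists:", has_winner)  # False
```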

Fast-forward to 1950, when theorists at the RAND corporation were inventing game theory in order to reason about the possibility of nuclear war. One such scientist, Kenneth Arrow, proved that the problem that Condorcet (and Llull) had seen was in fact a fundamental issue with any ranked voting method. He posed 3 basic "fairness criteria" and showed that no ranked method can meet all of them:

  • Ranked unanimity: if every voter prefers X to Y, then the outcome has X above Y.
  • Independence of irrelevant alternatives: If every voter's preferences between some subset of candidates remain the same, the order of those candidates in the outcome will remain the same, even if other candidates outside the set are added, dropped, or changed.
  • Non-dictatorial: the outcome depends on more than one ballot.

Arrow's result was important in and of itself; intuitively, most people might have guessed that a ranked voting method could be fair in all those ways. But even more important than the specific result was the idea of an impossibility proof for voting.

Using this idea, it wasn't long until Gibbard and Satterthwaite independently came up with a follow-up theorem, showing that no voting system (ranked or otherwise) could possibly avoid creating strategic incentives for some voters in some situations. That is to say, there is no non-dictatorial voting system for more than two possible outcomes and more than two voters in which every voter has a single "honest" ballot that depends only on their own feelings about the candidates, such that they can't sometimes get a better result by casting a ballot that isn't their "honest" one.

There's another way that Arrow's theorem was an important foundation, particularly for rationalists. He was explicitly thinking about voting methods not just as real-world ways of electing politicians, but as theoretical possibilities for reconciling values. In this more philosophical sense, Arrow's theorem says something depressing about morality: if morality is to be based on (potentially revealed) preferences rather than interpersonal comparison of (subjective) utilities, it cannot simply be a democratic matter; "the greatest good for the greatest number" doesn't work without inherently-subjective comparisons of goodness. Amartya Sen continued exploring the philosophical implications of voting theory, showing that the idea of "private autonomy" is incompatible with Pareto efficiency.

Now, in discussing Arrow's theorem, I've said several times that it only applies to "ranked" voting systems. What does that mean? "Ranked" (also sometimes termed "ordinal" or "preferential") systems are those where valid ballots consist of nothing besides a transitive preferential ordering of the candidates (partial or complete). That is, you can say that you prefer A over B or B over A (or in some cases, that you like both of them equally), but you cannot say how strong each preference is, or provide other information that's used to choose a winner. In Arrow's view, the voting method is then responsible for ordering the candidates, picking not just a winner but a second place etc. Since neutrality wasn't one of Arrow's criteria, ties can be broken arbitrarily.

This excludes an important class of voting methods from consideration: those I'd call rated (or graded or evaluational), where you as a voter can give information about strength of preference. Arrow consciously excluded those methods because he believed (as Gibbard and Satterthwaite later confirmed) that they'd inevitably be subject to strategic voting. But since ranked voting systems are also inevitably subject to strategy, that isn't necessarily a good reason. In any case, Arrow's choice to ignore such systems set a trend; it wasn't until approval voting was reinvented around 1980 and score voting around 2000 that rated methods came into their own. Personally, for reasons I'll explain further below, I tend to prefer rated systems over purely ranked ones, so I think that Arrow's initial neglect of rated methods got the field off on a bit of a wrong foot.

And there's another way I feel that Arrow set us off in the wrong direction. His idea of reasoning axiomatically about voting methods was brilliant, but ultimately, I think the field has been too focused on this axiomatic "Arrovian" paradigm, where the entire goal is to prove certain criteria can be met by some specific voting method, or cannot be met by any method. Since it's impossible to meet all desirable criteria in all cases, I'd rather look at things in a more probabilistic and quantitative way: how often, and how badly, does a given system fail desirable criteria?

The person I consider to be the founder of this latter, "statistical" paradigm for evaluating voting methods is Warren Smith. Now, whereas Kenneth Arrow won the Nobel Prize, Warren Smith has to my knowledge never managed to publish a paper in a peer-reviewed journal. He's a smart and creative mathematician, but... let's just say, not exemplary for his social graces. In particular, he's not reluctant to opine in varied fields of politics where he lacks obvious credentials. So there are plenty of people in the academic world who'd just dismiss him as a crackpot, if they are even aware of his existence. This is unfortunate, because his work on voting theory is groundbreaking.

In his 2000 paper on "Range Voting" (what we'd now call Score Voting), he performed systematic utilitarian Monte-Carlo evaluation of a wide range of voting systems under a wide range of assumptions about how voters vote. In other words, in each of his simulations, he assumed certain numbers of candidates and of voters, as well as a statistical model for voter utilities and a strategy model for voters. Using the statistical model, he assigned each virtual voter a utility for each candidate; using the strategy model, he turned those utilities into a ballot in each voting method; and then he measured the total utility of the winning candidate, as compared to that of the highest-total-utility candidate in the race. Nowadays the name for the difference between these numbers, scaled so that the latter would be 100% and the average randomly-selected candidate would be 0%, is "Voter Satisfaction Efficiency" (VSE).

Smith wasn't the first to do something like this. But he was certainly the first to do it so systematically, across various voting methods, utility models, and strategic models. Because he did such a sensitivity analysis across utility and strategic models, he was able to see which voting methods consistently outperformed others, almost regardless of the specifics of the models he used. In particular, score voting, in which each voter gives each candidate a numerical score from a certain range (say, 0 to 100) and the highest total score wins, was almost always on top, while FPTP was almost always near the bottom.
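The overall shape of such a simulation can be sketched in a few lines. This toy version is my own construction, not Smith's code: it uses honest voters and i.i.d.-uniform "impartial culture" utilities, which is far cruder than his range of models, but it already reproduces the headline result that score beats FPTP:

```python
import random

random.seed(0)

def toy_vse(n_voters=99, n_cands=5, trials=1000):
    """Crude VSE estimate for honest FPTP vs honest score voting,
    with i.i.d. uniform utilities ("impartial culture")."""
    sums = {"fptp": 0.0, "score": 0.0, "best": 0.0, "random": 0.0}
    for _ in range(trials):
        # utils[v][c] = voter v's utility for candidate c
        utils = [[random.random() for _ in range(n_cands)]
                 for _ in range(n_voters)]
        social = [sum(u[c] for u in utils) for c in range(n_cands)]
        sums["best"] += max(social)
        sums["random"] += sum(social) / n_cands
        # FPTP: everyone votes for their honest favorite.
        tally = [0] * n_cands
        for u in utils:
            tally[u.index(max(u))] += 1
        sums["fptp"] += social[tally.index(max(tally))]
        # Score: honest ballots, normalized so each voter gives their
        # favorite the max score and their least favorite the min.
        score = [0.0] * n_cands
        for u in utils:
            lo, hi = min(u), max(u)
            for c in range(n_cands):
                score[c] += (u[c] - lo) / (hi - lo)
        sums["score"] += social[score.index(max(score))]
    spread = sums["best"] - sums["random"]
    # VSE scaling: 1.0 = always the max-utility winner, 0.0 = a
    # randomly selected candidate.
    return {m: (sums[m] - sums["random"]) / spread
            for m in ("fptp", "score")}

vse = toy_vse()
print(vse)  # score's VSE comes out above FPTP's
```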

More recently, I've done further work on VSE, using more-realistic voter and strategy models than what Smith had, and adding a variety of "media" models to allow varying the information on which the virtual voters base their strategizing. While this work confirmed many of Smith's results — for instance, I still consistently find that FPTP is lower than IRV is lower than approval is lower than score — it has unseated score voting as the undisputed highest-VSE method. Other methods with better strategy resistance can end up doing better than score.

Of course, something else happened in the year 2000 that was important to the field of single-winner voting theory: the Bush-Gore election, in which Bush won the state of Florida and thus the presidency of the USA by a microscopic margin of about 500 votes. Along with the many "electoral system" irregularities in the Florida election (a mass purge of the voter rolls of those with the same name as known felons, a confusing ballot design in Palm Beach, antiquated punch-card ballots with difficult-to-interpret "hanging chads", etc.) was one important "voting method" irregularity: the fact that Ralph Nader, a candidate whom most considered to be ideologically closer to Gore than to Bush, got far more votes than the margin between the two, leading many to argue that under almost any alternative voting method, Gore would have won. This, understandably, increased many people's interest in voting theory and voting reform. Like Smith, many other amateurs began to make worthwhile progress in various ways, progress which was often not well covered in the academic literature.

In the years since, substantial progress has been made. But we activists for voting reform still haven't managed to use our common hatred for FPTP to unite behind a common proposal. (The irony that our expertise in methods for reconciling different priorities into a common purpose hasn't let us do so in our own field is not lost on us.)

In my opinion, aside from the utilitarian perspective offered by VSE, the key to evaluating voting methods is an understanding of strategic voting; this is what I'd call the "mechanism design" perspective. I'd say that there are 5 common "anti-patterns" that voting methods can fall into: cases where voting strategy can lead to pathological results, or where pathological results can drive voting strategy. I'd pose them as a series of 5 increasingly-difficult hurdles for a voting method to pass. Because the earlier hurdles deal with situations that are more common or more serious, I'd say that if a method trips on an earlier hurdle, it doesn't much matter that it could have passed a later hurdle. Here they are:

(0. Dark Horse. As in Condorcet's takedown of Borda above, this is where a candidate wins precisely because nobody expects them to. Very bad, but not a serious problem in most voting methods, except for the Borda Count.)
1. Vote-splitting / "spoiled" elections. Adding a minor candidate causes a similar major candidate to lose. Very bad because it leads to rampant strategic dishonesty and in extreme cases 2-party dominance, as in Duverger's Law. Problematic in FPTP, resolved by most other voting methods.
2. Center squeeze. A centrist candidate is eliminated because they have lost first-choice support to rivals on both sides, so that one of the rivals wins, even though the centrist could have beaten either one of them in a one-on-one (pairwise) election. Though the direct consequences of this pathology are much less severe than those of vote-splitting, the indirect consequences of voters strategizing to avoid the problem would be exactly the same: self-perpetuating two-party dominance. This problem is related to failures of the "favorite betrayal criterion" (FBC). Problematic in IRV, resolved by most other methods.
3. Chicken dilemma (aka Burr dilemma, for Hamilton fans). Two similar candidates must combine strength in order to beat a third rival. But whichever of the two cooperates less will be the winner, leading to a game of "chicken" where both can end up losing to the rival. This problem is related to failures of the "later-no-harm" (LNH) criterion. Because LNH is incompatible with FBC, it is impossible to completely avoid the chicken dilemma without creating a center squeeze vulnerability, but systems like STAR voting or 3-2-1 minimize it.
4. Condorcet cycle. As above, a situation where, with honest votes, A beats B beats C beats A. There is no "correct" winner in this case, and so no voting method can really do anything to avoid getting a "wrong" winner. Luckily, in natural elections (that is, where bad actors are not able to create artificial Condorcet cycles by strategically engineering "poison pills"), this probably happens less than 5% of the time.

Note that there's a general pattern in the pathologies above: the outcome of honest voting and that of strategic voting are in some sense polar opposites. For instance, under honest voting, vote-splitting destabilizes major parties; but under strategic voting, it makes their status unassailable. This is a common occurrence in voting theory. And it's a reason that naive attempts to "fix" a problem in a voting system by adding rules can actually make the original problem worse.

(I wrote a separate article with further discussion of these pathologies.)

Here are a few of the various single-winner voting systems people favor, and a few (biased) words about the groups that favor them:

FPTP (aka plurality voting, or choose-one single-winner): Universally reviled by voting theorists, this is still favored by various groups who like the status quo in countries like the US, Canada, and the UK. In particular, incumbent politicians and lobbyists tend to be at best skeptical and at worst outright reactionary in response to reformers.

IRV (Instant runoff voting), aka Alternative Vote or RCV (Ranked Choice Voting... I hate that name, which deliberately appropriates the entire "ranked" category for this one specific method): This is a ranked system where, to start with, only first-choice votes are tallied. To find the winner, you successively eliminate the last-place candidate, transferring those votes to their next surviving preference (if any), until some candidate has a majority of the votes remaining. It's supported by FairVote, the largest electoral reform nonprofit in the US, which grew out of the movement for STV proportional representation (see the multi-winner section below for more details). IRV supporters tend to think that discussing its theoretical characteristics is a waste of time, since it's so obvious that FPTP is bad and since IRV is the reform proposal with by far the longest track record and most well-developed movement behind it. Insofar as they do consider theory, they favor the "later-no-harm" criterion, and prefer to ignore things like the favorite betrayal criterion, summability, or spoiled ballots. They also don't talk about the failed Alternative Vote referendum in the UK.
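The elimination-and-transfer loop is simple to sketch. This minimal version is my own illustration (real IRV rules add tie-breaking and ballot-exhaustion details), and the hypothetical ballots also demonstrate the center-squeeze pattern described above, with a compromise candidate C:

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff: repeatedly eliminate the candidate with the
    fewest first-choice votes, transferring those ballots to their
    next surviving preference, until someone has a majority."""
    remaining = {c for b in ballots for c in b}
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots if b)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()) or len(remaining) == 1:
            return leader
        loser = min(remaining, key=lambda c: tally.get(c, 0))
        remaining.discard(loser)
        ballots = [[c for c in b if c != loser] for b in ballots]

# Center squeeze: C would beat A pairwise 5-4 and B pairwise 6-3,
# but has the fewest first choices, is eliminated first, and A wins.
ballots = ([["A", "C", "B"]] * 4
           + [["B", "C", "A"]] * 3
           + [["C", "A", "B"]] * 2)
print(irv_winner(ballots))  # A
```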

Approval voting: This is the system where voters can approve (or not) each candidate, and the candidate approved by the most voters wins. Because of its simplicity, it's something of a "Schelling point" for reformers of various stripes; that is, a natural point of agreement as an initial reform for those who don't agree on which method would be an ideal end state. This method was used in Greek elections from about 1860 to 1920, but was not "invented" as a subject of voting theory until the late 1970s by Brams and Fishburn. It can be seen as a simplistic special case of many other voting methods, in particular score voting, so it does well on Warren Smith's utilitarian measures, and fans of his work tend to support it. This is the system promoted by the Center for Election Science (electology.org), a voting reform nonprofit that was founded in 2012 by people frustrated with FairVote's anti-voting-theory tendencies. (Full disclosure: I'm on the board of the CES, which is growing substantially this year due to a significant grant by the Open Philanthropy Project. Thanks!)

Condorcet methods: These are methods that are guaranteed to elect a pairwise beats-all winner (Condorcet winner) if one exists. Supported by people like Erik Maskin (a Nobel prize winner in economics here at Harvard; brilliant, but seemingly out of touch with the non-academic work on voting methods), and Markus Schulze (a capable self-promoter who invented a specific Condorcet method and has gotten groups like Debian to use it in their internal voting). In my view, these methods give good outcomes, but the complications of resolving cycles spoil their theoretical cleanness, while the difficulty of reading a matrix makes presenting results in an easy-to-grasp form basically impossible. So I personally wouldn't recommend these methods for real-world adoption in most cases. Recent work in "improved" Condorcet methods has shown that these methods can be made good at avoiding the chicken dilemma, but I would hate to try to explain that work to a layperson.

Bucklin methods (aka median-based methods; especially, Majority Judgment): Based on choosing a winner with the highest median rating, just as score voting is based on choosing one with the highest average rating. Because medians are more robust to outliers than averages, median methods are more robust to strategy than score. Supported by French researchers Balinski and Laraki, these methods have an interesting history in the progressive-era USA. Their VSE is not outstanding though; better than IRV, plurality, and Borda, but not as good as most other methods.
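A minimal highest-median sketch (hypothetical 0-5 grades of my own invention; Majority Judgment's tie-breaking procedure for equal medians is omitted):

```python
import statistics

# Hypothetical 0-5 grades from seven voters.  A has the higher
# total (so A would win score voting), but B has the higher median.
grades = {
    "A": [5, 5, 5, 1, 1, 0, 0],   # polarizing: total 17, median 1
    "B": [2, 2, 2, 2, 2, 2, 2],   # broadly acceptable: total 14, median 2
}

medians = {c: statistics.median(g) for c, g in grades.items()}
winner = max(medians, key=medians.get)
print(medians, winner)  # B wins on median
```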

Delegation-based methods, especially SODA (simple optionally-delegated approval): It turns out that this kind of method can actually do the impossible and "avoid the Gibbard-Satterthwaite theorem in practice". The key words there are "in practice" — the proof relies on a domain restriction, in which voters' honest preferences all agree with their favorite candidate, and these preference orders are non-cyclical, and voters mutually know each other to be rational. Still, this is the only voting system I know of that's 100% strategy free (including chicken dilemma) in even such a limited domain. (The proof of this is based on complicated arguments about convexity in high-dimensional space, so Saari, it doesn't fit here.) Due to its complexity, this is probably not a practical proposal, though.

Rated runoff methods (in particular STAR and 3-2-1): These are methods where rated ballots are used to reduce the field to two candidates, who are then compared pairwise using those same ballots. They combine the VSE advantages of score or approval with extra resistance to the chicken dilemma. These are currently my own favorites as ultimate goals for practical reform, though I still support approval as the first step.
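A sketch of the STAR ("Score Then Automatic Runoff") procedure, using hypothetical 0-5 ballots of my own invention:

```python
# Hypothetical 0-5 score ballots: each maps candidate -> score.
ballots = ([{"A": 5, "B": 4, "C": 0}] * 4
           + [{"A": 0, "B": 4, "C": 5}] * 3
           + [{"A": 0, "B": 5, "C": 2}] * 2)
candidates = ["A", "B", "C"]

# Step 1: the two highest-scoring candidates advance.
totals = {c: sum(b[c] for b in ballots) for c in candidates}
finalists = sorted(candidates, key=totals.get, reverse=True)[:2]

# Step 2: automatic runoff using the same ballots -- each ballot
# counts for whichever finalist it scored higher (ties abstain).
x, y = finalists
winner = x if (sum(b[x] > b[y] for b in ballots)
               > sum(b[y] > b[x] for b in ballots)) else y
print(totals, finalists, winner)
```

The runoff step is what blunts the chicken dilemma: withholding points from a similar rival still counts as a full vote for them in the runoff if they make the top two.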

Quadratic voting: Unlike all the methods above, this is based on the universal solvent of mechanism design: money (or other finite transferable resources). Voters can buy votes, and the cost for n votes is proportional to n². This has some excellent characteristics with honest voters, and so I've seen that various rationalists think it's a good idea; but in my opinion, it's got irresolvable problems with coordinated strategies. I realize that there are responses to these objections, but as far as I can tell every problem you fix with this idea leads to two more.

In summary:
  • Plurality voting is really bad. (Borda count is too.)
  • Arrow's theorem shows no ranked voting method is perfect.
  • Gibbard-Satterthwaite theorem shows that no voting method, ranked or not, is strategy-free in all cases.
  • Rated voting methods such as approval or score can get around Arrow, but not Gibbard-Satterthwaite.
  • Utilitarian measures, known as VSE, are one useful way to evaluate voting methods.
  • Another way is mechanism design. There are (1+)4 voting pathologies to worry about. Starting from the most important and going down: (Dark horse rules out Borda;) vote-splitting rules out plurality; center squeeze would rule out IRV; chicken dilemma argues against approval or score and in favor of rated runoff methods; and Condorcet cycles mean that even the best voting methods will "fail" in a few percent of cases.

What are the basics of multi-winner voting theory?

Multi-winner voting theory originated under parliamentary systems, where theorists wanted a system to guarantee that seats in a legislature would be awarded in proportion to votes. This is known as proportional representation (PR, prop-rep, or #PropRep). Early theorists include Henry Droop and Charles Dodgson (Lewis Carroll). We should also recognize Thomas Jefferson and Daniel Webster's work on the related problem of apportioning congressional seats across states.

Because there are a number of seats to allocate, it's generally easier to get a good answer to this problem than in the case of single-winner voting. It's especially easy in the case where we're allowed to give winners different voting weights; in that case, a simple chain of delegated voting weight guarantees perfect proportionality. (This idea has been known by many names: Dodgson's method, asset voting, delegated proxy, liquid democracy, etc. There are still some details to work out if there is to be a lower bound on final voting weight, but generally it's not hard to find ways to resolve those.)

When seats are constrained to be equally-weighted, there is inevitably an element of rounding error in proportionality. Generally, for each kind of method, there are two main versions: those that tend to round towards smaller parties (Sainte-Laguë, Webster, Hare, etc.) and those that tend to round towards larger ones (D'Hondt, Jefferson, Droop, etc.).
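Both families can be implemented as "highest next quotient" rules; only the divisor sequence differs. A sketch with made-up vote totals:

```python
def divisor_apportion(votes, seats, divisor):
    """Allocate seats one at a time to the party with the highest
    next quotient, votes / divisor(seats already won)."""
    seats_won = {p: 0 for p in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / divisor(seats_won[p]))
        seats_won[best] += 1
    return seats_won

# Hypothetical vote totals for three parties contesting 10 seats.
votes = {"Big": 53000, "Mid": 31000, "Small": 16000}

# D'Hondt divisors 1, 2, 3, ... round toward larger parties;
# Sainte-Lague divisors 1, 3, 5, ... round toward smaller ones.
dhondt = divisor_apportion(votes, 10, lambda s: s + 1)
sainte_lague = divisor_apportion(votes, 10, lambda s: 2 * s + 1)
print(dhondt)        # Big gets 6 of 10 seats on 53% of the vote
print(sainte_lague)  # closer to the ideal 5.3 / 3.1 / 1.6 split
```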

Most abstract proportional voting methods can be considered as greedy methods to optimize some outcome measure. Non-greedy methods exist, but algorithms for finding non-greedy optima are often considered too complex for use in public elections. (I believe that these optimization problems are NP-hard in many cases, but fast algorithms to find provably-optimal outcomes in all practical cases usually exist. But most people don't want to trust voting to algorithms that nobody they know actually understands.)

Basically, the outcome measures being implicitly optimized are either "least remainder" (as in STV, single transferable vote), or "least squares" (not used by any real-world system, but proposed in Sweden in the 1890s by Thiele and Phragmén). STV's greedy algorithm is based on elimination, which can lead to problems, as with IRV's center-squeeze. A better solution, akin to Bucklin/median methods in the single-winner case, is BTV (Bucklin transferable vote). But the difference is probably not a big enough deal to overcome STV's advantage in terms of real-world track record.

Both STV and BTV are methods that rely on reweighting ballots when they help elect a winner. There are various reweighting formulas that each lead to proportionality in the case of pure partisan voting. This leads to an explosion of possible voting methods, all theoretically reasonable.
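Here's a minimal sketch of that reweighting idea, STV-style. This is my own simplification (Droop quota, fractional surplus transfers, no tie-breaking or exhausted-ballot handling); with pure party-bloc ballots it lands on the proportional 2-2 split:

```python
from collections import defaultdict

def stv(ballots, seats):
    """Minimal STV sketch: Droop quota, fractional surplus transfers,
    last-place elimination.  Real-world rules add many details."""
    weighted = [(1.0, list(b)) for b in ballots]
    quota = len(ballots) // (seats + 1) + 1
    hopeful = {c for b in ballots for c in b}
    elected = []
    while len(elected) < seats:
        tally = defaultdict(float)
        first = {}  # which hopeful each ballot currently counts for
        for i, (w, b) in enumerate(weighted):
            prefs = [c for c in b if c in hopeful]
            if prefs:
                first[i] = prefs[0]
                tally[prefs[0]] += w
        top = max(hopeful, key=lambda c: tally[c])
        if tally[top] >= quota or len(hopeful) <= seats - len(elected):
            # Elect, then reweight the ballots that counted for the
            # winner so that one quota's worth of weight is used up.
            elected.append(top)
            hopeful.discard(top)
            keep = max(0.0, (tally[top] - quota) / tally[top]) if tally[top] else 0.0
            weighted = [(w * keep, b) if first.get(i) == top else (w, b)
                        for i, (w, b) in enumerate(weighted)]
        else:
            loser = min(hopeful, key=lambda c: tally[c])
            hopeful.discard(loser)
    return elected

# Party-bloc example: 60% X-party voters, 40% Y-party, 4 seats.
ballots = [["X1", "X2", "X3"]] * 60 + [["Y1", "Y2"]] * 40
winners = stv(ballots, 4)
print(winners)  # two X candidates and two Y candidates
```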

Because the theoretical pros and cons of various multi-winner methods are much smaller than those of single-winner ones, the debate tends to focus on practical aspects that are important politically but that a mathematician would consider trivial or ad hoc. Among these are:

  • The role of parties. For instance, STV makes partisan labels formally irrelevant, while list proportional methods (widely used, but the best example system is Bavaria's MMP/mixed member proportional method) put parties at the center of the decision. STV's non-partisan nature helped it get some traction in the US in the 1920s-1960s, but the main remnant of that is Cambridge, MA (which happens to be where I'm sitting). (The other remnant is that former STV advocates were key in founding FairVote in the 1990s and pushing for IRV after the 2000 election.) Political scientist @jacksantucci is the expert on this history.
  • Ballot simplicity and precinct summability. STV requires voters to rank candidates, and then requires keeping track of how many ballots of each type there are, with the number of possible types exceeding the factorial of the number of candidates. In practice, that means that vote-counting must be centralized, rather than being performed at the precinct level and then summed. That creates logistical hurdles and fraud vulnerabilities. Traditionally, the way to resolve this has been list methods, including mixed methods with lists in one part. Recent proposals for delegated methods such as my PLACE voting (proportional, locally-accountable, candidate endorsement; here's an example) provide another way out of the bind.
  • Locality. Voters who are used to FPTP (plurality in single-member districts) are used to having "their local representative", while pure proportional methods ignore geography. If you want both locality and proportionality, you can either use hybrid methods like MMP, or biproportional methods like LPR, DMP, or PLACE.
  • Breadth of choice. Ideally, voters should be able to choose from as many viable options as possible, without overwhelming them with ballot complexity. My proposal of PLACE is designed to meet that ideal.

Prop-rep methods would solve the problem of gerrymandering in the US. I believe that PLACE is the most viable proposal in that regard: it maintains the locality and ballot simplicity of the current system, is relatively non-disruptive to incumbents, and maximizes breadth of voter choice to help increase turnout.

Oh, I should also probably mention that I was the main designer, in collaboration with dozens of commenters on the website Making Light, of the proportional voting method E Pluribus Hugo, which is now used by the Hugo Awards to minimize the impact and incentives of bloc voting in the nominations phase.

Anticlimactic sign-off

OK, that's a long article, but it does a better job of brain-dumping my >20 years of interest in this topic than anything I've ever written. On the subject of single-winner methods, I'll be putting out a playable exploration version of all of this sometime this summer, based on the work of the invaluable nicky case (as well as other collaborators).

I've now added a third article on this topic, in which I included a paragraph at the end asking people to contact me if they're interested in activism on this. I believe this is a viable target for effective altruism.

99 comments

As the author, I think this has generally stood the test of time pretty well. There are various changes I'd make if I were doing a rewrite today; but overall, these are minor.

Aside from those generally-minor changes, I think that the key message of this piece remains important to the purpose of Less Wrong. That is to say: making collective decisions, or (equivalently) statements about collective values, is a tough problem; it's important for rationalists; and studying existing theory on this topic is useful.

Here are the specific changes I'd make if I were going to rewrite this today:

  • The most significant change is that I'd probably start off with multi-winner voting theory, because I've come to believe that it is clearly more important than single-winner theory overall.
  • I would avoid quick-decaying cultural references such as the "American Chopper" meme. There are also a few passages where the language is noticeably awkward and could be rewritten for clarity.
  • Unlike my attitude in this piece, I've come to accept, although not to like, the "Ranked Choice Voting" terminology.
  • I am no longer on the board of the CES; my term ended. I still en
... (read more)
Congratulations on finishing your doctorate! I'm very much looking forward to the next post in the sequence on multi-winner methods, and I'm especially interested in the metric you mention.
2Ben Pace

I think this post should be included in the best posts of 2018 collection. It does an excellent job of balancing several desirable qualities: it is very well written, being both clear and entertaining; it is informative and thorough; and it is in the style of argument preferred on LessWrong, by which I mean it makes use of both theory and intuition in the explanation.

This post adds to the greater conversation by displaying rationality of the kind we are pursuing directed at a big societal problem. A specific example of what I mean, which distinguishes this post from an overview that any motivated poster might write, is the inclusion of Warren Smith's results; Smith is a mathematician from an unrelated field who has no published work on the subject. But he did the work anyway, and it was good work, which the author himself expanded on, and now we get to benefit from it through this post. This puts me very much in mind of the fact that this community was primarily founded by an autodidact who was deeply influenced by a physicist writing about probability theory.

A word on one of our sacred taboos: in the beginning it was written that Politics is the Mindkiller, and so it was for years ... (read more)

  Being about political mechanisms rather than object-level disagreements helps a lot.  Even though political mechanisms are themselves an object-level disagreement for monarchists vs republicans.

I haven't mentioned futarchy ("markets for predictions, votes for values") at all here. Futarchy, of course, would not be classed as a voting method in my typology; it's something bigger even than my concept of a voting system. I think the main useful thing I can say about futarchy is: if you're going to spend energy on it, you should spend some energy on understanding lower-level voting methods too.

ETA: I guess I think I can expand on that a bit. The issue is that when you embed one system inside another one, there's some inevitable bleed-over in terms of incentives. For this meta-reason, as well as for the simple reason of direct applicability, one lesson from voting theory that I think would apply to futarchy is that "patching a problem can make it worse, because strategic outcomes can be the 'opposite' of unstrategic ones." That's an intuition that for me was hard-won, and I wouldn't expect it to be easy for you to learn it just by hearing it. If that's true and you still want to learn it, spend some time thinking about honest and strategic outcomes in various voting methods in the five levels of pathology I mentioned above. The playable exploration I mention at the end will substantially focus on these ideas, in a way I hope will be visually and interactively easy to digest.

Most elections aren't big. Most elections are about questions like a random group electing their chairman. It's a lot easier to convince a random group to use a different voting system than to change the way voting is done on a national election.

I think the most likely way to create change in the national way of voting is to first change the way smaller groups run their elections.

Your post doesn't really analyze the desirable features of an electoral system, and I think that's a common failure mode for people who think theoretically about voting systems without any practical involvement in implementing them.

Running elections costs time, and given that most groups need to elect their chairman every year, minimizing the time it takes to run an election is desirable.

The role of an election is not just about finding the optimal candidate but about establishing that the candidate was picked in a trustworthy way. This means that "our tech person runs a server and we trust the results of the voting software" is not a satisfactory strategy. Complex counting rules also make fraud easier.

8Jameson Quinn
This is a theory post. I post activism stuff elsewhere. (Anybody reading this in Somerville, MA who's interested in activism on this, get in touch with me via gmail. Obvious address, both names. Also goes for other parts of US, or for BC, less urgently.)

Exciting stuff! It would not shock me in the least for this to be a common point-of-reference for various rationalist experiments (i.e. houses, startups, etc). I have a bunch of questions:

1) Do these voting systems have anything to say about voting for referendums, or is the literature constrained to candidates? Is there any sense in which a batch of competing referendums would not be treated the same as single-winner races?

2) You mention the practical concerns of laypeople being able to grasp the process and counting the votes a few times - is there a body of research that focuses on these elements, or is this work being done by the Center for Election Science?

3) You mention a Condorcet method being used by Debian in internal voting - is there a sense of which methods are suitable for use in organizational voting, as distinct from political voting? That is, are some of them unusually sensitive to scale? Further, how is deployment in an organization weighed as evidence regarding deployment in elections?

4) Can you recommend a source which discusses voting theory from the perspective of information aggregation, in the same way as has been done for markets?

4Jameson Quinn
1) In most cases, the voting method options when voting on competing (non-sapient) plans are the same as those for candidates. In fact, as I said, Arrow's theorem and Sen's theorem were originally posed as being about voting over world-states rather than candidates. And approval voting, with a 50% threshold, has been used various times in practice for voting on incompatible ballot initiatives. The exception to this rule is when voting methods use delegation in some way (such as "liquid democracy", SODA, PLACE, and to a much lesser extent 3-2-1). Obviously, these methods require sapient candidates, or at least some kind of impartial divergence measure over candidates.

2) As I hinted, I think that the academic literature on this tends to focus more on the axiomatic/Arrovian paradigm than it should. I suspect that there is some political science research that relates, but aside from a few simple results on spoiled ballots under IRV (they go up) I'm not familiar with it.

3) Organizations are probably more able to tolerate "complicated" voting methods — especially organizations of "nerds", such as Debian or the Hugo awards. But my intuition in this area is based on anecdotes, not solid research.

4) Hmm... I'll have to think about that one; I'll get back to you.

I might’ve missed this, as I didn’t read some of the sections of this post quite as closely as I could have, but here’s a question which, I think, is critical:

Has there been any research aimed at determining which voting method is easiest to understand (for people who aren’t specialists in the field[1], or, in other words, for the general public)? (And, by extension: how hard is each method to understand? How do they compare, in this regard?)

Comprehensibility by the general public seems to me to be a sine qua non of any voting method we’re to consider for adoption into the political process.

[1] Or, as we might say, “voting theory nerds”.

In particular, one could ask: Do people take advantage of the options in the voting systems that they do have? To what extent do Australians make use of ranked choice? I don't know. It appears to me that most British Labour Party members,† faced with a slate of 5 candidates for PM, restrict their consideration to viable candidates and don't take advantage of their opportunity to express their preferences, even though the party goes out of its way to nominate diverse candidates.

† Labour Party members are a small group, more like activists than American primary voters, who are, in turn, more involved than general election voters. The Party is trying to move in the American direction, but hasn't moved far. The Tories don't have primaries at all.
3Jameson Quinn
There has definitely been attention to this question. All of the proposals I support in practice (Approval, 3-2-1, or STAR for single-winner; and PLACE for multi-winner) are among the simpler to vote and to explain. But most of the research is in the form of proprietary focus groups or polling, so unfortunately there's no good source I can point you to. I'm working to change that.
2Said Achmiz
How does the comprehensibility of these methods compare with that of the system we have now?
6Jameson Quinn
I'm assuming that "we" is the USA, and "the system" is FPTP (that is, you're ignoring the electoral college and the system of primaries and redistricting). Approval is just as easy to understand, and in fact it's easier to avoid ballot spoilage. 3-2-1 and STAR are both a bit more complicated, but not much; either one can still be distilled down to 9 words. PLACE is significantly more complicated for vote-counting, but casting a ballot is still just about as simple as FPTP. In fact, if you take into account the fact that you are freer to ignore strategic concerns, it's arguably simpler.

I've been moderately hard on IRV, and its supporters, in the above. I want to state here for the record that though IRV is my least-favorite serious* reform proposal, I think that it would solve a substantial fraction of the voting-system dysfunction of FPTP; that if IRV were more of a focus of this essay than it is, there are other decent pro-IRV arguments I could include; and that there are IRV supporters whose intellect and judgement I respect even as I disagree with them on this.

*That excludes Borda. #SorryNotSaari

I've curated this post for these reasons:

  • This is an opinionated introduction to a piece of math-based social theory.
  • The two followups on Moloch and Alignment also helped flesh out the key intuitions.
  • It's written by someone with a lot of expertise in the field.

My biggest hesitation(s) with curating this:

  • I didn't find it practical (yet). In my head voting theory is still its own disconnected thing from the ways I think about solving social or utilitarian problems in my life.
  • It's quite long.

Overall though the opinionated style of the writi... (read more)

1Jameson Quinn
What would I have to do to make this a sequence?
3Ben Pace
Go to the library page, scroll down to 'community sequences', and hit the button 'new sequence'. You need an image for the header/icon.

I'm quite interested in voting systems, but I was surprised to discover that the general consensus is that score beats approval! I checked it out and it seems to be a robust finding that in real life people understand & are happier with score, but this surprised me. 

I'd think that since there are so many options for score, it'd be a bit overwhelming and hard to figure out how to optimize.  Whereas with approval it's basically "vote for the minor candidates you like better than the major ones; and also vote for your least unfavorite major cand... (read more)

The trouble with Score is that optimal voter strategy there is to min-max your ratings -- score(max) everyone you'd accept and score(min) everyone else -- which would make it functionally equivalent to Approval if everyone did so; however, since not everyone will, voters who use the full score range are just voluntarily, albeit unwittingly, diluting their ballot power to determine the actual winner vs. voters who use min-max strategy. E.g., suppose your favorite is a minor-party longshot, and your second choice is a major-party frontrunner; you might naively rate your favorite 5/5 and your second 4/5, but that doesn't much help your favorite actually win since they're a longshot, and it nerfs the support you give to your second who stands a fair shot at winning, so why pull that proverbial punch? It's more effective, and more likely to maximize your chances of a satisfactory outcome, to just rate them both 5/5.

Moreover, suppose you rate your worst-evil candidate 0/5 and your lesser-evil 1/5; sure, at least lesser-evil isn't That Guy, but you really don't want either to win, so why give them any support at all that might help them edge out someone you like better? It's more effective to just rate them both 0/5.

STAR addresses that problem by giving voters a compelling reason to use the full score range, as the summed/average scores don't determine the final winner, only the top-two runoff finalists, and then the finalist rated higher on more ballots wins, thereby making relative scoring relevant. Attempting insincere strategy here is about as likely to backfire as succeed, so voters might as well just rate each candidate sincerely using the full score range available.
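As a concrete sketch of the mechanics described above (score totals pick the two finalists, then relative preferences decide the runoff), here is a minimal STAR tally; the dict-based ballot format is an assumption for illustration, not any official implementation:

```python
from collections import Counter

def star_winner(ballots):
    """STAR (Score Then Automatic Runoff) sketch.
    ballots: list of {candidate: score in 0-5} dicts."""
    totals = Counter()
    for b in ballots:
        totals.update(b)  # sum scores per candidate
    # The two highest-scoring candidates advance to the runoff.
    (a, _), (c, _) = totals.most_common(2)
    # Runoff: the finalist rated higher on more ballots wins
    # (a tie goes to the higher-scoring finalist).
    a_pref = sum(b.get(a, 0) > b.get(c, 0) for b in ballots)
    c_pref = sum(b.get(c, 0) > b.get(a, 0) for b in ballots)
    return a if a_pref >= c_pref else c

# Y has the higher score total (13 vs 10), but X is preferred on more ballots:
ballots = [{'X': 5, 'Y': 4}, {'X': 5, 'Y': 4}, {'X': 0, 'Y': 5}]
print(star_winner(ballots))  # X
```

This shows why the 5-vs-4 rating in the first two ballots isn't a wasted distinction under STAR: it decides the runoff even though it lowered X's total.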

There are two things I would add to this:

  1. The other big disadvantage of Borda is teaming, where nominating more candidates makes a party more likely to win (opposite of vote-splitting).

  2. It seems to me that the advantage of party-less systems in multi-winner elections is that, if your voting system succeeds so well that parties no longer play such a big role, you don't have to later change it to be party-less to complete the transition. :)

...also I was going to ask a question about rated-runoff methods (i.e., did anyone consider the most obvious... (read more)

Thanks for writing this! Very informative.

I saw a paper (which I don't remember enough about to find, unfortunately) in which a bunch of voting theorists got together and voted on the best voting method, using approval voting (I think?), and the winner was approval voting. I wonder if anyone also saw this and saved it so they can tell me how to find it?


There's another way that Arrow's theorem was an important foundation, particularly for rationalists. He was explicitly thinking about voting methods not just as real-world ways of electing politicians, but as theoretical possibilities for reconciling values. In this more philosophical sense, Arrow's theorem says something depressing about morality: if morality is to be based on (potentially revealed) preferences rather than interpersonal comparison of (subjective) utilities, it cannot simply be a democratic matter; "the greatest good for the greatest numbe

... (read more)

Ah, right, here's the rub:

A generalization of Gibbard–Satterthwaite is Gibbard's theorem, which says that no mechanism, ranked or not, can simultaneously:

  • be non-dictatorial
  • choose between more than two options
  • be strategy-proof.

So the crux of the issue isn't ordinal vs cardinal (preferences vs utility, ranked vs scored). Rather, the crux of the issue is strategy-proofness: Arrow and related theorems are about the difficulty of strategy-proof implementations of the utilitarian ideal.

For a trivial example, see any discussion about utility monsters.
Say more about the relevance?
Basically, a utility monster is a person or group that derives orders of magnitude more utility from some activity, with the effect so large that it cancels out the rest of the population's preferences. An example: Confederate slave owners derived far more utility from owning slaves than non-slave-owners did, so if there were an election over whether slavery should be illegal, the slave owners would have a strategy: turn themselves into utility monsters with extreme preferences, deriving orders of magnitude more utility. Even with 1,000 slave owners to 1 million non-slave-owners, there would still be a way for the slave owners to win using that strategy.
2the gears to ascension
While this is a solid example if argued better, in my view this is a somewhat badly argued description of it, and the example is one that is extremely important to be correct about. There are several turns of phrase that I think are false under standard definitions of the words, despite themselves being standard turns of phrase; eg, it is in my view not possible under <?natural law?> to own another being, so the "ownership" in the law of the enforcers of the time was misleading phrasing, and that "ownership" should not be agreed with today since we have begun to near consensus that they were in the wrong (though some people, obviously in my view terrible people, endorse continuation of slavery in the countries where it exists, such as the US prison system or various mining groups in Africa).

That said, it's also not the end of the world to be wrong about it as a first attempt from someone who has spent less than decades thinking heavily about this - such is the nature of attempting to discuss high sensitivity things; one is wrong about them as often or more than low sensitivity things. But I would encourage attempting to rephrase to encode the concepts you wouldn't want a reader to miss, since in my view it's worth posting commentary on everything to clarify convergence towards prosocial policy.

IMO this is a good time to use something besides hedonic utilitarianism's lens as an additional comparison: eg, a utility monster, in the preference utilitarian math, is defined by and could only be implemented by a hyperoptimizer, ie a being willing to push something very far into a target configuration. In this case, the slavers were optimizing their own life's shape by enslaving people in order to optimize the slaver's preferences at incredible cost to those they trapped. Trapping people and forcing them to work allowed the person who'd done so to implement devaluation of others' preferences by the universe; the enslaved people were not able to implement their prefer

I'm curious about the question of wasted votes. In the link you provide to explain PLACE, it talks about wasted votes a lot. My understanding is this:

1) Under the loose definition, any vote not for the winning candidate is wasted.

2) Under the strict definition, any vote not needed to win is also wasted.

This is deeply unintuitive to me. Just inferring from the above, this suggests that what should be done is to calculate the least-worst smallest-majority candidate, and then everyone else stays home. The direct implication is that votes are intrinsicall... (read more)

4Jameson Quinn
This is a very good question. I have a friend who's working on making a rigorous definition for "wasted votes" that squares with your rough understanding, and she's a smart, professional mathematician; if it were trivial, she'd have solved it in 10 minutes. You are basically right about your understanding and intuition. In particular, under FPTP, "wasted votes" includes anybody who doesn't support a winner, and some of those who do. But note that in a multi-winner voting method, it's possible to ensure that a supermajority of votes have some say in picking the winners. For instance, if all ballots are full strict rankings, under STV only 1 Droop quota is wasted; for 19 winners, that would be just 5%. This is an active area of research and I hope to be able to come back to your comment a year or so from now and have a better answer. (ps. It's PLACE, not PACE.)
Oops, typo. Fixed!
In principle, I agree with your point that the concept of wasted votes seems pretty incoherent. In practice, I have frequently felt somewhat bad over having voted for someone in a way which feels like it "wasted" my vote. Despite knowing that this makes no sense. I'm guessing something-something-tribal-instincts-that-want-to-place-you-in-the-same-coalition-as-the-guys-with-the-most-power-something-something.
So I know this is a common feeling, but I have never felt that way. To provide some intuition for the other way, consider the concept of the popular mandate: when a President or Governor in the United States wins by a large margin, they have extra political capital for driving their agenda. Importantly this political capital is spent among other elites, so at the very least other elected officials are prone to treat it as significant. It seems obvious to me that voting is an information-bearing signal, but something must be amiss here because I appear to be in a small minority and without agreement from experts.
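For reference, the Droop-quota arithmetic mentioned earlier in this thread is simple to verify; a minimal sketch (the vote total is made up for illustration):

```python
def droop_quota(votes, seats):
    # Smallest whole number of votes such that at most `seats`
    # candidates can each independently reach it.
    return votes // (seats + 1) + 1

votes, seats = 100_000, 19
q = droop_quota(votes, seats)
print(q, f"{q / votes:.1%}")  # 5001 votes, about 5% of ballots
```

With 19 seats, at most one quota's worth of ballots (just over 1/20, i.e. about 5%) can end up with no say in electing anyone, matching the figure quoted above.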

What are the improved Condorcet methods you're thinking of? I do recall seeing that Ranked Pairs and Schulze have very favorable strategy-backfire to strategy-works ratios in simulations, but I don't know what you're thinking of for sure. If those are it, then if you approach it right, Schulze isn't that hard to work through and demonstrate an election result (wikipedia now has an example).

5Jameson Quinn
Actually, I was talking about the kind of methods discussed here. As to Schulze and Ranked Pairs, these two are very similar in philosophy. In terms of criteria compliances and VSE, RP is slightly superior; but Schulze has the advantage of Markus Schulze's energetic promotion.
2Luke A Somers
In terms of explaining the result, I think Schulze is much better. You can do that very compactly and with only simple, understandable steps. The best I can see doing with RP is more time-consuming, and the steps have the potential to be more complicated. As far as promotion is concerned, I haven't run into it; since it's so similar to RP, I think non-algorithmic factors like I mentioned above begin to be more important.

The page you linked there has some undefined terms like u/a (it says it's defined in previous articles, but I don't see a link).

> it certainly doesn't prevent Beatpath (and other TUC methods) from being a strategic mess, without known strategy

Isn't that a… good thing? With the fog of reality, strategy looking like 60% stabbing yourself, 30% accomplishing nothing, 10% getting what you want… how is that a bad trait for a system to have? In particular, as far as strategic messes are concerned, I would definitely feel more pressure to use a strategy of equivocation in SICT than in beatpath (Schulze), because it would feel a lot less drastic/scary/risky.
4Jameson Quinn
Note that I don't endorse that page I linked to, it's just the best source I could find for definitions of "improved Condorcet" methods. "U/A" is some strange class of voting scenarios where voters have a clear a priori idea about what is "unacceptable" versus "acceptable" and strategize accordingly. I don't think it's analytically very helpful.
3Luke A Somers
I see. I figured U/A meant something like that. I think it's potentially useful to consider that case, but I wouldn't design a system entirely around it.
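For readers who want to work through a Schulze result concretely, here is a minimal sketch of the beatpath computation via a Floyd-Warshall-style widest-path pass; the `pairwise` input format is an assumption of this sketch, not a standard API:

```python
def schulze_winners(candidates, pairwise):
    """Schulze (beatpath) winners.
    pairwise[a][b] = number of voters preferring a over b."""
    p = {a: {b: 0 for b in candidates} for a in candidates}
    for a in candidates:
        for b in candidates:
            if a != b and pairwise[a][b] > pairwise[b][a]:
                p[a][b] = pairwise[a][b]  # direct pairwise victories
    for k in candidates:
        for a in candidates:
            for b in candidates:
                if a != b and b != k and k != a:
                    # A path's strength is its weakest link;
                    # keep the strongest path found so far.
                    p[a][b] = max(p[a][b], min(p[a][k], p[k][b]))
    return [a for a in candidates
            if all(p[a][b] >= p[b][a] for b in candidates if b != a)]

# A beats B 60-40, A beats C 70-30, B beats C 55-45 => A is the winner.
pw = {'A': {'A': 0, 'B': 60, 'C': 70},
      'B': {'A': 40, 'B': 0, 'C': 55},
      'C': {'A': 30, 'B': 45, 'C': 0}}
print(schulze_winners(['A', 'B', 'C'], pw))  # ['A']
```

Each step (tally pairwise margins, compute strongest paths, compare path strengths) is simple enough to demonstrate by hand, which is the explainability point being made above.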

Should this be broken up into sub-posts?

I think this serves as an excellent centralized reference, and so would argue no. However, it might be good to break out new posts if the comments turn into a deep dive on any particular subject, as a summary if nothing else.

I nominate this post because it does a good job of succinctly describing different ways to approach one of the core problems of civilization, which is to say choosing policy (or choosing policy choosers). A lot of our activity here is about avoiding catastrophe somehow; we have spent comparatively little time on big-picture things that are incremental improvements.

Anecdotally, this post did a good job of jogging loose a funk I was in regarding the process of politics. Politics is a notoriously *ugh field* kind of endeavor in the broader culture, and a part... (read more)

Not the best place to put this comment, but there's a confusing mistake on the PLACE FAQ where the pie candidate shows a voting option for "Other Ice Cream Candidate" instead of "Other Pie Candidate."

Does anyone have an electorama account to remedy that?

1Jameson Quinn
Thanks, fixed. This is probably the best place to note that Electorama is limping along on a mostly-broken installation of mediawiki that a nice person set up like 15 years ago. Migrating it to, say, a GitHub Pages wiki would be a substantial benefit to the electoral reform community. If anybody wants to do that, I can get you a database dump.
For the record, it was migrated to https://electowiki.org/wiki/Main_Page
Due to its complexity, [SODA] is probably not a practical proposal, though.

I don't follow; how is that complexity a problem? I googled SODA and stumbled onto a sample ballot (https://wiki.electorama.com/wiki/SODA_voting_(Simple_Optionally-Delegated_Approval)#Sample_Ballot), and it doesn't seem like it even matters whether voters fully understand the (very succinct) instructions; the thickest voters will just treat it like an approval voting system, which is pretty adequate, isn't it?

Having the candidates' delegations be in there won't really impo... (read more)

2Jameson Quinn
Yes, the voter interface is the main thing, and SODA is quite simple on that. But to get a reform actually enacted, you usually have to explain it more-or-less-fully to at least some nontrivial fraction of people, and SODA is hard to explain.

Many of these methods can (I think) select multiple winners. For large elections, this is usually pretty unlikely, but still possible. What's your preferred method of dealing with that possibility? And have you looked into maximal lotteries? http://dss.in.tum.de/files/brandt-research/fishburn_slides.pdf

2Jameson Quinn
You're talking about the possibility of "ties" at various stages in a method's procedure — that is, cases where a difference is being checked for sign but is equal to zero. As you say, this becomes vanishingly unlikely in large elections. In that case, any tiebreaker will do; if nothing else is clearly better, just draw lots.

(I had several hundred karma on the old Less Wrong site under a nym. Is there both a reason to want to reclaim that and a way to do so?)

We haven't ported the old karma over yet, but are working on it. Sorry for the delay on that, but it should be fixed within the month.

It seems like a lot of the challenges in designing a voting system stem from wanting to give each geographic region "their" representative, while not letting people "throw away" their vote.

If we abandon the first part (which is totally reasonable here in the 21st century, with the takeover of digital communication and virtual communities), there is a clean solution to the second part.

Specifically, remove all the names from the ballot, and have people only vote for their preferred party, then allow each party that gets more than [small]% of the vote to desi... (read more)

I don't think we have a takeover by virtual communities. When I'm ill, I don't go to a virtual hospital but to one in bricks and mortar. While some people work virtual jobs at home, most people have local jobs. Having the interests of different localities represented matters for day-to-day politics. You can argue that maybe it would be better if there were no representative from Detroit who thinks saving the Detroit car industry should be his most important political goal, but we don't live in a world where the people in Detroit work in virtual jobs and the local car industry therefore isn't a big deal to them.
3Jameson Quinn
You've described, essentially, a weighted-seats closed-list method. List methods: meh. It's actually possible to be biproportional — that is, to represent both party/faction and geography pretty fairly — so reducing it to just party (and not geography or even faction) is a step down IMO. But you can make reasonable arguments either way. Closed methods (party, not voters, decides who gets their seats): yuck. Why take power from the people to give it to some party elite? Weighted methods: who knows, it's scarcely been tried. A few points:

  • If voting weights are too unequal, then effective voting power can get out of whack. For instance, if there are 3 people with 2 votes each, and 1 person with 1 vote, then that last person has no power to ever shift the majority, even though you might have thought they had half as much power as the others.
  • I think that part of the point of representative democracy is deliberation in the second stage. For that purpose, it's important to preserve cognitive diversity and equal voice. So that makes me skeptical of weighted methods. But note that this is a theoretical, not an empirical, argument, so add whatever grains of salt you'd like.

General points: I like your willingness to experiment; it is possible to design voting methods that are better than even the best ones in common use. But it's not easy, so I wouldn't want to adopt a method that somebody had just come up with; it's important to at least let experienced theoreticians kick it around some first.
Why should random people who are not experts in "ability to debate," "ability to read and understand the impact of legal language," or other attributes that make a good lawmaker get to decide which human beings are tasked with the process of writing and compromising on language? People have an interest in having their values reflected, but that's already determined by the party they vote for. This is especially true in a system that encourages multiple parties, so the, for example, "low taxes" faction, the "low regulation" faction, and the "white power" faction can each be separate parties who collaborate (or not) on individual legislative priorities as needed. And each party can hire whatever mix of lawyers, negotiators, speech writers, and speech givers they want, without forcing "the person who decides who to hire," "the person who gives speeches," and "the person who has final say on how to vote" all be the same "candidate."
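The point above about unequal voting weights can be made precise with the Banzhaf power index, which counts how often each voter is decisive ("swings" a coalition from losing to winning). A minimal brute-force sketch; the majority quota of 4 (out of 7 total votes) is an assumption of the example:

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Banzhaf power index for weighted majority voting."""
    n = len(weights)
    swings = [0] * n
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            for i in coalition:
                # i "swings" if removing i drops the coalition below quota.
                if total >= quota and total - weights[i] < quota:
                    swings[i] += 1
    s = sum(swings)
    return [x / s for x in swings]

# Three voters with 2 votes each and one voter with 1 vote;
# the 1-vote voter turns out to have zero power, as claimed above.
print(banzhaf([2, 2, 2, 1], 4))
```

Despite holding 1/7 of the votes, the last voter is never decisive, so their Banzhaf power is exactly zero.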
Kenneth Arrow proved that the problem that Condorcet (and Llull) had seen was in fact a fundamental issue with any ranked voting method. He posed 3 basic "fairness criteria" and showed that no ranked method can meet all of them:
Ranked unanimity, Independence of irrelevant alternatives, Non-dictatorial

I've been reading up on voting theory recently and Arrow's result - that the only voting system which produces a definite transitive preference ranking, that will pick the unanimous answer if one exists, and doesn't change dependi... (read more)

1Jameson Quinn
You seem to be comparing Arrow's theorem to Lord Vetinari, implying that both are undisputed sovereigns? If so, I disagree. The part you left out about Arrow's theorem — that it only applies to ranked voting methods (not "systems") — means that its dominion is far more limited than that of the Gibbard-Satterthwaite theorem.

As for the RL-voting paper you cite: thanks, that's interesting. Trying to automate voting strategy is hard; since most voters most of the time are not pivotal, the direct strategic signal for a learning agent is weak. In order to deal with this, you have to give the agents some ability, implicit or explicit, to reason about counterfactuals. Reasoning about counterfactuals requires making assumptions, or having information, about the generative model they're drawn from; and so that model is super-important. And frankly, I think that the model used in the paper bears very little relationship to any political reality I know of. I've never seen a group of voters who believe "I would love it if any two of these three laws pass, but I would hate it if all three of them passed or none of them passed" for any set of laws that are seriously proposed and argued-for.
1Sammy Martin
It was a joke about how, if you take Arrow's theorem literally, the fairest 'voting method' (at least among ranked voting methods), the only rule which produces a definite transitive preference ranking and which meets the unanimity and independence conditions, is 'one man, one vote', i.e. dictatorship. Such a situation doesn't seem all that far-fetched to me: suppose there are three different stimulus bills on offer, and you want some stimulus spending but you also care about rising national debt. You might not care which particular bills pass, but you don't want all of them to pass, because you think the debt would rise too high; so maybe you decide that you just want any 2 out of the 3 to pass. But I think the methods introduced in that paper might be most useful not for modeling the outcomes of voting systems, but for attempts to align an AI to multiple people's preferences.

I would weakly support this post's inclusion in the Best-of-2018 Review. It's a solid exposition of an important topic, though not a topic that is core to this community.

I think voting theory is pretty useful, and this is the best introduction I know of. I've linked it to a bunch of people in the last two years who were interested in getting a basic overview over voting theory, and it seemed to broadly be well-received. 

his last name is great source of puns

Typo: his last name is a great source of puns.

The greater the number of voters, the less time it makes sense for an individual to spend researching the options. It seems a good first step would be to randomly reduce the number of voters to an amount that would maximize the overall quality of the decision. Any thoughts on this?

2Jameson Quinn
That's basically a form of multi-stage election with sortition in the first stage. Sortition is a pretty good idea, but unlikely to catch on for public politics, at least not as an every-election thing. One version of "sortition light" is to have a randomly-selected group of people who are paid and brought together before an election to discuss and vote, with the outcome of that vote not binding, but publicized. Among other things, the sortition outcome could be posted in every voting booth.
Interesting and thanks for your response! I didn't mean there would be multiple stages of voting. I meant the first stage is a random selection and the second stage is the randomly chosen people voting. This puts the full weight of responsibility on the chosen ones and they should take it seriously. Sounds great if they are given money too. The thing I feel is missing but this community has a sense for is that the bar to improving a decision when people have different opinions is far higher than people treat it. And if that's true then the more concentrated the responsibility the better… like no more than 10 voters for anything?

It seems that to lift up one candidate to the top of our ballots we are implicitly expressing a judgement that they're better than every other candidate in the contest. Problem is, most of us know nothing about most of the candidates in most contests. We shouldn't be expressing things like "A > B > all others". We should prefer to say "A > B" and leave every other comparison grey. Let other voters, who actually know about C and D and E, fill in the rest of the picture. There might be candidates who are better tha... (read more)

2Jameson Quinn
There are indeed various methods which allow for expressing uncertainty, in principle. For instance, in score voting, you can count average instead of total score. Similar adjustments work for Bucklin, Condorcet, 3-2-1, STAR, etc. The problem is, as you say, the "Dark Horse" one, which in this case can be seen as a form of selection bias: if those who like a candidate are more likely to give them a score than those who hate them, the average is biased, so it becomes advantageous to be unknown. The best way to deal with this problem is a "soft quota" in the form of N pseudo-votes against every candidate. More-or-less equivalently, you can have every blank ballot count as X% of a vote against. It's usually my feeling that as a practical matter, these kinds of extra epicycles in a voting method add more complexity than they're worth, but I can imagine situations (such as a highly geeky target population, or conversely one that doesn't care enough about the voting method to notice the complexity) where this might be worth it.
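The soft-quota idea can be sketched in a few lines (a toy illustration, not anyone's production code; the function name, the pseudo-vote count, and the example ballots are all invented for the example):

```python
# Average-score voting with a "soft quota": each candidate starts with
# n_pseudo phantom ballots at the minimum score, so an unknown candidate
# rated only by a few fans is pulled toward the bottom instead of
# benefiting from selection bias.

def soft_quota_averages(ballots, candidates, n_pseudo=5, min_score=0):
    """ballots: list of dicts mapping candidate -> score; a candidate
    absent from a ballot is treated as 'no opinion' (left blank)."""
    results = {}
    for cand in candidates:
        scores = [b[cand] for b in ballots if cand in b]
        total = sum(scores) + n_pseudo * min_score
        count = len(scores) + n_pseudo
        results[cand] = total / count
    return results

# A dark horse C is scored 5 by the 10 voters who know them, while
# well-known B is scored 5 by everyone. Without the quota, C's raw
# average (5.0) would beat B's; with it, B wins.
ballots = [{"A": 3, "B": 5}] * 90 + [{"A": 3, "B": 5, "C": 5}] * 10
avgs = soft_quota_averages(ballots, ["A", "B", "C"], n_pseudo=5)
```

Whether five pseudo-votes is the right quota is exactly the kind of tuning question that makes these epicycles unattractive in practice.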
2mako yass
I'm not sure I understand how that would do it. I've thought of a way that it could, but I have had to independently generate so much that I'm going to have to ask and make sure this is what you were thinking of. So, the best scheme I can think of for averaged score voting still punishes unknowns by averaging their scores towards zero (or whatever you've stipulated as the default score for unmentioned candidates) every time a ballot doesn't mention them, but if the lowest assignable score was, say, negative ten, a person would not be punished as severely for being unknown as they would be punished if they were known and hated. Have I got that right? And that's why it's better, not because it's actually fair to unknowns but because it's less unfair? [Hmm, would it be a good idea to make the range of available scores uneven, for instance [5, -10], so that ignorers and approvers have less of an effect than opponents (or the other way around, if opponents seem to be less judicious on average than approvers, whichever).] But that's just artificially amplifying the bias against unknowns, isn't it? Have you had so much trouble with dark horses that you've come to treat obscurity as a heuristic for inferiority? You know what else is obscure? Actual qualifications. Most voters don't have them and don't know how to recognise them. I worry that we will find the best domain experts in economics, civics, and (most importantly for a head of state) social preference aggregation, buried so deeply in the Dark Horse pile that under present systems they are not even bothering to put their names into the hat. [Hmm, back to recommender systems, because I've noticed a concise way to say this; It's reasonable to be less concerned about Dark Horses in recommender systems, because we will have the opportunity to measure and control how the set {people who know about the candidate} came to be. We know a lot about how people came to know the candidate, because if they are using our platf

Re SODA: The setup appears to actively encourage candidates to commit to a preference order. Naively, I would prefer a modification along the following lines; could you comment?

(1) Candidates may make promises about their preference order among other candidates; but this is not enforced (just like ordinary pre-election promises). (2) The elimination phase runs over several weeks. In this time, candidates may choose to drop out and redistribute their delegated votes. But mainly, the expected drop-outs will negotiate with expected survivors, in order to get... (read more)

2Jameson Quinn
The point of SODA isn't so much as a serious proposal; I prefer 3-2-1 for that, mostly because it's easier to explain. SODA's advantage is that, under "reasonable" domain restrictions, it is completely strategy-free. (Using my admittedly-idiosyncratic definition of "reasonable", it's actually the only system I know of that is. It's a non-trivial proof, so I don't expect that there are other proposals that I'm unaware of that do this.) Forcing candidates to pre-commit to a preference order is a key part of proving that property. I do see the point of your proposal of having post-election negotiations — it gives real proportional power to even losing blocs of voters, and unifies that power in a way that helps favor cooperative equilibria. Some of that same kind of thinking is incorporated into PLACE voting, though in that method the negotiations still happen pre-election. Even if post-election negotiations are a good idea, I'm skeptical that a majority of voters would want a system that "forced" them to trust somebody that much, so I think keeping it as a pre-election process helps make a proposal more viable.

Thanks for this introduction to voting systems! I always wanted to know more about them, but never actually got to study it more - so usually I just suggested approval voting as a reasonable default. (Hence we use it in CZEA)

Btw what do you think of this voting method? https://web.d21.me/en/ (pushed by somewhat eccentric mathematician-trader-billionaire Janecek)

(Btw I would suggest this post gets to curated)

9Jameson Quinn
This is a system where voters can give +1, 0, or -1 points to candidates in two-member districts, and the highest-score candidates win. So far, that sounds like score voting. However, there are also limits on how many candidates a given voter can support (4) and oppose (2), making this method something like the single non-transferable vote. This voting method is not mathematically clean, and is subject to every one of the pathologies I described. It's not particularly hard to describe, but also a bit confusing at first glance. It is probably less likely to be caught in lesser-evil dynamics than FPTP, but that's the best thing I can say about it. Verdict: thumbs emphatically down.
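As I read the description above, a D21-style count could be sketched like this (a hedged sketch based only on this summary, not on any official specification; the ballot-validation rule and the example data are my own):

```python
# Sketch of a D21-style count: each voter may support up to 4 candidates
# (+1 each) and oppose up to 2 (-1 each); the highest-net-score
# candidates fill the seats of a two-member district.

def d21_winners(ballots, n_seats=2, max_plus=4, max_minus=2):
    """ballots: list of (supported, opposed) tuples of candidate sets.
    Ballots exceeding the support/oppose limits are discarded."""
    scores = {}
    for supported, opposed in ballots:
        if len(supported) > max_plus or len(opposed) > max_minus:
            continue  # invalid ballot: over the allowed limits
        for c in supported:
            scores[c] = scores.get(c, 0) + 1
        for c in opposed:
            scores[c] = scores.get(c, 0) - 1
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:n_seats]

ballots = [
    ({"A", "B"}, {"C"}),        # supports A, B; opposes C
    ({"A", "C", "D"}, set()),   # supports A, C, D
    ({"B", "D"}, {"A", "C"}),   # supports B, D; opposes A, C
]
# Net scores: A = +1, B = +2, C = -1, D = +2; B and D take the two seats.
```

Note that nothing in this count prevents the vote-splitting and dark-horse pathologies discussed elsewhere in the post; the limits just cap how much any one ballot can express.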

Dear Jameson, as you say the theme is extremely important, but I miss more about Storable Votes: one-period Arrovian results change deeply in dynamic voting scenarios. I have recently written two articles about this: one has been published in the Journal of Economic Interaction and Coordination, the other is still a pre-print:


I also suggest you read the Casella and Mace review about "vote trading" (there is a journal version; here you have the pre-print):

https:... (read more)

Hi Jameson, brilliant post. I have some questions regarding the candidates in an election:

1. What are the implicit and explicit assumptions we make about candidates? 
2. Would it be possible to incentivize preferable behavior from voters by making the candidates play an anonymous game prior to the election? For example, let's say we have 100 candidates that we want to narrow to 10. If we made the 100 candidates answer 5 pertinent questions in a digital form and had 1000 voters rank-order (or apply another voting function of choice to) the responses of 10 random candidates, we could take the top 10 performing candidates and then have a non-anonymous round of voting.

Just found this and I have question and comment.

Q. Here (NZ), local body elections are usually STV for both mayor and councillors. It was seen largely as a way to get around vote-splitting leading to an unfavoured winner. There are always idle tea-time discussions about strategic voting, without anyone getting sufficiently interested to really analyse it. Your comment about strategic voting in preference systems revived my curiosity. How do you game an STV system? The best we could manage is that it seems best to rank all the candidates, rather than just ran... (read more)

"Because LNH is incompatible with FBC, it is impossible to completely avoid the chicken dilemma without creating a center squeeze vulnerability, but systems like STAR voting or 3-2-1 minimize it."

Unfortunately, the main weakness of STAR voting is in the automatic runoff system it introduces. Automatic runoff systems (of which IRV is the best known) sacrifice numerous benefits and exhibit various pathologies in the name of speed. STAR voting, while far better than current plurality systems, is unfortunately still introducing an unnecessary layer of compli... (read more)


Could you dive into the strategic approach challenges you see with quadratic voting a bit further? The way I see it, a decentralised blockchain which rewards consensus can ensure honesty using simple game-theoretic principles. Quadratic voting is especially useful in reputation scoring, where both the magnitude and the diversity of the votes are important to ensure robustness.

More specifically, I'm referring to the quadratic voting method proposed in the Capital Restricted Liberal Radicalism paper. I'm assuming you're referring to the sam... (read more)

This post introduced me to a whole new way of thinking about institutional/agency design. The most merit, for me, was pointing out this field existed. The subject is close to one of the core subjects of my thoughts, which is how to design institutions that align selfish behavior with altruistic outcomes on different hierarchical levels, from the international, to the cultural, national, communal, relational, and as far down as the subagent level.

2Ben Pace
Note that this is just a nomination, there'll be a whole month for reviews. But the nomination deadline is Monday, so get them in quick! :)
I still appreciate the nominations going into more depth of "why it was valuable" (but, yes, the Review Phase will start next Monday and will encourage a lot more depth)


Thank you for that very insightful article.

Sorry to post a question so long after its publication; I hope it will reach you. I'm wondering how all these methods stand with respect to anonymity of voters. Let me explain: in most elections there are a lot of candidates (at least that's the case in France, where for example the last first round of the presidential election had 11 candidates). That leads to a lot of unique permutations (for example, with Approval we can encode 2^11 = 2048 states; for a full ranking we get 11!, etc.). In France, votes are orga... (read more)

1Jameson Quinn
"Summable" voting methods require only anonymous tallies (totals by candidate or candidate pair) to find a winner. These do not suffer from the problem you suggest. But for non-summable methods, such as IRV/RCV/STV, you are absolutely correct. These methods must sacrifice either verifiability/auditability or anonymity. This is just one of the reasons such reforms are not ideal (though still better than choose-one voting, aka plurality).
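The distinction can be illustrated with a toy tally (candidate names and precinct data invented for the example): a score election can be counted precinct by precinct from anonymous per-candidate totals, while IRV needs the full multiset of rankings, since each elimination round depends on all ballots at once.

```python
# Summability sketch: score voting needs only one running total per
# candidate, so each precinct can publish an anonymous summary and the
# district result is just the sum of those summaries. No individual
# ballot ever needs to leave the precinct.

from collections import Counter

def tally_precinct(ballots):
    """Anonymous, publishable summary: candidate -> total score.
    ballots: list of dicts mapping candidate -> score."""
    totals = Counter()
    for ballot in ballots:
        totals.update(ballot)  # adds each candidate's score to the total
    return totals

precinct_1 = [{"A": 5, "B": 2}, {"A": 1, "B": 4}]
precinct_2 = [{"A": 3, "B": 2}]

# Counter addition merges the per-precinct totals into district totals.
district = tally_precinct(precinct_1) + tally_precinct(precinct_2)
winner = max(district, key=district.get)
```

An IRV count admits no such per-candidate summary: auditing it requires access to the individual rankings, which is exactly the anonymity tension described above.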
2Francois Armand
By the way, just to be sure I understood correctly: 3-2-1 is a summable voting method and so is not subject to that risk? If so, it seems to be definitely the best voting method available.
2Francois Armand
Thanks. As far as the fall of democracies goes, it seems that auditability is a better property to have than anonymity (because when a totalitarian regime is sufficiently powerful to openly threaten people, anonymity most certainly doesn't matter anymore; but in a working democracy, people need to be sure of the transparency of the system to believe in and accept it). Is there a place where all these methods are formally defined in a comparable way? I'm a little lost with all the acronyms, and having all of them in a big table with their algorithms & properties would be extremely useful.
2Francois Armand
Obviously, you did it somewhere, and now that I have navigated a little more in the mass of very good documents you produced, I can point towards:
* http://electology.github.io/vse-sim/VSE/ for the code repository, with a long explanation, of a statistical test of a lot of methods. That means we have the actual code to understand:
  * the model used (for voter preferences and strategies - extremely interesting!)
  * the actual algorithmic definition of all the methods
  * a nice VSE graph for each method :)
* http://electology.github.io/vse-sim/VSEbasic/ which is a nice summary of the main methods.
Also, I wanted to point out that there was a big real-life test of a variant of score voting (with 6 scores from "reject" to "very good") before the French presidential election in 2017: https://articles.laprimaire.org/l%C3%A9lection-pr%C3%A9sidentielle-au-jugement-majoritaire-les-r%C3%A9sultats-373e089315a4 (sorry, in French). The vote was a little complex, with a first round with 16 candidates and >10k voters. Each voter was presented a random set of 5 candidates and was asked to evaluate each candidate's project on ~10 aspects (your clusters), each time with the 6-level scale (see https://articles.laprimaire.org/r%C3%A9sultats-du-1er-tour-de-laprimaire-org-c8fe612b64cb). Smart contracts on Ethereum were used to register votes. Then a second round took the 5 best candidates, and more than 30k voters ranked them (https://articles.laprimaire.org/r%C3%A9sultats-du-2nd-tour-de-laprimaire-org-2d61b2ad1394). It was a real pleasure to participate in it, because you really try to estimate each candidate's project, and you don't try to be strategic. But it does not seem possible in real-life major elections (like the French or US presidential ones), because it requires quite a lot of time and will from voters.

Another view of voting is where an objective opinion or belief is being formed about ANYTHING that can be named. It might be a belief about utility where the utility can only be seen subjectively but an objective value would be useful in planning, for instance. Another example is determination of what a given set of users might buy given prior page views, or what the value of a given piece of intelligence is to a given audience at a given time and related to a given topic. Such a voting structure is needed in crowd sourcing and in 'training'/&#x... (read more)


>>>> Borda: "My method is made for honest men!"

This is why it would miserably fail. We live in a fallen world. We would like to live in a rational one. We do not. We will not until irrational behavior is punished in some horrible fashion. Preferably punished by the Darwin Gods w/o any human intervention that irrational monkeys with car keys can blame.

Being wrong does not mean that you deserve to suffer. Even beyond the moral disagreement, an environment where disagreement implicitly carries the wish that your opponent suffer horribly is not a good place to find truth. That will, among other things, promote defensiveness, tribalism, and overconfidence.

I see two problems conflated here: (1) how to combine many individual utilities, and (2) how to efficiently extract information about those utilities from each voter.

In this view, a particular voting system is just a way to collect information to build an accurate model of the voters' utilities, which would then be used in some simple way. Of course, actually reducing something like ranked voting to utility maximization would be hard. But that's not a point against this perspective.

Problem (1) is presumably solved by simple utilitarianism though th... (read more)

3Jameson Quinn
Certainly, that's a reasonable point of view to take. If you fully embrace utilitarianism, that's a "solution" (at least in a normative sense) for what you call problem (1). In that case, your problem (2) is in fact separate and posterior. I don't fully embrace utilitarianism. In my view, if you reduce everything to a single dimension, even in theory, you lose all the structure which makes life interesting. I think any purely utilitarian ethics is subject to "utility monsters" of some kind. Even if you carefully build it to rule out the "Felix" monster and the "Soma" monster, Gödel's incompleteness applies, and so you can never exclude all possible monsters; and in utilitarianism, even one monster is enough to pull down the whole edifice. So I think that looking seriously at theorems like Arrow's and Sen's is useful from a philosophical, not just a practical, point of view. Still, I believe that utilitarianism is useful as the best way we have to discuss ethics in practice, so I think VSE is still an important consideration. I just don't think it's the be-all and end-all of voting theory. Even if you do think that ultimately VSE is all that matters, the strategy models it's built on are very simple. Thinking about the 5 pathologies I've listed is the way towards a more-realistic strategy model, so it's not at all superseded by existing VSE numbers. And if you look seriously at the issue of strategy, there is a tension between getting more information from voters (as you suggest in your discussion of (2), and as would be optimized by something like score voting or graduated majority judgment) and getting more-honest information (as methods like 3-2-1 or SODA lean more towards).
3mako yass
I can't find a definition of Soba? Isn't that fairly easily solved by, as the right honorable Zulu Pineapple says, "let all utilities be positive and let them sum to one"? It would guarantee that no agent can have a preference for any particular outcome that is stronger than one. Nothing can have just generally stronger preferences overall, under that requirement. It does seem to me that it would require utility functions to be specified over a finite set of worlds (I know you can have a finite integration over an infinite range, but this seems to require clever mathematical tricks that wouldn't really be applicable to a real world utility function?). I'm not sure how this would work. Do remember, at some point, being pulled in directions you don't particularly want to go in under obligation to optimise for utility functions other than yours is literally what utilitarianism (and social compromise in general) is, and if you don't like pleasing other peoples' strange desires, you might want to consider conquest and hegemony as an alternative to utilitarianism, at least while it's still cheaper. Hmm what if utility monsters don't exist in nature, and are not permitted to be made, because such a thing would be the equivalent of strategic (non-honest) voting and we have stipulated as a part of the terms of utilitarianism that we have access to the True utility function of our constituents, and that their twisted children Felix and Soba don't count. Of course, you would need an operational procedure for counting individual agents. You would need some way of saying which are valid and which ephemeral replicas of a mind process will not be allowed to multiply their root's will arbitrarily. Going down this line of thought, I started to wonder whether there have ever or will ever exist any True Utility Functions that are not strategic constructions. I think there would have to be at some point, and that constructed strategic agents are easily distinguishable from agent
3Jameson Quinn
"Soba" referred to the happy drug from Brave New World; that is, to the possibility of "utility superstimulus" on a collective, not individual, level. "Sum to one" is a really stupid rule for utilities. As a statistician, I can tell you that finding normalizing constants is hard, even if you have an agreed-upon measure; and agreeing on a measure in a politically-contentious situation is impossible. Bounded utility is a better rule, but there are still cases where it clearly fails, and even when it succeeds in the abstract it does nothing to rein in strategic incentives in practice. As to questions about True Utility Functions and a utopic utilitarian society... those are very interesting, but not at all practical.
3mako yass
(That's Soma. I don't believe the joy of consuming Soba comes close to the joy of Soma, although I've never eaten Soba in a traditional context.)
1Jameson Quinn
Oops, fixed.
In what practical context do we work with utilities as explicit numbers? I don't understand what context you're thinking of. If you have some numbers, then you can normalize them and if you don't have numbers, then how does a utility monster even work?
(I read it as Zu Lupine Apple.)
The point is that whatever solution you propose, you have to justify why it is "good", and you have to use some moral theory to explain what's "good" about it (I feel that democracy is naturally utilitarian, but maybe other theories can be used too). For example, take your problem 0, Dark Horse. Why is this a problem, why is it "bad"? I can easily imagine an election where this dark horse wins and everyone is ok with that. The dark horse is only a problem if most people are unhappy with the outcome, i.e. if VSE is low. There is nothing inherently bad about a dark horse winning elections. There is no other way to justify that your problem 0 is in fact a problem. Of course, the simulations of what the voters would do, used in computing the VSE, are imperfect. Also, the initial distribution of voters' true utilities might not match reality very well. Both of those points need work. For the former, I feel that the space of possible strategies should be machine-searchable (although there is no point to account for a strategy if nobody is going to use it). For the latter, I wonder how well polling works; maybe if you just ask the voters about their preferences (in a way different from the election itself), they are more likely to be honest.
2Jameson Quinn
I wrote a separate article discussing the pathologies, in which I gave utility-based examples of why they're problematic. This discussion would probably be better there.