The argument that voting is irrational is commonplace. From your point of view as a single voter, the chance that your one vote will sway the election is typically minuscule, and the direct effect of your vote is null if it doesn't sway the election. Thus, even if the expected value to you of your preferred candidate is hugely higher than that of the likely winner without your vote, when multiplied by the tiny chance your vote matters, the overall expected value isn't enough to justify even the small time and effort of voting. This problem at the heart of democracy has been noted by many — most prominently Condorcet, Hegel, and Downs.
There have been various counterarguments posed over the years:
- Voters get some kind of intrinsic utility from the act of voting expressively.
- Voters get some utility because changing the margin of the election affects how the resulting government behaves. (I personally find this argument highly implausible; the level of effect that would be necessary seems visibly lacking.)
- If voters have a sufficiently altruistic utility function, the expected utility of a better government for all citizens could be sufficient to make voting worth it.
- Voting itself is irrational, but having a policy of voting is rational.
  - This could be true if, for instance, paying attention to politics were intrinsically good for one's mental health, and yet akrasia would prevent paying sufficient attention without a policy of voting. I suspect most readers here will decisively reject that idea.
  - This could also be true if there were some kind of iterated or outrospective prisoner's dilemma [Ed: actually, more like stag hunt] involved, in which voting was cooperation and not-voting was defection.
Of all of the above, I find the last bullet most interesting. But I am not going to pursue that here. Here, I'm going to propose a different rationale for voting; one that, as far as I know, is novel.
Participating in democratic elections is a group skill that requires practice. And it's worth practicing this skill because there is an appreciable chance that a future election will have a significant impact on existential risk, and thus will have a utility differential so high as to make a lifetime of voting worth it.
Let's build a toy model with the following variables:
- c: The cost of voting, in utilons.
- i: "importance", the probability that your highest values — whether those are the survival of your ethnic group, the flourishing of humanity in general, maximizing pleasure for all sentient beings, or whatever — hang in the balance in any given future election. Call such elections "important".
- u: the utility differential, in utilons, at stake in important elections. To cancel out utilons, we can focus on the dimensionless quantity u/c.
- l: The number of people "like you" in any given election.
- t: The chance that a given person like you truly notices the election is important.
- f: The chance of a false positive, i.e. "noticing" importance in an election that isn't important (f < t).
- s: The chance that a person like you will, if they vote, cast a correctly strategic ballot in an important election.
- b: The chance that they will, if they vote, cast an anti-strategic (bad) ballot (b < s).
- p: The marginal slope of the probability of a good outcome. The chance that m strategic ballots will have the power to swing the election is roughly p·m over the plausible range of values of m.
Note that t, f, s, and b refer to individuals' marginal chances, but independence is not assumed; outcomes can be correlated across voters. The utility benefit per election per voter of the policy "vote iff you notice that the election is important" is u·i·t·(s−b)·p, while its cost is i·t·c + (1−i)·f·c. The utility benefit per election of "always voting" is u·i·(s−b)·p, while its cost is c. If u/c can take values above 1e11 and i is above 1e-4 — values I consider plausible — then for reasonable choices of the other variables, "always voting" can be a rational policy.
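A minimal sketch of this toy model in Python. All parameter values below are illustrative assumptions chosen to fall in the "plausible" ranges discussed above; they are not estimates.

```python
# Toy model: net expected value of two voting policies, in units of the
# voting cost c. All numbers are illustrative assumptions, not estimates.

def net_value_selective(u_over_c, i, t, f, s, b, p):
    """Net value of 'vote iff you notice the election is important'."""
    benefit = u_over_c * i * t * (s - b) * p   # u*i*t*(s-b)*p, divided by c
    cost = i * t + (1 - i) * f                 # (i*t*c + (1-i)*f*c) / c
    return benefit - cost

def net_value_always(u_over_c, i, s, b, p):
    """Net value of 'always vote'."""
    benefit = u_over_c * i * (s - b) * p       # u*i*(s-b)*p, divided by c
    cost = 1.0                                 # c / c
    return benefit - cost

# Assumed values: u/c = 1e11, i = 1e-4, and a one-in-a-million marginal
# chance per strategic ballot of swinging the outcome.
params = dict(u_over_c=1e11, i=1e-4, s=0.7, b=0.2, p=1e-6)
print(net_value_always(**params))                    # ~4: positive
print(net_value_selective(t=0.5, f=0.1, **params))   # also positive
```

Under these assumptions both policies come out positive; "always voting" dominates when t is low enough that the selective policy misses too many important elections.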
This model is weak in several ways. For one, the chance of swinging an election with strategic votes is not linear in the number of votes involved; it's probably more like a logistic CDF, and l could easily be large enough that the derivative isn't approximately constant. For another, adopting a policy of voting probably has side effects: it probably increases t, possibly decreases f, and may increase one's ability to sway the votes of other voters who do not count towards l. All of these structural weaknesses would tend to lead the model to underestimate the rationality of voting. (Of course, numerical issues could lead to bias in either direction; I'm sure some people will find values of i > 1e-4 to be absurdly high.)
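To illustrate the first weakness, here is a quick sketch under the assumption that the vote margin follows a logistic distribution (the location and scale values are made up for illustration). The per-ballot marginal effect of m strategic ballots shrinks as m grows, so a constant slope p overstates the effect of large blocs:

```python
import math

def logistic_cdf(x, mu=0.0, scale=1.0):
    """CDF of a logistic distribution with location mu and scale."""
    return 1.0 / (1.0 + math.exp(-(x - mu) / scale))

# Assumed margin distribution: logistic, centered on a tie, scale 5000 votes.
mu, scale = 0.0, 5000.0
per_ballot = []
for m in (1_000, 10_000, 50_000):
    # Probability gain from m extra strategic ballots, divided by m.
    gain = logistic_cdf(m, mu, scale) - logistic_cdf(0, mu, scale)
    per_ballot.append(gain / m)
print(per_ballot)  # per-ballot marginal effect shrinks as m grows
```

With these assumed numbers, the per-ballot effect falls by roughly a factor of five between m = 1,000 and m = 50,000, which is why treating p as constant is only safe over a limited range of m.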
Yet even this simple model can yield a positive expected value for voting. And it applies whether the existential threat at issue is a genocidal regime, such as has occurred in the past, or a novel threat such as powerful, misaligned AI.
Is this a novel argument? Somewhat, but not entirely. The extreme utility differential for existential risk is probably to some degree altruistic. That is, it's reasonable to exert substantially more effort to avert a low-probability possibility that would destroy everything you care about than you would if it would only kill you personally; and this implies that you care about things beyond your own life. Yet this is not the everyday altruism of transient welfare improvements, and so it is harder to undermine with revealed-preference arguments.
I've given my own reasons against voting before. I specifically addressed the "altruistic" justification for voting, since nobody thinks they can make a case for selfish voting anymore. My two main arguments:
1. You shouldn't expect to know who the better candidate will be with any confidence, since the policies actually implemented are unpredictable, let alone their effects.
2. Voting contributes to your own mind-kill and to disliking your friends. You will think less clearly about a politician and their supporters once you cast a vote for/against them, because of consistency bias, myside bias, confirmation bias, etc.
With that said, I actually enjoyed this essay. The X-risk EA argument here makes a case that is both novel and would render my two main objections irrelevant. However, there's some evidence that it's not very applicable to real life.
In summer 2016 I heard from several prominent EAs that they think EA orgs should recommend Hillary's campaign as a key cause, and that EAs should donate to it. I have also seen zero attempts at rigorous analysis showing that Trump is a bigger X-risk than Hillary. If we convince ourselves that elections are an EA cause, the false-positive rate for "important" elections will quickly approach 100%, and the chance that EAs decide that the Republican candidate is actually safer will approach 0%. The only effect would be losing a lot of resources, friends, and mental energy to this nonsensical theater.
On 1, both candidates suck, and not because someone on the margin votes or doesn't, but because of a thousand upstream causes: the personality type required to succeed in politics, the voting system that ensures a two-party lock-in, the inability of citizens to comprehend the complexity of modern nation governments, etc.
On 2, let me make my general argument very particular:
1. Polls show that polarization on politics ("Would you let your child marry a Democrat?") is stronger than polarization on any other major alignment.
2. Unlike other things...