Voting is like donating thousands of dollars to charity

Summary: People often say that voting is irrational, because the probability of affecting the outcome is so small. But the outcome itself is extremely large when you consider its impact on other people. I estimate that for most people, voting is worth a charitable donation of somewhere between $100 and $1.6 million. For me, the value came out to around $56,000. So I figure something on the order of $1000 is a reasonable evaluation (after all, I'm writing this post because the number turned out to be large according to this method, so regression to the mean suggests I err on the conservative side), and that'd be enough to make me do it.

Moreover, in swing states the value is much higher, so taking a 10% chance at convincing a friend in a swing state to vote similarly to you is probably worth thousands of expected donation dollars, too.

I find this much more compelling than the typical attempts to justify voting purely in terms of signal value or the resulting sense of pride in fulfilling a civic duty. And voting for selfish reasons is still almost completely worthless, in terms of direct effect. If you're on the way to the polls only to vote for the party that will benefit you the most, you're better off using that time to earn $5 mowing someone's lawn. But if you're even a little altruistic... vote away!

Time for a Fermi estimate

Below is an example Fermi calculation for the value of voting in the USA. Of course, the estimates are all rough and fuzzy, so I'll be conservative, and we can adjust upward based on your opinion.

I'll be estimating the value of voting in marginal expected altruistic dollars, the expected number of dollars being spent in a way that is in line with your altruistic preferences.1 If you don't like measuring the altruistic value of the outcome in dollars, please consider making up your own measure, and keep reading. Perhaps use the number of smiles per year, or number of lives saved. Your measure doesn't have to be total or average utilitarian, either; as long as it's roughly commensurate with the size of the country, it will lead you to a similar conclusion in terms of orders of magnitude.

Component estimates:

At least 1/(100 million) = probability estimate that my vote would affect the outcome. This is the most interesting thing to estimate. There are approximately 100 million voters in the USA, and if you assume a naive fair-coin-flip model of other voters, and a naive majority-rule voting system (i.e. not the electoral college), with a fair coin deciding ties, then the probability of a vote being decisive is around √(2/(pi*100 million)) ≈ 8/100,000.
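For concreteness, here is that naive estimate computed directly (a minimal Python sketch of the fair-coin tie probability, using the standard normal approximation to the binomial):

```python
import math

def naive_decisive_prob(n_voters):
    """Chance of an exact tie among n_voters fair coin flips,
    i.e. the chance one extra vote is decisive under the naive model:
    approximately sqrt(2 / (pi * n))."""
    return math.sqrt(2 / (math.pi * n_voters))

print(naive_decisive_prob(100_000_000))  # ~8e-05, i.e. about 8 in 100,000
```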

But this is too big, considering the way voters cluster: we are not independent coin flips. As well, the USA uses the electoral college system, not majority rule. So I found this paper by Gelman, King, and Boscardin (1998), where they simulate the electoral college using models fit to previous US elections and find that the probability of a decisive vote comes out between 1/(3 million) and 1/(100 million) for voters in most states in most elections, with most states lying very close to 1/(10 million).

At least 55% = my subjective credence that I know which candidate is "better", where I'm using the word "better" subjectively to mean which candidate would turn out to do the most good for others, in my view, if elected. If you don't like this, please make up your own definition of better and keep reading :) In any case, 55% is pretty conservative; it means I consider myself to have almost no information.

At least $100 billion = the approximate marginal altruistic value of the "better" candidate. I think this is also very conservative. The annual federal budget is around $3 trillion right now, making $12 trillion over a 4-year term, and Barack Obama and Mitt Romney differ on trillions of dollars in their proposed budgets. It would be pretty strange to me if, given a perfect understanding of what they'd both do, I would only care altruistically about 100 billion of those dollars, marginally speaking.

Result

I don't know which candidate would turn out "better for the world" in my estimation, but I'd consider myself as having at least a 55%*1/(100 million) chance of affecting the outcome in the better-for-the-world direction, and a 45%*1/(100 million) chance of affecting it in the worse-for-the-world direction, so in expectation I'm donating at least around

(55%-45%)*1/(100 million)*($100 billion) = $100

Again, this was pretty conservative:

  • I'm more like 70% sure,
  • being in California, Gelman et al. put my probability of a decisive vote around 1/(5 million), and
  • to me, the outcome matters more on the order of a $700 billion donation, given that Obama and Romney's budgets differ on around $7 trillion, and I figure at least 10% of that is stuff that I'd care about relative to other shifts in money I could imagine.

That makes (70%-30%)*1/(5 million)*($700 billion) = $56,000. Going further, if you're

  • 90% sure,
  • voting in Virginia -- 1/(3.5 million), and
  • care about the whole $7 trillion difference in budgets,

you get (90%-10%)*1/(3.5 million)*($7 trillion) = $1.6 million. This is so large, it becomes a valuable use of my time to take 1% chances at convincing other people to vote... which I'm hopefully doing by writing this post.
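If you want to replay the arithmetic with your own numbers, here is the whole Fermi calculation as a minimal Python sketch (the three parameter sets are just the ones used in this post):

```python
def expected_donation(p_better, p_decisive, outcome_value):
    """Expected altruistic dollars moved by one vote:
    (p_better - p_worse) * Pr(decisive) * (value of the better outcome),
    where p_worse = 1 - p_better."""
    return (2 * p_better - 1) * p_decisive * outcome_value

# The three scenarios from this post:
print(expected_donation(0.55, 1 / 100e6, 100e9))  # $100        (conservative)
print(expected_donation(0.70, 1 / 5e6, 700e9))    # $56,000     (my estimates, California)
print(expected_donation(0.90, 1 / 3.5e6, 7e12))   # $1,600,000  (swing-state Virginia)
```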

Discussion

Now, I'm sure all these values are quite wrong in the sense that taking into account everything we know about the current election would give very different answers. If anyone has a more nuanced model of the electoral college than Gelman et al., or a way of helping me better estimate how much the outcome matters to me, please post it! My $700 billion outcome value still feels a bit out-of-a-hat-ish.

But the intuition to take away here is that a country is a very large operation, much larger than the number of people in it, and that's what makes voting worth it... if you care about other people. If you don't care about others, voting is probably not worth it to you. That expected $100 to $1,600,000 is going to get spread across 300 million people... you're not expecting much of it yourself! That's a nice conclusion, isn't it? Nice people should vote, and selfish people shouldn't?

Of course, politics is the mind-killer, and there are debates to be had about whether voting in the current system is immoral because the right thing to do is abstain in silent protest that we aren't using approval voting, which has better properties than the current system... but I don't think that's how to get a new voting system. I think that while we're making whatever efforts we can to build a better global community, it's no sacrifice to vote in the current system if it's really worth that much in expected donations.

So if you weren't going to vote already, give some thought to this expected donation angle, and maybe you'll start. Maybe you'll start telling your swing state friends to vote, too. And if you do vote to experience a sense of pride in doing your civic duty, I say go ahead and keep feeling it!


Related reading

I've found a couple of papers by authors with similar thoughts to these:

  • Jankowski (2002), "Buying a Lottery Ticket to Help the Poor: Altruism, Civic Duty, and Self-interest in the Decision to Vote", and
  • Edlin, Gelman and Kaplan (2007), "Voting as a Rational Choice: Why and How People Vote To Improve the Well-Being of Others".

Also, just today I found this interesting Overcoming Bias post, by Andrew Gelman as well.

 


1 A nitpick, for people like me who are very particular about what they mean by utility: in this post, I'm calculating expected altruistic dollars, not expected utility. However, while our personal utility functions are (or would be, if we managed to have them!) certainly non-linear in the amount of money we spend on ourselves, there is a compelling argument for having the altruistic part of your utility function be approximately linear in altruistic dollars: there are just so many dollars in the world, and it's reasonable to assume utility is approximately differentiable in commodities. So on the scale of the world, your effect on how altruistic dollars are spent is small enough that you should value them approximately linearly.
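A minimal way to make that linearity argument explicit (my gloss, not notation from the post): write the altruistic part of utility as a smooth function U of the total amount w of well-spent dollars in the world. A first-order Taylor expansion gives

U(w + δ) ≈ U(w) + U′(w)·δ for |δ| ≪ w,

so a shift δ of even a few million dollars, against a world total w in the tens of trillions, is valued approximately linearly.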

Comments

Obviously you are willing to extend this sort of cost-benefit analysis to all kinds of influencing government?

If me grabbing a nanoslice of power in the form of casting a vote is like donating a thousand dollars to charity, me grabbing more than a nanoslice even by illegal means shouldn't be dismissed out of hand, and deserves even-handed analysis. The value of such information seems to be pretty high.

You have, in a nutshell, just explained why lobbyists exist.

Yes, this. I'd like to see the author of the article give a similar analysis on whether or not we should quit our jobs and become lobbyists.

I would argue it is easier to pull sideways by lobbying than voting or campaigning.

Do you own any stock in Diebold?

Are you a candidate this election for local supervisor of elections?

I shudder to think of what a small group of dedicated rational thinkers could do to subvert democracy to serve the needs of the people...

It's worth pointing out that the $100 you "donate" probably goes much, much less far than a $100 donation to an effective charity. So it might be better to think in terms of shifting $100 in federal funds. That makes it seem like a lot less of a slam dunk to me. Would I take half an hour out of my day to move $100 from an ineffective government agency to an effective one? Meh. Feels like my other attempts at altruism probably have a much higher expected impact.

I'm also worried about getting called up for jury duty if I re-register to vote now that I'm living in a different county.

On the whole though, fairly persuasive.

What would you consider an effective government agency?

It's worth pointing out that the $100 you "donate" probably goes much, much less far than a $100 donation to an effective charity.

Yes, this is a very good point, and it should be part of the $100 billion estimate of the election outcome value... How much do you think the difference between candidates is worth, in MED$ (marginal effective donation dollars)?

Some interesting comparisons (I don't claim these are the most interesting, they're just off the top of my head):

$100 billion = (100 million) × ($1000) = 1/3 × (population of the USA) × (marginal cost of saving lives with mosquito nets), and also

$100 billion = 1/70 × (population of the world) × (marginal cost of saving lives with mosquito nets)
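As a quick sanity check of those identities (a sketch using the round numbers implied above: ~300 million people in the USA, ~7 billion worldwide, ~$1,000 per life saved with bednets):

```python
us_population = 300e6
world_population = 7e9
cost_per_life = 1000  # rough marginal cost of saving a life with mosquito nets

print(us_population / 3 * cost_per_life)      # 1e11, i.e. $100 billion
print(world_population / 70 * cost_per_life)  # 1e11, i.e. $100 billion
```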

It's actually not obvious to me that the difference in candidates would be much less valuable than these things, but these comparisons do make me personally not want to call MED$100 billion a conservative lower bound.

Does anyone have any interesting estimates for the difference of election outcomes in MED$?

You might want to specify that when you talk about "donations" you are referring to charitable donations rather than campaign donations. It might just be me, but the political priming made this distinction less obvious than it probably should have been.

Wow, thanks for this! I just changed the title.

Ah! Yeah, I didn't get this distinction on an admittedly casual readthrough, and was trying to figure out why the 55% confidence even factored in, since presumably if I'm wrong about who the best candidate is when I vote I would counterfactually be equally wrong when I donate, so the 55% factor would apply equally to both sides of the inequality.

But this makes more sense.

Of course, my confidence that my charitable donation is going to somewhere valuable is similarly a factor, and might be less than 55%. (Though I do realize that by local social convention this is a solved problem.)

Being in California, Gelman et al. put my probability of a decisive vote around 1/(5 million).

As the paper says:

[W]e consider how the results would change as better information is added so as to increase the accuracy of the forecasts. In most states this will have the effect of reducing the chance of an exact tie; that is, adding information will bring the probability that one vote will be decisive even closer to 0.

And as it turns out, conditional on polls and other information from right before the election, one would have to assign a very low probability that California will (almost) vote Republican. Also, conditional on California (almost) voting Republican, one would have to assign a very high probability that enough other states will vote Republican to make California's outcome not matter.

It seems to me that a reasonable probability estimate here would be multiple orders of magnitude lower than the cited estimate; and it seems to me that together with the optimal philanthropy point made by user:theduffman and user:dankane and user:JohnMaxwellIV elsewhere in the thread, this makes voting in states like California not worthwhile based on the calculation presented in the original post.

Similarly, California's Senate race isn't significantly likely to shift. But at the House of Representatives level, the probabilities could be more significant, depending on where you live. State representatives, mayors, plebiscites, etc.: there are many opportunities.

Irrespective of California, many people even in swing states think voting is silly, so I would hope that they read this post... but regarding California,

And as it turns out, conditional on polls and other information from right before the election, one would have to assign a very low probability that California will (almost) vote Republican. Also, conditional on California (almost) voting Republican, one would have to assign a very high probability that enough other states will vote Republican to make California's outcome not matter.

Thumbs up, except that the conclusion here is not to not vote... it's to either

1) watch the polls and vote based on proximity to a tie at both the state and federal level,

or, if the time spent watching the polls is more of a sacrifice to you than the time spent on last-minute voting (because you can no longer vote by mail),

2) just vote without the poll information.

Reason: supposing that (a) without the poll information, the EV of voting is high, and (b) finding out the poll results can change your decision, then (a) and (b) together imply that the poll results have high VOI (value of information). More precisely,

1/(5 million) = Pr(decisive | no info) = Pr(decisive | close poll) × Pr(close poll) + Pr(decisive | not-close poll) × Pr(not-close poll)

Since Pr(decisive | not-close poll) is many orders of magnitude closer to 0 than 1/(5 million), and Pr(close poll) is quite small, say 1/N, Pr(decisive | close poll) must be on the order of N × 1/(5 million), so the payoff would be N × whatever is reported in the post, which would be huge.

So the conclusion here is that "Voting without poll results is like donating to charity, but adopting the policy of watching the polls and deciding to vote based on proximity to a tie is like donating almost as much to charity, and saves you time, unless you spend more time watching the polls than you would spend to vote."
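To see the order-of-magnitude claim concretely, here is the decomposition as a minimal sketch (the 1/100 value for Pr(close poll) is an arbitrary placeholder for the comment's unspecified 1/N):

```python
# Law of total probability:
#   Pr(decisive) = Pr(decisive | close) * Pr(close)
#                + Pr(decisive | not close) * Pr(not close)
p_decisive = 1 / 5e6        # unconditional estimate for California
p_close = 1 / 100           # placeholder: polls show a near-tie with prob 1/N
p_decisive_not_close = 0.0  # negligible by assumption

# Solve for Pr(decisive | close poll):
p_decisive_close = (p_decisive - p_decisive_not_close * (1 - p_close)) / p_close
print(p_decisive_close)            # 2e-05 = N * 1/(5 million) with N = 100

# The "watch polls, vote only if close" policy keeps the full expected value:
print(p_decisive_close * p_close)  # 2e-07, back to ~1/(5 million)
```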

1) watch the polls and vote based on proximity to a tie at both the state and federal level,

The problem is that letting polls influence voting decisions is subject to Goodhart's law.

This seems to actually underestimate the value of voting, in that it assumes that a vote is only significant if it flips the winner of the election. But as Eliezer wrote:

But a vote for a losing candidate is not "thrown away"; it sends a message to mainstream candidates that you vote, but they have to work harder to appeal to your interest group to get your vote. Readers in non-swing states especially should consider what message they're sending with their vote before voting for any candidate, in any election, that they don't actually like.

Also, rationalists are supposed to win. If we end up doing a fancy expected utility calculation and then neglect voting, all the while supposedly irrational voters ignore all of that and vote for their favored candidates and get them elected while ours lose... then that's, well, losing.

it sends a message to mainstream candidates that you vote, but they have to work harder to appeal to your interest group to get your vote.

I recently heard an argument to the contrary: Viewing voter preferences along one dimension for simplicity, if a small percentage on the left breaks away and votes for an extreme-left candidate, the mainstream left candidate may actually move further to the right--since the majority of undecided voters are in the middle, not along the boundary between left and extreme-left.

This may not generalize to a hyperplane separating a particular non-mainstream candidate from other candidates in n-dimensional policy-space, but I don't know if presidential campaigns are set up to do that level of analysis.

You were right when you described "along one dimension" as being simplistic. There are other options than extreme-left, left, centrist, right, and extreme-right (for instance). Engaging in false dilemma reasoning as an excuse to vote for a mainstream candidate with no interest in sending political messages encouraging reform is not particularly rational.

One might argue that you could send a better message by writing about an issue for 1 hr rather than waiting 1 hr in line at the polls.

I don't think a single vote -- and that's all any voter has -- sends any message. It hardly makes a difference to party A whether party B gets 279451 or 279452 votes.

If irrational voters are the supermajority, and elect their candidate, then you lose, yes. If you waste your time voting for someone rational who can't win, you lose more.

Rational voters never being large enough of a block to influence the outcome of any election seems quite unlikely, especially so if we don't require the rationalists' favored candidates to necessarily win. I don't know about the US, but at least in Finland, even a candidate who doesn't get elected but does get a considerable amount of votes will still have more influence within his party (and with the actual elected candidates) than somebody who got close to no votes.

My claim isn't that this can never be the case but that it's not the case now, and in general it's the most important factor in whether a rational voter can win by voting.

Don't forget to take the long game into account.

? The long game makes voting when you can't make a decent impact even less rational compared to anything else you could be doing that would give you long-term gains: making money you can invest, taking time to learn a skill or network, getting more information on almost anything, convincing people to follow your beliefs or teaching others about information, donating to x-risk or other charities, working on inventing. Each of these is a "long game" activity.

Of course almost no one spends all their time doing this sort of thing, and I don't care if you take 20 minutes out of one day to go vote because it gives you fuzzies. But don't pretend it's a great thing you do.

I won't pretend it's a great thing to vote if you promise you'll stop pretending I pretended any such thing, or that I was talking about anything other than comparisons of voting strategies.

The US suffers from a major problem with institutionalizing false dilemmas in politics. Playing the long game as a voter might well involve actions intended to lead to eventual disillusionment in that regard. Whether your time is better spent, in the long run, doing something other than voting (and learning about your voting options) is a somewhat distinct matter.

In short, you suggested that at this time rational voters cannot win by voting, which I took to mean you meant they could not get a winning result in the election in which they vote right now. My response was meant to convey the idea that there are voting strategies which could lead to a win several elections down the line (as part of a larger strategy). You then replied, for some reason, by suggesting that voting is not as useful in general as inventing something -- which may be true without in any way contradicting my point.

It's ridiculous to condemn me for trying to interpret actual meaning out of your vague one-sentence reply and then respond with two paragraphs of what you "meant to convey", none of which was any more obviously implied than what I read into your comment.

To respond to THIS point: So what? Each vote is a distinct event. It can easily make sense that you can influence elections positively in the future without you having that ability in any relevant way today.

I fail to see how not knowing what someone meant somehow compels you to make up elaborate fantasies about what the person meant, or even excuses it.

. . . and of course nobody ever does anything other than actually cast a vote when strategizing for the future. There's no way anyone could possibly, say, make the voting part of a grander strategy.

. . . and I suppose you probably think that I think voting is a winning strategy in some way, basically because I pointed out some possible strategies that might seem like a good idea to someone, somewhere, as part of an attempt to remind you that the one-vote-right-now tactic may not be the only reason someone casts a vote.

In short, you assume far too much, then blame me. Good job. That's certainly rational.

But a vote for a losing candidate is not "thrown away"; it sends a message to mainstream candidates that you vote, but they have to work harder to appeal to your interest group to get your vote. Readers in non-swing states especially should consider what message they're sending with their vote before voting for any candidate, in any election, that they don't actually like.

But that point can still be subject to the same (invalid, IMHO) argument against voting: your vote alone is not going to change the poll's percentages to any noticeable extent, hence you could just as well not vote and nobody would notice the difference.

I'll explain why I think this line of argument is invalid in another comment. EDIT: here

Also, rationalists are supposed to win. If we end up doing a fancy expected utility calculation and then neglect voting, all the while supposedly irrational voters ignore all of that and vote for their favored candidates and get them elected while ours lose... then that's, well, losing.

That's actually a better point, but it opens a can of worms: ideally, instrumentally rational agents should always win (or maximize their chance of winning, if uncertainty is involved), but does a consistent form of rationality that allows that actually exist?

Consider two pairs of players playing a standard one-shot prisoner's dilemma, where the players are not allowed to credibly commit or communicate in any way.

In one case the players are both CooperateBots: they always cooperate because they think that God will punish them if they defect, or they feel a sense of tribal loyalty towards each other, or whatever else. These players win.

In the other case, the players are both utility maximizing rational agents. What outcome do they obtain?

By having two agents play the same game against different opposition, you compare two scenarios that may seem similar on the surface but are fundamentally different. Obviously, making sure your opponent cooperates is not part of PD, so you can't call this winning.

And as soon as you delve into the depths of meta-PD, where players can influence other players' decisions beforehand and/or hand out additional punishment afterwards, like for example in most real life situations, the rational agents will devise methods by which mutual cooperation can be assured much better than by loyalty or altruism or whatever. Anyone moderately rational will cooperate if the PD matrix is "cooperate and get [whatever] or defect and have all your winnings taken away by the player community and given to the other player", and accordingly win against irrational players, while any non-playing rationalist would support such a convention; although, depending on how/why PD games happen in the first place, this may evolve into "cooperate and have all winnings taken away by the player community or defect and additionally get punished in an unpleasant way".

By the way, the term CooperateBot only really makes sense when talking about iterated PD, where it refers to an agent always cooperating regardless of the results of any previous rounds.

Obviously, making sure your opponent cooperates is not part of PD, so you can't call this winning.

Nevertheless the CooperateBots win when playing between each other, while the rational agents lose when they have no means to credibly commit (according to any standard decision theory, not 'super-rationality' or something more exotic).

Maybe your point is that a community of rational agents will always find a way to create conditions that allow credible commitment, and the costs of the commitment schemes will be outweighed by the benefit of individually optimal decision making. That's a reasonable hypothesis, but it seems non-trivial to prove that it holds for all reasonably probable scenarios.

By the way, the term CooperateBot only really makes sense when talking about iterated PD, where it refers to an agent always cooperating regardless of the results of any previous rounds.

What do you call an agent that unconditionally plays "Cooperate" in a one-shot prisoner's dilemma?

In non-iterated PD, someone who cooperates is a cooperator.

Nevertheless the CooperateBots win when playing between each other, while the rational agents lose when they have no means to credibly commit.

No, the cooperators actually lose when playing each other, because they gain less than what they could, while the only reason they get anything at all is because they are playing against other cooperators. Likewise, the defectors win when playing other defectors, and they obviously win against cooperators. Cooperating could only win if it affected your opponent's decision, which is not the case in PD.

It seems your definition of winning is flawed in that you want your agents to achieve results that are clearly outside their influence. Rationalists should win under the constraints of reality, not invent scenarios in which they have already won.

The scenario we are discussing is a community of agents that reason according to similar principles (though not necessarily algorithmically identical ones).

Clearly in a community of unconditional cooperators every agent obtains a better payoff than any agent in a community of defectors. I think this fits the definition of 'winning', even though unconditional cooperation is suboptimal according to essentially any decision theory.

Typical decision theories hold that in this problem defection is the optimal choice (although more exotic decision theories, such as Hofstadter's superrationality, hold that the knowledge that the other agents reason according to similar principles makes cooperation the optimal choice).

The point is that a strategy that is locally optimal for the individual agent may not be optimal when applied by most individuals. In evolutionary biology, this issue manifests as the tension between individual selection and group selection. It's generally believed that individual selection prevails over group selection, unless the group selection effect is very strong.

Clearly in a community of unconditional cooperators every agent obtains a better payoff than any agent in a community of defectors.

As soon as you're talking about communities, you're talking about meta-PD, not PD, and as I've explained above, rationalist agents play meta-PD by making sure cooperation is desirable for the individual as well, so they win. End of story.

Nitpick: Superrationality is not a decision theory.

According to Wikipedia:

Superrationality is a type of rational decision making which is different than the usual game-theoretic one

Wikipedia is not a determinative source.

Which answer is the "superrational" one in Newcomb's problem? In a game of chicken? In an ultimatum game?

Decision theories like Causal Decision Theory and Evidential Decision theory have answers, and can explain why they reached those answers. As far as I am aware, there's no equivalent formalization of "superrationality." Until such a formalization exists, it is misleading in this type of discussion to call "superrationality" a decision theory.

Wikipedia is not a determinative source.

If you have a better source feel free to share.

Which answer is the "superrational" one in Newcomb's problem?

Newcomb's problem is a one-player decision problem (Omega is merely reactive), hence superrationality doesn't come into play; the outcome is the same as for whatever one-player decision strategy you are using.

In a game of chicken

Swerve.

In an ultimatum game?

That's an actually open question, since the generalization of superrationality to asymmetric games is as yet undefined, AFAIK. A simple strategy would be to fall back to regular game-theoretic rationality.

Decision theories like Causal Decision Theory and Evidential Decision theory have answers, and can explain why they reached those answers.

Uhm? Can you explain what answers these theories yield in multiplayer games? In order to reduce multiplayer games to single-player decision problems you need a model of the other players, which can be unavailable in many cases; and even if it is available, if the other players have a model of you, you risk running into logical paradoxes.

(Under some strong assumptions, that's the program equilibrium problem. AFAIK some people at SI are currently working on it, with various proposals like TDT, ADT and UDT, but they haven't released any definitive results yet.)

In a game of chicken

Swerve.

And let the other guy win? Madness!

That makes (70%-30%)*1/(5 million)*($700 billion) = $56,000.

These figures seem implausibly high if we are comparing to the best donations you can pick out. Trivially, the campaigns spend only a few billion dollars; with $700 billion you could use the interest alone to spend ludicrously on voter turnout and advertising in every election (state, local, and national) going forward, for an expected impact greater than winning one election.

That is to say, voting yourself can't be worth more than $n if you can generate more than one vote with political spending of $n. And randomized trials find voter-turnout costs per voter in the hundreds of dollars. Even adjusting those estimates upward for various complications, there's just no way that you wouldn't be able to turn out or persuade one more vote for $56,000.

It's just trivial that if voting is rational, political spending is even more rational. It's not germane to use political contributions as a proxy for charitable contributions.

"It's just trivial that if voting is rational, political spending is even more rational."

I clearly explained why this is wrong directly above. If your opportunity cost of time is $50 per hour, voting would take an hour, and it would cost you $400 to elicit a marginal vote of equal expected impact, then you are getting an eightfold multiplier on effort spent voting as opposed to earning money to influence other votes. You need to spend one hour instead of eight hours worth of effort to get the same result.

If your opportunity cost of time is lower, or the impact of money on elections is lower, the effect gets more extreme. If your breakeven point is anywhere in that range then voting can make sense for you even while political donation does not.
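The eightfold multiplier in that reply, written out (a sketch; all three numbers are the commenter's hypotheticals):

```python
wage = 50                     # opportunity cost of time, $/hour
hours_to_vote = 1             # time spent voting
cost_per_marginal_vote = 400  # hypothetical cost to elicit one extra vote

cost_of_voting = wage * hours_to_vote  # $50 of foregone earnings
multiplier = cost_per_marginal_vote / cost_of_voting
print(multiplier)  # 8.0: one hour voting matches eight hours of earning-to-influence
```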

Gelman, Silver, and Edlin have a more recent paper looking at the 2008 election, which estimated that California voters had a 1 in 1 billion chance of being decisive and that 1 in 10 million was the maximum probability of being decisive (for voters in the four swingiest states).

I think the polls are closer in 2012 than 2008.

Voting is more like stealing thousands of dollars to donate to an ok charity.

The problem is that the thousands of dollars are being stolen anyway and you need to vote to have any say in which charity, or to reduce the amount of money being stolen.

Just like no candidate really supports peace, no candidate really supports not-stealing.

Also, by taking part in a system of violence & theft, you are granting it some amount of legitimacy.

Taxes are an involuntary transfer of wealth made under threat of coercive violence. Theft is an involuntary transfer of wealth made under threat of coercive violence.

Saying that does not mean much until one defines the proper scope of legitimate violence in society. Max Weber made the analytically useful point that one definitional aspect of modern government is its monopoly on legitimate violence.

On the other hand, if you want to strictly adhere to utilitarian principles, you would probably have to note that the "altruistic dollars" obtained through selection of the better candidate probably produce much less utility per dollar than a dollar donated to one of givewell.org's top rated charities. Standards of living in the US are already so high that a marginal dollar there is worth much less than a marginal dollar in a less developed country. Then again, if you really followed this philosophy and lived in the US, you should probably actually be devoting essentially all of your time to earning money to donate to such charities, which is not something that many people are actually willing to do.

US policy and wars have a large effect on people in poor countries. They are presumably considered in the $100 billion "better for the world" sum.

Unless they would be earning fabulous sums in that hour, an altruist would probably be justified in considering the expected value of their vote higher than the expected value of whatever else they would do with that one hour per year.

On the other hand, it seems to me like the differences between the foreign policies of the two candidates don't seem nearly as significant as the differences in their domestic policies. Furthermore, problems like malaria would be best dealt with through foreign aid budgets, which really aren't that big to begin with. In any case, I think my point stands that the value of an "altruistic dollar" in this context is significantly less than the value of an actual dollar donated to an optimal charity.

I agree with you: while the values of "charity" versus "well-directed US spending" are at least one order of magnitude apart, I'm not convinced that they are more than three orders of magnitude apart, and most people do not even make $56/hour with total time fungibility.

Let X = the amount of money such that you are indifferent between being given X and getting to choose who wins the election.

Let P = the probability your vote would decide the election.

XP = the rough expected value of voting.

Say we need XP>$10 to make it worth voting. If P = 1/20M then you need X>$200M, which seems way too big.

Of course risk aversion would complicate this.
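Spelled out with the commenter's numbers (a sketch of the indifference calculation):

```python
p = 1 / 20e6    # assumed chance your vote decides the election
threshold = 10  # dollars: minimum expected value X*P to make voting worthwhile

x_required = threshold / p
print(x_required)  # 2e8: you'd have to value choosing the winner at over $200 million
```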

If you had a choice between getting a large amount of money and personally deciding the election, what would you do?

I would prefer to get the $200 million since I think using this money to fight existential risk would more than offset the 70% or so (as of now) chance that my preferred candidate will lose the election.

So, your preferred candidate (EY?) would be expected to provide less than about $285M delta to fight existential risk?

I was stepping somewhat outside the box set by the previous comment: for me, X is greater than the amount of wealth on the planet, but P is so small for a non-mainstream candidate that XP is negligible.

X is greater than the amount of wealth on the planet

I don't believe you. Compare your level of happiness/sadness over the election results to how you would feel if Bill Gates decided to give you $30 billion to spend however you see fit. Given that the House and Senate are controlled by different political parties (so each party has the ability to block any new law from going into effect) the election wasn't as important as you are making it out to be.

Contrast "Being able to select one of two candidates" with "Being able to select from all people meeting constitutional requirements."

$30B isn't nearly enough to enact reform in the military (something the CiC can do without Congressional approval), nor in the rest of the executive branch alphabet soup. Decisions regarding what laws to focus on enforcing are nearly as effective as decisions to repeal laws. The tricky part would be implementing reform in such a manner as to outlive the single expected term of office.

I agree that I would rather have a large sum of money than personally decide the election. An interesting question is how much money it would take. There's also the bias that I would take a small sum of money because it would benefit me. Perhaps a better way of framing this would be a choice between an effective charity or a random LW user getting a sum of money, or personally deciding the election.

Hmm... The calculations work, but somehow it seems against our intuitions. Thinking about it, it seems that the problem is one of scope insensitivity. $100 billion, $700 billion, $7 trillion. They all feel more or less the same, which of course, is absolutely insane. When I look at the numbers, it just feels like "a lot". Ultimately, what this post is saying is to simply shut up and multiply, which is a very good and very relevant point.

$100 billion, $700 billion, $7 trillion. They all feel more or less the same, which of course, is absolutely insane. When I look at the numbers, it just feels like "a lot".

Think of "billion" as being just the name of a unit, like an inch, or a light-year. You have 100, 700, or 7000 units. Do they still feel the same?

Yeah, of course, I understand about the problem of scope insensitivity and how to try to avoid it. My point was that before reading this article, I did not think of it like that, but rather just looked at that number and thought "a lot". Reading this article made me understand that I was being scope insensitive, and that let me put everything into perspective, in a similar manner to the method you stated. The big problem with biases like this isn't compensating for them once you identify them, but rather identifying them in the first place.

This was a post well worth making, particularly because of how much rhetorical support the superficial arguments against voting get.

If one Virginia voter does an expected 1/(3.5 million)*($7 trillion) = $2 million good by voting for candidate X, then there is another Virginia voter that does an expected $2 million of damage by voting for candidate Y. It seems that either

  1. Roughly half of the population is misinformed about which alternative is objectively better. In that case, how do I justify a belief that I have a greater than 50% chance of being right, when everyone else has access to the same information?

  2. There are real differences in values, and by my vote I direct the outcome towards my preference instead of the other Virginia voter's. In that case, sure I want to vote, but should we really call it altruism?

Roughly half of the population is misinformed about which alternative is objectively better. In that case, how do I justify a belief that I have a greater than 50% chance of being right, when everyone else has access to the same information?

Non-meta calculations, like usual. If someone else thinks the indefinite integral of x^2 is 3x^3, I don't say "well, if we have the same information, I must have a 50% chance of being wrong." Instead, I check the result using boring, ordinary math, and go "nope, looks like it's x^3 / 3."

should we really call it altruism?

Yes.

I agree with your approach to solving disagreements about integrals. I do not see how it applies to politics, where disagreements are far more diverse, including factual, moral, and unconscious conflicts.

Well, people do differ in values, but it seems like more often some people are just wrong. Viz: global warming as a factual disagreement.

So what do you do if half the population disagrees with you about a factual issue? (Copy and paste time!) I don't say 'well, if we have the same information, I must have a 50% chance of being wrong.' Instead, I check the result using boring, ordinary scholarship, and go 'nope, looks like there's a mechanism for CO2 to cause the atmosphere to warm up.'

Note that a key part of this process is that if you're wrong, you should notice sometimes - there's no "checking" otherwise, just pretend-checking. So that's a good skill to work on.

That some people are "just wrong" is not at issue. Even mistaken people agree that some people are wrong. (They just think it's the right-thinking folks who are in error.)

I don't say 'well, if we have the same information, I must have a 50% chance of being wrong.'

Of course you don't. If half the population disagrees with you about an issue, you should interpret that as evidence that you are incorrect. How strong the evidence is, depends on how likely they are to possess information you don't, to be misled by things you've prepared yourself for, etc.

In other words, people who are convinced by this argument are more likely than the average person to be correct about the objectively better candidate it convinces them to vote for?

Roughly half of the population is misinformed about which alternative is objectively better. In that case, how do I justify a belief that I have a greater than 50% chance of being right, when everyone else has access to the same information?

Well, you can replace "which alternative is objectively better" with any other belief on which opinions differ and the same argument applies.

"any other belief"

This invites us to look at why beliefs differ. First we have to acknowledge that we are talking about differences between people with comparable levels of expertise, so this isn't the same as the disagreements that exist between experts and novices.

For elections, I think we can say that people disagree in large part because the situation is incredibly complicated. It is hard to know how government policies will affect human welfare, and it is hard to know how elected officials will shape government policy.

The only interesting factor that I can think of is differences in our scope of altruism -- one voter may feel altruistic towards their city, while another focuses on the nation, and a third focuses on all of humanity.

"First we have to acknowledge that we are talking about differences between people with comparable levels of expertise"

The assertion that the vast majority of voters have done a sizeable amount of research, rather than simply voting "along party lines" or "like mom always did" or "because dad was overcontrolling and I'm not going to support HIS party" strikes me as the sort of assertion that would require quite a lot of evidence.

One can reasonably conclude that in politics, as with math, the "average person" is ignorant and their opinion is not based on any sort of expertise.

"One can reasonably conclude that in politics, as with math, the "average person" is ignorant and their opinion is not based on any sort of expertise."

Even if you limit the population to those who are well informed, that population is still rather evenly split and so his points still hold.

Even if you limit the population to those who are well informed, that population is still rather evenly split

On some issues, probably. On others, you have the well-informed, educated, cares-about-facts types versus the religious fanatics who want to push their religious agenda, or their personal agenda, or support pork-barrel funding of pet projects, or want to waste extravagant amounts on feel-good charity that accomplishes nothing in the end.

I don't think either political party in the US has a monopoly on educated - it's easier for me to demonize and strawman Republicans since I was raised Democratic. Apologies if my examples thus seem biased in that direction.

So, yes, sometimes, it's clear my opponent has a genuine, reasoned stance. Sometimes, it's equally clear that they don't. It's important to be aware that sometimes the opposing side doesn't have any rational objections because they're wrong.

Roughly half of the population is misinformed about which alternative is objectively better. In that case, how do I justify a belief that I have a greater than 50% chance of being right, when everyone else has access to the same information?

"Voting is irrational unless you are arrogant?"

There are real differences in values, and by my vote I direct the outcome towards my preference instead of the other Virginia voter's. In that case, sure I want to vote, but should we really call it altruism?

You can still call it altruism, and it can be helpful to distinguish "selfishness" in the sense usually considered for decision problems from "altruism". The example I like to propose for illustration is the Codependent Prisoner's Dilemma, which has Romeo and Juliet as the prisoners who are each obsessed with the other's wellbeing, and the jailers use this fact when manipulating them. So when Romeo is "selfishly maximising his own preferences" and picking the option that puts him away for 10 years but lets Juliet go free, he is also being "altruistic" towards Juliet while brutally ignoring her preference that she be the one who gets to be the martyr.

How are you measuring 'objectively better'?

Roughly half the population is paperclip maximizers.

The whole point of democracy is that the results should equal the will of the people. If a significant percentage of the population does