I’ve recently heard a number of people arguing for “fanaticism” when it comes to longtermism. Basically, if a cause area has even a minuscule probability of positively affecting the long-term future of humanity (and thus influencing an effectively unbounded number of lives), we should fund and support that cause even at the expense of near-term projects with a high probability of success. If this is so, I have trouble seeing why Pascal’s Wager (or the even less probable Pascal’s Mugging) shouldn’t hold. I know most people (even religious people) don’t consider Pascal’s argument valid, but most of the arguments against it that I’ve read would seem to rule out low-probability longtermist causes as well. What am I missing here?

2 Answers

james.lucassen

Jul 27, 2022


First and most important thing that I want to say here is that fanaticism is sufficient for longtermism, but not necessary. The ">10^36 future lives" thing means that longtermism would be worth pursuing even on fanatically low probabilities - but in fact, the state of things seems much better than that! X-risk is badly neglected, so it seems like a longtermist career should be expected to do much better than reducing X-risk by 10^-30% or whatever the break-even point is.
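To spell out that break-even arithmetic (a rough illustration; the choice of one present life saved as the near-term benchmark is my assumption, not part of the answer): if averting extinction preserves roughly N ≈ 10^36 future lives, a longtermist intervention breaks even with the benchmark once its probability p of averting extinction satisfies

```latex
% Illustrative break-even calculation; benchmark B = 1 present life saved is assumed.
\[
  p \cdot N \;\ge\; B
  \quad\Longrightarrow\quad
  p \;\ge\; \frac{1}{10^{36}} \;=\; 10^{-36}
  \qquad \text{(about a } 10^{-34}\% \text{ reduction in extinction risk)}.
\]
```

The exact exponent shifts with the choice of benchmark, which is presumably why the answer says "10^-30% or whatever the break-even point is"; the point is only that the threshold is astronomically small.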

Second thing is that Pascal's Wager in particular kind of shoots itself in the foot by going infinite rather than merely very large. Since the expected value of any infinite reward is infinity regardless of the probability it's multiplied by, there's an informal cancellation argument that basically says "for any heaven I'm promised for doing X and hell for doing Y, there's some other possible deity that offers heaven for Y and hell for X".
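One way to put that cancellation argument into expected-value notation (my own gloss, not anything from Pascal or the answer): give the two rival deities any nonzero credences p_A, p_B > 0, and the infinities swamp the probabilities entirely:

```latex
% Deity A: heaven for X, hell for Y.  Deity B: heaven for Y, hell for X.
\[
  \mathrm{EV}(X) = p_A \cdot (+\infty) + p_B \cdot (-\infty), \qquad
  \mathrm{EV}(Y) = p_A \cdot (-\infty) + p_B \cdot (+\infty).
\]
% Both are of the indeterminate form "infinity minus infinity", so expected
% value provides no ranking of X over Y, however lopsided p_A and p_B are.
```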

Third and final thing - I haven't actually seen this anywhere else, but here's my current solution for Pascal's Mugging. Any EV agent is going to have a probability distribution over how much money the mugger really has to offer. If I'm willing to consider (p>0) any sum, then my probability distribution has to drop off for higher and higher values, so the total integrates to 1. As long as this distribution drops off faster than 1/x as the offer increases, then arbitrarily large offers are overwhelmed by vast implausibility and their EV becomes arbitrarily small. 
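A minimal numeric sketch of that argument (the power-law priors and the specific exponents below are illustrative assumptions, not anything in the original answer):

```python
# Sketch: expected payoff of complying with a mugger's claim of size x, under an
# (unnormalized) power-law prior p(x) ~ x**(-a) that the mugger can really pay x.
#   a > 1: the prior drops faster than 1/x, so x * p(x) shrinks as claims grow.
#   a < 1: the prior drops slower than 1/x, so x * p(x) grows without bound.

def per_claim_ev(x: float, a: float) -> float:
    """Unnormalized expected payoff x * p(x) for a claimed amount x and tail exponent a."""
    return x * x ** (-a)

for a in (1.5, 0.5):  # faster than 1/x vs. slower than 1/x
    print(f"tail exponent a = {a}")
    for x in (1e1, 1e3, 1e6, 1e9):
        print(f"  claimed amount {x:10.0e} -> EV contribution {per_claim_ev(x, a):.3e}")
```

The absolute numbers mean nothing (the prior is unnormalized); the trend is the point: once the prior's tail is steeper than 1/x, larger threats buy the mugger less expected leverage rather than more.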

"As long as this distribution drops off faster than 1/x as the offer increases, then arbitrarily large offers are overwhelmed by vast implausibility and their EV becomes arbitrarily small."

This has the problem that you have no assurance that the distribution does drop off sufficiently fast. It would be convenient if it did, but the world is not structured for anyone's convenience.

james.lucassen
Agree that there is no such guarantee. Minor nitpick: the distribution in question is in my mind, not out there in the world. If the world really did have a distribution of muggers' cash that dropped off slower than 1/x, the universe would consist almost entirely of muggers' wallets (in expectation). But even without any guarantee about my mental probability distribution, I think my argument does establish that not every possible EV agent is susceptible to Pascal's Mugging. That suggests that in the search for a formalism of an ideal decision-making algorithm, formulations of EV that meet this check are still on the table.
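To put that nitpick in symbols (my gloss; reading "slower than 1/x" as a statement about the tail probability of the real-world distribution of muggers' holdings):

```latex
% If the tail probability decays slower than 1/x, say Pr(X > x) ~ x^{-(1-\delta)}
% for some \delta > 0, the expected contents of a random mugger's wallet diverge:
\[
  \mathbb{E}[X] \;=\; \int_{0}^{\infty} \Pr(X > x)\, dx
  \;\gtrsim\; \int_{1}^{\infty} x^{-(1-\delta)}\, dx \;=\; \infty .
\]
% The commenter's mental prior, by contrast, only needs x * p(x) -> 0 as x grows,
% which any tail steeper than 1/x delivers.
```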

I absolutely agree that fanaticism isn’t necessary for longtermism; my question is how those few who are “fanatics” resolve that sort of thing consistently.

JBlack

Jul 28, 2022


Some of those low-probability longtermist causes are indeed invalid, for exactly the same reasons.

One of the problems with Pascal's Wager (and Pascal's Mugging) is that actually, you don't even know the sign of the result of your actions.

Sure, you've got a book that says something about the disposition of an afterlife, but there are many books written and unwritten, and you don't know which is correct. Doing what gets you a good afterlife in one gets you eternal torture in another. Likewise, the mugger says that invisible multitudes will be tortured iff you don't give them money, but really, how trustworthy is such a person, even if they do have such power? Isn't it just as likely that they'll take success on this occasion as evidence that they should keep trying the same thing until they do end up torturing all those people anyway? Maybe they can use all this money (or acquiescence in other ways) to build their power to torture even more people in the future.

Likewise, efforts made on behalf of uncountable future multitudes might have the opposite effect from the one intended. The future is hard to predict, and except for some really obvious and immediate issues, you should expect the correlation between intended and actual outcomes to be near zero.

If your proposed actions have some easily predictable negative outcomes that are not outweighed by known positive outcomes with similar or better credence, then you almost certainly shouldn't do them.

2 comments

How do they justify confidence in being able to affect the far, far future positively rather than negatively, or at all? That seems rather presumptuous (in Bostrom's "presumptuous philosopher" sense), which is also where Pascal's Wager fails.

Curious why people are downvoting this—if you did, I’d love to hear why, especially if you have advice on what you’d prefer to see in future questions.