
Actually, considering the possibility that you've misjudged the probability doesn't help with Pascal's Mugging scenarios, because

```
P(X | judged that X has probability p) >= p * P(judgment was correct)
```

And while P(judgment was correct) may be small, it won't be astronomically small under ordinary circumstances, which is what it would take to resolve the mugging.
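A toy calculation (with made-up numbers; the specific values are mine, not the comment's) makes the point concrete: even heavy distrust of your own judgment only multiplies the probability by an ordinary-sized factor, which is nowhere near enough to cancel an astronomical payoff.

```python
# Hypothetical numbers illustrating the inequality
#   P(X) >= p * P(judgment was correct).
p_judged = 1e-20           # probability we assigned to the mugger's claim
p_judgment_correct = 1e-6  # chance our judgment process itself was sound

# Lower bound on P(X) implied by the inequality.
p_lower_bound = p_judged * p_judgment_correct  # ~1e-26

# Stand-in for the mugger's payoff. 10^100 is vastly smaller than 3^^^3,
# so the real problem is even worse than this sketch suggests.
utility_at_stake = 10 ** 100

# The expected payoff is still astronomical, on the order of 1e74,
# so doubting your judgment does not dissolve the mugging.
expected_utility = p_lower_bound * utility_at_stake
print(expected_utility)
```

The lesson: P(judgment was correct) would itself have to be astronomically small to matter, which it is not under ordinary circumstances.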

(My preferred resolution is to restrict the class of admissible utility function-predictor pairs to those where probability shrinks faster than utility grows for any parameterizable statement, which is slightly less restrictive than requiring bounded utility functions.)
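One way to read that restriction (my formalization of the parenthetical, not a canonical definition): for a family of statements parameterized by n, the predictor's probability p(n) must fall fast enough that p(n) * u(n) goes to zero. The functions below are illustrative stand-ins.

```python
# Sketch of the proposed admissibility condition on a utility/predictor pair.

def utility(n):
    # Payoff the mugger offers, growing exponentially in the parameter n.
    return 2.0 ** n

def admissible_prior(n):
    # Shrinks faster than the utility grows: the product 4^-n * 2^n = 2^-n
    # tends to zero, so arbitrarily large offers cannot dominate the
    # expected-utility calculation.
    return 4.0 ** -n

def inadmissible_prior(n):
    # Shrinks only polynomially, slower than the utility grows: the product
    # 2^n / (n + 1) diverges, and the mugger wins by naming a bigger n.
    return 1.0 / (n + 1)

for n in (10, 20, 30):
    print(n, admissible_prior(n) * utility(n), inadmissible_prior(n) * utility(n))
```

Note this is exactly the distinction the follow-up question probes: pairs where probability and utility "grow at the same rate" sit on the boundary of this condition.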

It's still way too restrictive though, no? And are there ways you can Dutch book it with deals where probability grows faster (instead of the intuitively-very-common scenario where they always grow at the same rate)?

Eugine_Nier: BTW, you realize we're talking about torture vs. dust specks and not Pascal's mugging here?


This post is designed to gauge responses to some parts of the planned "Noticing confusion about meta-ethics" sequence, which should intertwine with or be absorbed by Lukeprog's meta-ethics sequence at some point.

Disclaimer: I am going to leave out many relevant details. If you want, you can bring them up in the comments, but in general meta-ethics is still very confusing and thus we could list relevant details all day and still be confused. There are a lot of subtle themes and distinctions that have thus far been completely ignored by everyone, as far as I can tell.

Problem 1: Torture versus specks

Imagine you’re at a Less Wrong meetup when out of nowhere Eliezer Yudkowsky proposes his torture versus dust specks problem. Years of bullet-biting make this a trivial dilemma for any good philosopher, but suddenly you have a seizure during which you vividly recall all of those history lessons where you learned about the horrible things people do when they feel justified in being blatantly evil because of some abstract moral theory that is at best an approximation of sane morality and at worst an obviously anti-epistemic spiral of moral rationalization. Temporarily humbled, you decide to think about the problem a little longer:

"Considering I am deciding the fate of 3^^^3+1 people, I should perhaps not immediately assert my speculative and controversial meta-ethics. Instead, perhaps I should use the averaged meta-ethics of the 3^^^3+1 people I am deciding for, since it is probable that they have preferences that implicitly cover edge cases such as this, and disregarding the meta-ethical preferences of 3^^^3+1 people is certainly one of the most blatantly immoral things one can do. After all, even if they never learn anything about this decision taking place, people are allowed to have preferences about it. But... that the majority of people believe something doesn't make it right, and that the majority of people prefer something doesn't make it right either. If I expect that these 3^^^3+1 people are mostly wrong about morality and would not reflectively endorse their implicit preferences being used in this decision instead of my explicitly reasoned and reflected-upon preferences, then I should just go with mine, even if I am knowingly, arrogantly, blatantly disregarding the current preferences of 3^^^3 currently-alive-and-not-just-hypothetical people in doing so, and thus causing negative utility many, many, many times more severe than the 3^^^3 units of negative utility I was trying to avert. I may be willing to accept this sacrifice, but I should at least admit that what I am doing largely ignores their current preferences, and there is some chance it is wrong upon reflection regardless, for though I am wiser than those 3^^^3+1 people, I notice that I too am confused."