And that this puts a strong upper bound on the chances.

If you multiplied it by the next thousand generations of humans on earth you wouldn't get 1E-6 of a human life equivalent.

So if you can stop using huge numbers like 1E-9, please do proceed, because you understand the arithmetic of calculating costs in human-life equivalents better than I do!

My problem with what you've been writing is not your calculations, but the numbers you're using. Even if the cost were 6E12 lives, it's still not worth actually worrying about.

You're demonstrating a comprehensive lack of actual domain knowledge - you literally don't know the thing you're talking about - and appear to be trying to compensate for that by leveraging what you do know.

As far as I can tell, everything Yvain has said on this topic is correct. In particular, there is a further possible assumption under which it is not the case that cosmic ray collisions with Earth and the Sun prove LHC black holes would be safe, as you can find spelled out in section 2.2 of this paper by Giddings and Mangano. As Yvain pointed out in ...

Scott Alexander (9y): I don't know if you're on board with the Bayesian view of probability, but the way I interpret it, probability is a subjective level of confidence based on our own ignorance. In "reality", the "probability" that the LHC will destroy the Earth is either 0 or 1 - either it ends up destroying the Earth or it doesn't - and in fact we know it turned out to be 0. What we mean when we say "probability" is "given my level of ignorance in a subject, how much should I expect different scenarios to happen".

So when I ask "what is your probability of the LHC destroying the world", I'm asking "Given what you know about physics, and ignoring that both of us now know the LHC did not destroy the world, how confident should you have been that the LHC would not destroy the world?"

I'm not a particle physicist, and as far as I know neither are you. Both of us lack comprehensive domain knowledge. Both of us have only a medium level of broad understanding of the basic concepts of particle physics, plus a high level of trust in the conclusion that professional particle physicists have given.

But I'm doing what one is supposed to do with ignorance - which is to not say I'm completely, totally sure of the subject I'm ignorant about to a certainty of greater than a billion to one. Unless you are hiding a Ph.D. in particle physics somewhere, your ignorance is not significantly less than my own, yet you are acting as if you had knowledge beyond that of even the world's greatest physicists, who are hesitant to attach more than a fifty-million-to-one probability to that estimate.

This is what I meant by offering you the bet - trying to show that you were not, in fact, so good at physics that you could make billion-to-one probability estimates about it. And this is why I find your argument that I'm ignorant to be such a poor one. Of course I'm ignorant. We both are. But only one of us is pretending to near absolute certainty.
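(One way to make the "ignorance caps your confidence" point quantitative - a gloss with assumed terms, not part of the comment itself: write A for "my argument for safety is sound". Then

$$P(\text{disaster}) = P(\text{disaster} \mid A)\,P(A) + P(\text{disaster} \mid \neg A)\,P(\neg A) \;\ge\; P(\text{disaster} \mid \neg A)\,P(\neg A),$$

so claiming a figure like $10^{-9}$ implicitly claims that the chance of a flaw in one's own reasoning, times the risk conditional on such a flaw, is below $10^{-9}$ - a hard claim to sustain for a non-specialist.)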

A Thought on Pascal's Mugging

by komponisto · 10th Dec 2010 · 159 comments



For background, see here.

In a comment on the original Pascal's mugging post, Nick Tarleton writes:

[Y]ou could replace "kill 3^^^^3 people" with "create 3^^^^3 units of disutility according to your utility function". (I respectfully suggest that we all start using this form of the problem.)

Michael Vassar has suggested that we should consider any number of identical lives to have the same utility as one life. That could be a solution, as it's impossible to create 3^^^^3 distinct humans. But, this also is irrelevant to the create-3^^^^3-disutility-units form.

Coming across this again recently, it occurred to me that there might be a way to generalize Vassar's suggestion in such a way as to deal with Tarleton's more abstract formulation of the problem. I'm curious about the extent to which folks have thought about this. (Looking further through the comments on the original post, I found essentially the same idea in a comment by g, but it wasn't discussed further.)

The idea is that the Kolmogorov complexity of "3^^^^3 units of disutility" should be much higher than the Kolmogorov complexity of the number 3^^^^3. That is, the utility function should grow only according to the complexity of the scenario being evaluated, and not (say) linearly in the number of people involved. Furthermore, the domain of the utility function should consist of low-level descriptions of the state of the world, which won't refer directly to words uttered by muggers, in such a way that a mere discussion of "3^^^^3 units of disutility" by a mugger will not typically be (anywhere near) enough evidence to promote an actual "3^^^^3-disutilon" hypothesis to attention.
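One rough way to cash this out (the Solomonoff-style prior and the particular exponent below are illustrative assumptions, not something the argument commits to): suppose hypotheses are low-level world-state descriptions $s$ with prior roughly $2^{-K(s)}$, and suppose the utility function is constrained so that $|U(s)| \le 2^{K(s)/2}$, i.e. it can grow only with the complexity of $s$. With a utility linear in the number of people involved, the mugger's scenario $s^*$ contributes roughly

$$2^{-K(s^*)} \cdot 3\uparrow\uparrow\uparrow\uparrow 3$$

to expected disutility, which is astronomical, because $K(s^*)$ is only on the order of the length of the mugger's sentence while $3\uparrow\uparrow\uparrow\uparrow 3$ (i.e. 3^^^^3) dwarfs $2^{K(s^*)}$. Under the constraint, any scenario worth $3\uparrow\uparrow\uparrow\uparrow 3$ disutilons must have $K(s) \ge 2\log_2(3\uparrow\uparrow\uparrow\uparrow 3)$, so its contribution to expected disutility is at most

$$2^{-K(s)}\,|U(s)| \;\le\; 2^{-2\log_2(3\uparrow\uparrow\uparrow\uparrow 3)} \cdot 3\uparrow\uparrow\uparrow\uparrow 3 \;=\; \frac{1}{3\uparrow\uparrow\uparrow\uparrow 3}.$$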

This seems to imply that the intuition responsible for the problem is a kind of fake simplicity, ignoring the complexity of value (negative value in this case). A confusion of levels also appears implicated (talking about utility does not itself significantly affect utility; you don't suddenly make 3^^^^3-disutilon scenarios probable by talking about "3^^^^3 disutilons").
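(A back-of-the-envelope version of the levels point, again with assumed magnitudes: under a complexity-based prior, a hypothesis that can only be realized by a world-state of description length $K(s)$ bits starts with log-odds of roughly $-K(s)$ bits, and a heuristic bound is that hearing a sentence of $\ell$ bits can raise a hypothesis's probability by at most a factor of about $2^{\ell}$. A mugger's threat is a few hundred bits, whereas realizing "3^^^^3 disutilons" under a complexity-bounded utility would require $K(s)$ comparable to $\log_2(3\uparrow\uparrow\uparrow\uparrow 3)$, so the utterance cannot come anywhere near promoting the hypothesis.)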

What do folks think of this? Any obvious problems? 
