It may come as a surprise to some readers of this blog that I do not always advocate using probabilities.

Or rather, I don't always advocate that human beings, trying to solve their problems, should try to make up verbal probabilities, and then apply the laws of probability theory or decision theory to whatever number they just made up, and then use the result as their final belief or decision.

The laws of probability are laws, not suggestions, but often the true Law is too difficult for us humans to compute.  If P != NP and the universe has no source of exponential computing power, then there are evidential updates too difficult for even a superintelligence to compute - even though the probabilities would be quite well-defined, if we could afford to calculate them.
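The computational point can be made concrete with a minimal sketch (the variable counts below are illustrative, not from the post): exact Bayesian updating over n mutually dependent binary variables requires, in the worst case, the full joint distribution, which has 2^n entries.

```python
# Illustrative sketch: worst-case exact Bayesian inference over n
# dependent binary variables needs the full joint distribution,
# which has 2**n entries -- exponential in n.

def joint_table_size(n_binary_vars: int) -> int:
    """Entries in the full joint distribution over n binary variables."""
    return 2 ** n_binary_vars

for n in (10, 40, 80):
    print(n, joint_table_size(n))
# By n = 80 the table has about 1.2e24 entries, far more than any
# realistic reasoner could store, let alone update.
```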

So sometimes you don't apply probability theory.  Especially if you're human, and your brain has evolved with all sorts of useful algorithms for uncertain reasoning, that don't involve verbal probability assignments.

Not sure where a flying ball will land?  I don't advise trying to formulate a probability distribution over its landing spots, performing deliberate Bayesian updates on your glances at the ball, and calculating the expected utility of all possible strings of motor instructions to your muscles.

Trying to catch a flying ball, you're probably better off with your brain's built-in mechanisms than with deliberative verbal reasoning to invent or manipulate probabilities.

But this doesn't mean you're going beyond probability theory or above probability theory.

The Dutch Book arguments still apply.  If I offer you a choice of gambles ($10,000 if the ball lands in this square, versus $10,000 if I roll a die and it comes up 6), and you answer in a way that does not allow consistent probabilities to be assigned, then you will accept combinations of gambles that are certain losses, or reject gambles that are certain gains...
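A toy calculation shows how the Dutch Book works (the prices and stakes here are hypothetical, not from the post): an agent whose betting prices imply P(A) + P(not-A) > 1 will buy a pair of bets that loses money no matter how A turns out.

```python
# Hypothetical Dutch Book: betting prices that imply
# P(A) + P(not-A) = 1.2 -- no consistent probability assignment.

stake = 100.0       # each bet pays 100 if it wins
price_a = 0.6       # agent pays 60 for "100 if the ball lands in the square"
price_not_a = 0.6   # agent pays 60 for "100 if it does not"

total_paid = stake * (price_a + price_not_a)  # 120 paid up front
payout = stake                                # exactly one bet pays, either way

net = payout - total_paid
print(net)  # -20.0: a certain loss, whichever outcome occurs
```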

Which still doesn't mean that you should try to use deliberative verbal reasoning.  I would expect that for professional baseball players, at least, it's more important to catch the ball than to assign consistent probabilities.  Indeed, if you tried to make up probabilities, the verbal probabilities might not even be very good ones, compared to some gut-level feeling - some wordless representation of uncertainty in the back of your mind.

There is nothing privileged about uncertainty that is expressed in words, unless the verbal parts of your brain do, in fact, happen to work better on the problem.

And while accurate maps of the same territory will necessarily be consistent among themselves, not all consistent maps are accurate.  It is more important to be accurate than to be consistent, and more important to catch the ball than to be consistent.

In fact, I generally advise against making up probabilities unless it seems like you have some decent basis for them.  Making them up without one only fools you into believing that you are more Bayesian than you actually are.

To be specific, I would advise, in most cases, against using non-numerical procedures to create what appear to be numerical probabilities.  Numbers should come from numbers.

Now there are benefits from trying to translate your gut feelings of uncertainty into verbal probabilities.  It may help you spot problems like the conjunction fallacy.  It may help you spot internal inconsistencies - though it may not show you any way to remedy them.
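The conjunction-fallacy check mentioned above amounts to testing a simple inequality, P(A and B) ≤ P(A); here is a sketch with hypothetical numbers (not from the post):

```python
# Sketch: the conjunction rule says P(A and B) can never exceed
# the probability of either conjunct.  Verbalizing gut feelings as
# numbers lets you run this check.  (Numbers are hypothetical.)

p_teller = 0.05                # gut-assigned P(Linda is a bank teller)
p_teller_and_feminist = 0.10   # gut-assigned P(teller AND feminist)

def violates_conjunction_rule(p_a: float, p_a_and_b: float) -> bool:
    """True if the conjunction was rated more probable than its conjunct."""
    return p_a_and_b > p_a

print(violates_conjunction_rule(p_teller, p_teller_and_feminist))
# True: no consistent probability assignment allows this.
```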

But you shouldn't go around thinking that, if you translate your gut feeling into "one in a thousand", then, on occasions when you emit these verbal words, the corresponding event will happen around one in a thousand times.  Your brain is not so well-calibrated.  If instead you do something nonverbal with your gut feeling of uncertainty, you may be better off, because at least you'll be using the gut feeling the way it was meant to be used.

This specific topic came up recently in the context of the Large Hadron Collider, and an argument given at the Global Catastrophic Risks conference:

That we couldn't be sure that there was no error in the papers which showed from multiple angles that the LHC couldn't possibly destroy the world.  And moreover, the theory used in the papers might be wrong.  And in either case, there was still a chance the LHC could destroy the world.  And therefore, it ought not to be turned on.

Now if the argument had been given in just this way, I would not have objected to its epistemology.

But the speaker actually purported to assign a probability of at least 1 in 1000 that the theory, model, or calculations in the LHC paper were wrong; and a probability of at least 1 in 1000 that, if the theory or model or calculations were wrong, the LHC would destroy the world.
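The arithmetic those two numbers imply is simple multiplication, treating the second number as a conditional probability, as the argument's structure suggests:

```python
# Combining the speaker's two purported lower bounds:
# P(doom) >= P(paper wrong) * P(doom | paper wrong)

p_paper_wrong = 1 / 1000
p_doom_given_wrong = 1 / 1000

p_doom_lower_bound = p_paper_wrong * p_doom_given_wrong
print(p_doom_lower_bound)  # 1e-06 -- at least one in a million
```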

After all, it's surely not so improbable that future generations will reject the theory used in the LHC paper, or reject the model, or maybe just find an error.  And if the LHC paper is wrong, then who knows what might happen as a result?

So that is an argument - but to assign numbers to it?

I object to the air of authority given these numbers pulled out of thin air.  I generally feel that if you can't use probabilistic tools to shape your feelings of uncertainty, you ought not to dignify them by calling them probabilities.

The alternative I would propose, in this particular case, is to debate the general rule of banning physics experiments because you cannot be absolutely certain of the arguments that say they are safe.

I hold that if you phrase it this way, then your mind, by considering frequencies of events, is likely to bring in more consequences of the decision, and remember more relevant historical cases.

If you debate just the one case of the LHC, and assign specific probabilities, it (1) gives very shaky reasoning an undue air of authority, (2) obscures the general consequences of applying similar rules, and even (3) creates the illusion that we might come to a different decision if someone else published a new physics paper that decreased the probabilities.

The authors at the Global Catastrophic Risk conference seemed to be suggesting that we could just do a bit more analysis of the LHC and then switch it on.  This struck me as the most disingenuous part of the argument.  Once you admit the argument "Maybe the analysis could be wrong, and who knows what happens then," there is no possible physics paper that can ever get rid of it.

No matter what other physics papers had been published previously, the authors would have used the same argument and made up the same numerical probabilities at the Global Catastrophic Risk conference.  I cannot be sure of this statement, of course, but it has a probability of 75%.

In general, a rationalist tries to make their mind function at the best achievable power output; sometimes this involves talking about verbal probabilities, and sometimes it does not, but always the laws of probability theory govern.

If all you have is a gut feeling of uncertainty, then you should probably stick with those algorithms that make use of gut feelings of uncertainty, because your built-in algorithms may do better than your clumsy attempts to put things into words.

Now it may be that, by reasoning thus, I find myself inconsistent.  For example, I would be substantially more alarmed about a lottery device with a well-defined chance of 1 in 1,000,000 of destroying the world, than I am about the Large Hadron Collider being switched on.

On the other hand, if you asked me whether I could make one million statements of authority equal to "The Large Hadron Collider will not destroy the world", and be wrong, on average, around once, then I would have to say no.
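The calibration test behind that question can be written out as a sketch (assuming, as the question does, that the million statements are independent and equally confident): the expected number of errors is N times the per-statement error probability.

```python
# Sketch of the calibration claim: with N independent statements,
# each true with probability p, expected errors = N * (1 - p).

def expected_errors(n_statements: int, p_true: float) -> float:
    return n_statements * (1.0 - p_true)

# "Wrong about once in a million statements" corresponds to a
# per-statement confidence of 1 - 1e-6:
print(expected_errors(1_000_000, 1 - 1e-6))  # approximately 1.0
```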

What should I do about this inconsistency?  I'm not sure, but I'm certainly not going to wave a magic wand to make it go away.  That's like finding an inconsistency in a pair of maps you own, and quickly scribbling some alterations to make sure they're consistent.

I would also, by the way, be substantially more worried about a lottery device with a 1 in 1,000,000,000 chance of destroying the world, than a device which destroyed the world if the Judeo-Christian God existed.  But I would not suppose that I could make one billion statements, one after the other, fully independent and equally fraught as "There is no God", and be wrong on average around once.

I can't say I'm happy with this state of epistemic affairs, but I'm not going to modify it until I can see myself moving in the direction of greater accuracy and real-world effectiveness, not just moving in the direction of greater self-consistency.  The goal is to win, after all.  If I make up a probability that is not shaped by probabilistic tools, if I make up a number that is not created by numerical methods, then maybe I am just defeating my built-in algorithms that would do better by reasoning in their native modes of uncertainty.

Of course this is not a license to ignore probabilities that are well-founded.  Any numerical founding at all is likely to be better than a vague feeling of uncertainty; humans are terrible statisticians.  But pulling a number entirely out of your butt, that is, using a non-numerical procedure to produce a number, is nearly no foundation at all; and in that case you probably are better off sticking with the vague feelings of uncertainty.

Which is why my Overcoming Bias posts generally use words like "maybe" and "probably" and "surely" instead of assigning made-up numerical probabilities like "40%" and "70%" and "95%".  Think of how silly that would look.  I think it actually would be silly; I think I would do worse thereby.

I am not the kind of straw Bayesian who says that you should make up probabilities to avoid being subject to Dutch Books.  I am the sort of Bayesian who says that in practice, humans end up subject to Dutch Books because they aren't powerful enough to avoid them; and moreover it's more important to catch the ball than to avoid Dutch Books.  The math is like underlying physics, inescapably governing, but too expensive to calculate.  Nor is there any point in a ritual of cognition which mimics the surface forms of the math, but fails to produce systematically better decision-making.  That would be a lost purpose; this is not the true art of living under the law.