In the previous article in this sequence, I conducted a thought experiment in which simple probability was not sufficient to choose how to act. Rationality required reasoning about *meta-probabilities*, the probabilities of probabilities.

Relatedly, lukeprog has a brief post that explains how this matters; a long article by HoldenKarnofsky makes meta-probability central to utilitarian estimates of the effectiveness of charitable giving; and Jonathan_Lee, in a reply to that, has used the same framework I presented.

In my previous article, I ran thought experiments that presented you with various colored boxes you could put coins in, gambling with uncertain odds.

The last box I showed you was blue. I explained that it had a fixed but unknown probability of a twofold payout, uniformly distributed between 0 and 0.9. The overall probability of a payout was 0.45, so the expectation value for gambling was 0.9—a bad bet. Yet your optimal strategy was to gamble a bit to figure out whether the odds were good or bad.
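These numbers are easy to check with a quick Monte Carlo sketch (mine, not from the original article): draw the payout probability uniformly from 0 to 0.9, gamble one coin per box, and the average return per $1 coin comes out near $0.90.

```python
import random

def blue_box_average(n_boxes=100_000):
    """Monte Carlo estimate of the blue box's average payout per $1 coin."""
    total = 0.0
    for _ in range(n_boxes):
        p = random.uniform(0.0, 0.9)  # fixed but unknown payout probability
        if random.random() < p:
            total += 2.0              # twofold payout on a win
    return total / n_boxes

random.seed(0)
print(round(blue_box_average(), 2))  # ≈ 0.90, i.e. a losing bet on average
```

The point of the article stands regardless: even though the *average* box loses money, a few exploratory coins can tell you whether *this particular* box has good odds.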

Let’s continue the experiment. I hand you a black box, shaped rather differently from the others. Its sealed faceplate is carved with runic inscriptions and eldritch figures. “I find this one *particularly* interesting,” I say.

What is the payout probability? What is your optimal strategy?

In the framework of the previous article, you have no knowledge about the insides of the box. So, as with the “sportsball” case I analyzed there, your meta-probability curve is flat from 0 to 1.

The blue box also has a flat meta-probability curve; but these two cases are very different. For the blue box, you know that the curve *really is* flat. For the black box, you have no clue what the shape of even the meta-probability curve is.

The relationship between the blue and black boxes is the same as that between the coin flip and sportsball—except at the meta level!

So if we’re going on in this style, we need to look at the distribution of *probabilities of probabilities of probabilities*. The blue box has a sharp peak in its meta-meta-probability (around flatness), whereas the black box has a flat meta-meta-probability.

You ought now to be a little uneasy. We are putting epicycles on epicycles. An infinite regress threatens.

Maybe at this point you suddenly reconsider the blue box… I *told* you that its meta-probability was uniform. But perhaps I was lying! How reliable do you think I am?

Let’s say you think there’s a 0.8 probability that I told the truth. That’s the meta-meta-probability of a flat meta-probability. In the *worst* case, the actual payout probability is 0, so the average *just plain probability* is 0.8 × 0.45 = 0.36. You can feed that worst case into your decision analysis. It won’t drastically change the optimal policy; you’ll just quit a bit earlier than if you were entirely confident that the meta-probability distribution was uniform.
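As a minimal sketch of that worst-case mixture (assuming, as above, that a lie means zero payout):

```python
p_truth = 0.8            # your confidence that my "uniform" story is true
p_payout_if_true = 0.45  # mean payout probability under the flat curve
p_payout_if_lying = 0.0  # worst case: the box never pays out

p_payout = p_truth * p_payout_if_true + (1 - p_truth) * p_payout_if_lying
print(round(p_payout, 2))      # 0.36 -- the average plain probability
print(round(2 * p_payout, 2))  # 0.72 -- expected return per $1 coin
```

A lower expected return shortens how long exploration stays worthwhile, which is why you'd quit a bit earlier.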

To get this really right, you ought to make a best guess at the meta-meta-probability *curve*. It’s not just 0.8 of a uniform probability distribution, and 0.2 of zero payout. That’s the *worst* case. Even if I’m lying, I might give you better than zero odds. How much better? What’s your confidence in your meta-meta-probability curve? Ought you to draw a meta-meta-meta-probability curve? Yikes!

Meanwhile… that black box is *rather sinister*. Seeing it makes you wonder. What if I rigged the blue box so there is a small probability that when you put a coin in, it jabs you with a poison dart, and you die horribly?

Apparently a zero payout is *not* the worst case, after all! On the other hand, this seems paranoid. I’m odd, but probably not *that* evil.

Still, what about the black box? You realize now that it could do *anything*.

- It might spring open to reveal a collection of fossil trilobites.
- It might play Corvus Corax’s *Vitium in Opere* at ear-splitting volume.
- It might analyze the trace DNA you left on the coin and use it to write you a *personalized* love poem.
- It might emit a strip of paper with a recipe for dundun noodles written in Chinese.
- It might sprout six mechanical legs and jump into your lap.

What is the probability of its giving you $2?

That no longer seems quite so relevant. In fact… it might be utterly meaningless! This is now a situation of **radical uncertainty**.

What is your optimal strategy?

I’ll answer that later in this sequence. You might like to figure it out for yourself now, though.

## Further reading

The black box is an instance of Knightian uncertainty. That’s a catch-all category for any type of uncertainty that can’t usefully be modeled in terms of probability (*or* meta-probability!), because you can’t make meaningful probability estimates. Calling it “Knightian” doesn’t help solve the problem, because there are many sources of non-probabilistic uncertainty. However, it’s useful to know that there’s a literature on this.

The blue box is closely related to Ellsberg’s paradox, which combines probability with Knightian uncertainty. Interestingly, it was invented by the same Daniel Ellsberg who released the Pentagon Papers in 1971. I wonder how his work in decision theory might have affected his decision to leak the Papers?

## Comments

Instead of metaprobabilities, the black box might be better thought of in terms of hierarchically partitioning possibility space.

Each sublist's probabilities should add up to its parent heading's probability, and the top-level headings' probabilities should add up to 1. Given how long the list is, all the probabilities are very small, though we might be able to organize them into high-level categories with reasonable probabilities and then tack on a "something else" category. Categories are map, not territory, so we can rewrite them to our convenience.

It's useful to call the number of pegs the "probability", which makes the probability of 45 pegs a "meta-probability". It isn't useful to call opera or yodeling a "probability", so calling the probability that a music box plays opera a "meta-probability" is really weird, even though it's basically the same sort of thing b…

The Bayesian Universalist answer to this would be that there is no separate meta-probability. You have a universal prior over all possible hypotheses, and mutter a bit about Solomonoff induction and AIXI.

I am putting it this way, distancing myself from the concept, because I don't actually believe it, but it is the standard answer to draw out from the LessWrong meme space, and it has not yet been posted in this thread. Is there anyone who can make a better fist of expounding it?

The idea of metaprobability still isn't particularly satisfying to me as a game-level strategy choice. It might be useful as a description of something my brain already does, and thus give me more information about how my brain relates to or emulates an AI capable of perfect Bayesian inference. But in terms of picking optimal strategies, perfect Bayesian inference has no subroutine called CalcMetaProbability.

My first thought was that your approach elevates your brain's state above states of the world as symbols in the decision graph, and calls the differ…

I don't have a full strategy, but I have an idea for a data-gathering experiment:

I hand you a coin and try to get *you* to put it in the box for me. If you refuse, I update in the direction of the box harming people who put coins in it. If you comply, I watch and see what happens.

Meta-probability seems like something that is reducible to expected outcomes and regular probability. I mean, what kind of box the black box is, is nothing more than what you expect it to do, conditional on what you have seen it do. If it gives you three dollars the next three times you play it, you'd then expect the fourth play to also give you three dollars (4/5ths of the time, by Laplace's Rule of Succession, an application of Bayes' Theorem).

Meta-probability may be a nifty shortcut, but it's reducible to expected outcomes and conditional probability.
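The 4/5 figure comes from Laplace's rule of succession, which gives P(next success) = (s + 1)/(n + 2) after s successes in n trials. A one-line check (an illustrative sketch, not from the comment):

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule: P(next success) = (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

print(rule_of_succession(3, 3))  # -> 4/5
```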

Obviously he was a rational thinker. And that seems to have implied thinking outside of the rules and customs. For him, leaking the papers was just one nontrivial option among many.

http://en.wikipedia.org/wiki/Epicycle#Epicycles

Ha!

A few terminological headaches in this post. Sorry for the negative tone.

There is talk of a "fixed but unknown probability," which should always set alarm bells ringing.

More generally, I propose that whenever one assigns a probability to some parameter, that parameter is guaranteed not to be a probability.

I am also disturbed by the mention of Knightian uncertainty, described as "uncertainty that can't be usefully modeled in terms of probability." Now there's a charitable interpretation of that phrase, and I can see that there may be a ps…

I throw the box into the corner of the room with a high pitched scream of terror. Then I run away to try to find thermite.

Edit: then I throw the ashes into a black hole, and trigger a True Vacuum collapse just in case.

You need to take advantage of the fact that probability is a consequence of incomplete information, and think about the models of the world people have that encode their information. "Meta-probability" only exists within a certain model of the problem, and if you totally ignore that, you get some drastically confusing conclusions.

The problem of what to expect from the black box?

I'd think about it like this: suppose that I hand you a box with a slot in it. What do you expect to happen if you put a quarter into the slot?

To answer this, we draw on our large store of human knowledge about boxes and the people who hand them to you. It's very likely that nothing at all will happen, but plenty of boxes emit sound, or gumballs, or temporary tattoos, or sometimes more quarters. And suppose that I have previously handed you a box that sometimes emits more quarters when you put quarters in. Then maybe you raise the probability that this one also emits quarters, et cetera.

Now, within this model you have a probability of some payoff, but only if it's one of the reward-emitting boxes, and it also has some probability of emitting sound, etc. What you call a "meta-probability" is actually the probability of some sub-model being verified or confirmed. Suppose you put one quarter in and two quarters come out: now you've drastically cut down the set of models that can describe the box. This is "updating the meta-probability."
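That updating step can be written as a toy Bayesian calculation over sub-models. The model names, priors, and likelihoods below are made-up numbers for illustration only:

```python
# Toy Bayesian update over sub-models of the mystery box.
# All priors and likelihoods here are invented for illustration.
priors = {"inert": 0.90, "quarter_emitter": 0.05, "gumball": 0.05}

# P(two quarters come out | model)
likelihood = {"inert": 0.0, "quarter_emitter": 0.6, "gumball": 0.02}

evidence = sum(priors[m] * likelihood[m] for m in priors)
posterior = {m: priors[m] * likelihood[m] / evidence for m in priors}

for model, p in posterior.items():
    print(model, round(p, 3))
# A single observation drives almost all the weight onto the
# quarter-emitting sub-model.
```

The "meta-probability" of a payout is just the posterior weight on the reward-emitting sub-models; one surprising observation can move it drastically.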

I like this article / post but I find myself wanting more at the end. A payoff or a punch line or at least a lesson to take away.