Before I read Probability is in the Mind and Probability is Subjectively Objective, I was a realist about probabilities; I was a frequentist. After I read them, I was just confused. I couldn't understand how a mind could accurately say that the probability of drawing a heart from a standard deck of playing cards was anything other than 25%. It wasn't until I tried to explain the contrast between my view and the subjective view, in a comment on Probability is Subjectively Objective, that I realized I had been a subjective Bayesian all along. So, if you've read Probability is in the Mind and Probability is Subjectively Objective but still feel a little confused, hopefully this will help.

I should mention that I'm not sure that EY would agree with my view of probability, but the view to be presented agrees with EY's view on at least these propositions:

  • Probability is always in a mind, not in the world.
  • The probability that an agent should ascribe to a proposition is directly related to that agent's knowledge of the world.
  • There is only one correct probability to assign to a proposition given your partial knowledge of the world.
  • If there is no uncertainty, there is no probability.

And any position that holds these propositions is a non-realist-subjective view of probability.
Imagine a pre-shuffled deck of playing cards and two agents (they don't have to be humans), named "Johnny" and "Sally", who are betting 1 dollar each on the suit of the top card. As everyone knows, 1/4 of the cards in a playing card deck are hearts. We will name this belief F1; F1 stands for "1/4 of the cards in the deck are hearts." Johnny and Sally both believe F1. F1 is all that Johnny knows about the deck of cards, but Sally knows a little bit more about this deck. Sally also knows that 8 of the top 10 cards are hearts. Let F2 stand for "8 of the top 10 cards are hearts." Sally believes F2. Johnny doesn't know whether or not F2 is true. F1 and F2 are beliefs about the deck of cards, and they are either true or false.

So, Sally bets that the top card is a heart and Johnny bets against her, i.e., she puts her money on "The top card is a heart." being true; he puts his money on "~The top card is a heart." being true. After they make their bets, one could imagine Johnny making fun of Sally; he might say something like: "Are you nuts? You know, I have a 75% chance of winning. 1/4 of the cards are hearts; you can't argue with that!" Sally might reply: "Don't forget that the probability you assign to '~The top card is a heart.' depends on what you know about the deck. I think you would agree with me that there is an 80% chance that 'The top card is a heart.' is true if you knew just a bit more about the state of the deck."

To be undecided about a proposition is to not know which possible world you are in: am I in the possible world where that proposition is true, or in the one where it is false? Both Johnny and Sally are undecided about "The top card is a heart."; their model of the world splits at that point of representation. Their knowledge is consistent with being in a possible world where the top card is a heart, or in a possible world where the top card is not a heart. The more statements they decide on, the smaller the configuration space of possible worlds they think they might find themselves in; deciding on a proposition takes a chunk off of that configuration space, and the content of that proposition determines the shape of the eliminated chunk. Sally's and Johnny's beliefs constrain their respective expected experiences, but not all the way to a point. The trick when constraining one's space of viable worlds is to make sure that the real world is among the possible worlds that satisfy your beliefs. Sally still has the upper hand, because her space of viable possible worlds is smaller than Johnny's: there are many more ways to arrange a standard deck of playing cards that satisfy F1 than there are ways that satisfy both F1 and F2. To be clear, we don't need to believe that possible worlds actually exist to accept this view of belief; we just need to believe that any agent capable of being undecided about a proposition is also capable of imagining alternative ways the world could consistently turn out to be, i.e., capable of imagining possible worlds.

For convenience, we will say that a possible world W is viable for an agent A if and only if W satisfies A's background knowledge of decided propositions, i.e., A thinks that W might be the world it finds itself in.

Of the possible worlds that satisfy F1, i.e., the possible worlds where "1/4 of the cards are hearts" is true, 3/4 also satisfy "~The top card is a heart." Since Johnny holds F1, and since he has no further information that might put stronger restrictions on his space of viable worlds, he ascribes a 75% probability to "~The top card is a heart." Sally, however, holds F2 as well as F1. She knows that of the possible worlds that satisfy F1, only 1/4 satisfy "The top card is a heart." But she holds a proposition that constrains her space of viable possible worlds even further, namely F2. Most of the possible worlds that satisfy F1 are eliminated as viable worlds if we hold F2 as well, because most of the possible worlds that satisfy F1 don't satisfy F2. Of the possible worlds that satisfy F2, exactly 80% satisfy "The top card is a heart." So, duh, Sally assigns an 80% probability to "The top card is a heart." They give that proposition different probabilities, and they are both right in assigning their respective probabilities; they don't disagree about how to assign probabilities, they just have different resources for doing so in this case. P(~The top card is a heart|F1) really is 75%, and P(The top card is a heart|F2) really is 80%.
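Sally's 80% can be checked with a small counting sketch. It relies on a symmetry the setup grants us: among decks satisfying F2, every choice of which 8 of the top 10 positions hold hearts is equally represented, so we can count position-patterns instead of whole decks:

```python
from itertools import combinations

# F2: exactly 8 of the top 10 cards are hearts. By symmetry, each way of
# choosing which 8 of the 10 top positions hold hearts is equally common
# among the viable decks, so counting patterns suffices.
patterns = list(combinations(range(10), 8))          # 45 patterns: C(10, 8)
top_is_heart = [p for p in patterns if 0 in p]       # 36 patterns: C(9, 7)

print(len(top_is_heart) / len(patterns))             # 36/45 = 0.8
```

The fraction of F2-satisfying patterns in which position 0 holds a heart is exactly 8/10, matching Sally's assignment.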

This setup makes it clear (to me at least) that the right probability to assign to a proposition depends on what you know. The more you know, i.e., the more you constrain the space of worlds you think you might be in, the more useful the probability you assign. The probability that an agent should ascribe to a proposition is directly related to that agent's knowledge of the world.

This setup also makes it easy to see how an agent can be wrong about the probability it assigns to a proposition given its background knowledge. Imagine a third agent, named "Billy", who has the same information as Sally but says that there's a 99% chance of "The top card is a heart." Billy doesn't have any information that further constrains the possible worlds he thinks he might find himself in; he's just wrong about the fraction of possible worlds satisfying F2 that also satisfy "The top card is a heart." Of all the possible worlds that satisfy F2, exactly 80% satisfy "The top card is a heart.", no more, no less. There is only one correct probability to assign to a proposition given your partial knowledge.

The last benefit of this way of talking I'll mention is that it makes probability's dependence on ignorance clear. We can imagine another agent that knows the truth value of every proposition; let's call him "FSM". There is only one possible world that satisfies all of FSM's background knowledge; the only viable world for FSM is the real world. Of the possible worlds that satisfy FSM's background knowledge, either all of them satisfy "The top card is a heart." or none of them do, since there is only one viable world for FSM. So the only probabilities FSM can assign to "The top card is a heart." are 1 or 0. In fact, those are the only probabilities FSM can assign to any proposition. If there is no uncertainty, there is no probability.

The world knows whether or not any given proposition is true (assuming determinism). The world itself is never uncertain, only the parts of the world that we call agents can be uncertain. Hence, Probability is always in a mind, not in the world. The probabilities that the universe assigns to a proposition are always 1 or 0, for the same reasons FSM only assigns a 1 or 0, and 1 and 0 aren't really probabilities.

In conclusion, I'll risk the hypothesis that: where 0≤x≤1, "P(a|b)=x" is true if and only if, of the possible worlds that satisfy "b", the fraction x of them also satisfy "a". Probabilities are propositional attitudes, and the probability value (or range of values) you assign to a proposition represents the fraction of the worlds you find viable that satisfy that proposition. You may be wrong about the value of that fraction, and as a result you may be wrong about the probability you assign.
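The hypothesis can be computed literally over a finite space of imaginable worlds. A minimal sketch (the three atomic propositions here are illustrative placeholders, not from the post):

```python
from itertools import product

# A toy space of imaginable worlds: every truth assignment to three
# atomic propositions (names are illustrative only).
atoms = ("rain", "wet_grass", "sprinkler")
worlds = [dict(zip(atoms, vals)) for vals in product([True, False], repeat=3)]

def msf_prob(a, b, worlds):
    """P(a|b) under MSF: the fraction of b-satisfying worlds that satisfy a."""
    viable = [w for w in worlds if b(w)]   # worlds compatible with background b
    return sum(a(w) for w in viable) / len(viable)

# Given only "the grass is wet", how likely is rain? Of the 4 wet-grass
# worlds, 2 have rain, so the satisfaction fraction is 0.5.
print(msf_prob(lambda w: w["rain"], lambda w: w["wet_grass"], worlds))
```

Note that the function takes the background knowledge b as an argument; there is no unconditional `msf_prob(a)`, which matches the claim below that MSF has no non-conditional probabilities.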

We may call the position summarized by the hypothesis above "Modal Satisfaction Frequency theory", or "MSF theory".

This theory is widely known as the "classical theory of probability" (see here: http://plato.stanford.edu/entries/probability-interpret/#ClaPro). The main problems are:

  1. Fares poorly with infinite sets of events, as noted above.
  2. Can't handle irrational probabilities in an obvious way. Given a 1"x1" square, what's the probability of choosing a point within the inscribed circle of radius .5"? The answer, π/4, is irrational, so it can't be a ratio of finite counts of possibilities.
  3. Not clear how to handle "weighted" possibilities. If a coin is biased towards heads, there's still only two possibilities (it'll land on heads or land on tails), but p(heads) > 50%.
  4. Runs into problems with the principle of indifference. There are lots of different ways of partitioning the same set of events into finitely many disjoint alternatives. How do we pick the "right" partitioning?
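Problem 3 can be made concrete with a tiny sketch (the 60/40 bias is an assumed example; the weights are supplied from outside, which is exactly the difficulty):

```python
# A coin biased 60/40 toward heads still has just two "worlds".
worlds = ["heads", "tails"]

# Naive world-counting: the satisfaction fraction for "heads" is 1/2.
naive = sum(w == "heads" for w in worlds) / len(worlds)

# Only by attaching weights (a measure) do we recover p(heads) = 0.6,
# and nothing in MSF itself says where these weights come from.
weights = {"heads": 0.6, "tails": 0.4}
weighted = sum(weights[w] for w in worlds if w == "heads")

print(naive, weighted)   # 0.5 0.6
```

One possible repair is to say the biased coin's worlds should be individuated more finely (e.g. by the physical trajectories that land heads), but then the choice of partition is doing the work, which is problem 4 again.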

The two theories are structurally very similar. The only difference I've noticed is that the classical theory of probability implies that the probability of an event cannot be different given two different pieces of background knowledge, i.e., that the probability of an event is a fact about the event in possibility space.

Different people may be equally undecided about different things, which suggests that Laplace is offering a subjectivist interpretation in which probabilities vary from person to person depending on contingent differences in their evidence. This is not his intention.

MSF does not hold that any event has a probability independent of some agent's evidence; probabilities are propositional attitudes, hence properties of an agent. MSF doesn't hold that P(E) = (number of worlds where E holds) / (number of possible worlds), because MSF doesn't accept non-conditional probabilities. You always assign a probability based on your background knowledge, i.e., given the other propositions you hold.

However, most of the problems that make difficulties for the classical theory also make difficulties for MSF theory. MSF must either be modified in some way to address those issues, or discarded as a hypothesis.

To be undecided about a proposition is to not know which possible world you are in; am I in the possible world where that proposition is true, or in the one where it is false?

Better framing: if a proposition is about worlds, then ask whether you are among the worlds about which the proposition is true. It is true about the worlds it is true about "in all worlds"; no relativism. But there are also propositions that are just false: the worlds they are true about don't exist, so you are not in a world about which they are true. Those worlds are not out there either, and yet they still hold some of the probability mass in a model that doesn't know they don't make sense.

There are a lot of issues with ontologically committing yourself to possible worlds (just look up modal realism and any W. V. Quine essay on it) that are beyond the scope of this post. I'm not saying that we shouldn't use phrases like "there are propositions that are just false, the worlds they are true about don't exist", but such phrases do imply that possible worlds exist. In my formulation, I take special care not to ontologically commit myself to possible worlds. To avoid that commitment, I always treat possible worlds as parts of a model, not of the modeled:

Both Johnny and Sally are undecided about "The top card is a heart."; their model of the world splits at that point of representation.

Each agent has its own unique set of possible worlds it thinks it might find itself in, determined by that agent's background knowledge. Maybe I should use the phrase "hypothetical world" or "imaginable world" instead of "possible world" to make that clearer.

I added this sentence thinking about your comment, thanks for the help:

To be clear, we don't need to believe that possible worlds actually exist to accept this view of belief; we just need to believe that any agent capable of being undecided about a proposition is also capable of imagining alternative ways the world could consistently turn out to be, i.e., capable of imagining possible worlds.

Works in the finite case. In the infinite case you're forced to regard some measure on the set of possible worlds as fundamental and that more or less leaves you back where you started.

Could you elaborate? Do you mean in the cases where an infinite number of possible worlds satisfy one's background knowledge? And what do you mean by

regard some measure on the set of possible worlds as fundamental

Could you elaborate? Do you mean in the cases where an infinite number of possible worlds satisfy one's background knowledge?

Yes, precisely.

And what do you mean by

regard some measure on the set of possible worlds as fundamental

Well, I'll explain this. If there are finitely many possible worlds, you can get away with just saying that each possible world has exactly the same probability, in the absence of observations. If there are infinitely many, however, this is mathematically impossible; you must decide on some function that, given a set of possible worlds, gives the probability of that set. Such a function is called a probability measure.

To see why it's necessary to decide on such a function, suppose that there is some number X, and the only thing that you know about X is that it's a real number. What's the probability that X lies between 0 and 1? Between 1 and 2? Between Graham's number, and Graham's number plus one? It's simply impossible to assign them all equal probabilities (at least, if you insist that your probabilities be real numbers).
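A quick sketch of what deciding on a measure looks like in practice. The standard normal is picked here purely for illustration; nothing in the argument privileges it, and a different choice would assign different numbers to the same intervals:

```python
import math

# No uniform distribution exists on all of R, so any probability assigned to
# "X lies in [a, b]" smuggles in a choice of measure. Here: standard normal,
# whose CDF is expressible via the error function.
def normal_prob(a, b):
    cdf = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return cdf(b) - cdf(a)

print(normal_prob(0, 1))   # ~0.341 under this (arbitrary) measure
print(normal_prob(1, 2))   # ~0.136 -- equal-length intervals differ
```

Under this measure the interval from Graham's number to Graham's number plus one gets probability indistinguishable from zero, while [0, 1] gets about a third; a different measure could reverse that, and the agent's evidence alone doesn't settle which to use.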

I doubt that physically implementable agents ever actually compute an infinite number of possible worlds, but if they do, then MSF theory has to deal with it or is likely wrong. One way of dealing with this is to do Bayesian algebra with the hyperreal or surreal numbers (I'm guessing), but I'm not sure that you can do that mathematically, or that "P(a)=x where x is a hyperreal (or surreal) number" makes any sense. Another argument you could make is that, for some reason, some particular probability measure is privileged, or that some particular probability measure is the one you should use when dealing with an infinite number of worlds. Though I guess we shouldn't be too surprised if the probabilities in the mind don't actually take on real values; we could in theory just use "high", "low", "higher than", "lower than", "necessary", and "impossible".

But this is definitely something worth thinking about if I, or anyone else for that matter, wants to take MSF theory as a serious hypothesis, and not just a dandy intuition pump.

One way of dealing with this is to do Bayesian algebra with the hyperreal or surreal numbers (I'm guessing), but I'm not sure that you can do that mathematically, or that "P(a)=x where x is a hyperreal (or surreal) number" makes any sense.

"Nonstandard analysis" is not a substantive departure from the standard real number system; it is simply an alternative language that some people like for aesthetic reasons, or can sometimes be useful for "bookkeeping". Basically, if nonstandard analysis solves your problem, there was already a solution in terms of standard analysis. Robinson's "hyperreals" essentially just replace the concept of a "limit", and do not represent a fundamentally "new kind of number" in the way that, say, Cantor's transfinite ordinals do.

Conway's surreals are a different story. However, if the sorts of problems you're talking about could be solved simply by saying "oh, just use that other number system over there", they would have been solved long ago (and everybody would probably be using that other number system).

Sally still has the upper hand, because her space of viable possible worlds is smaller than Johnny's.

Probably.