The Wikipedia article for a probability experiment that you linked to has the following definition:
A mathematical description of an experiment consists of three parts:
- A sample space, Ω (or S), which is the set of all possible outcomes.
- A set of events F, where each event is a set containing zero or more outcomes.
- The assignment of probabilities to the events—that is, a function P mapping from events to probabilities.
You advocate using this instead of "possible worlds", but it looks like the sample space is essentially the same concept. What exactly is the difference in your view? Or do you take issue with this definition?
My issue is with the interpretation of Ω as a set of "all logically consistent possible worlds" instead of a set of "all possible elementary outcomes of a probability experiment".
The difference is that the former idea is untenable, doesn't work for logical uncertainty, and has confused everybody about self-location, while the latter provides a clear way to know what our Ω should be in every specific case.
Okay, but I don't think you've clarified how to make that choice yet then. Like, what if I decide my outcomes are logically possible worlds? I could maybe buy that humans get less confused by the word "outcomes" for whatever reason, but I don't see what principle you're advocating for.
My own view is that you formally need a semantic theory, e.g. model theory. "All logically consistent possible worlds" is often not well-founded as a set (depending on how unconstrained "world" is), so you'd need to clarify this first before trying to do anything with it (which no one does, so no wonder it's confusing). But it doesn't seem inherently wrong, conceptually, to me.
Okay, but I don't think you've clarified how to make that choice yet then.
Which choice? The choice of what the possible outcomes of a given experiment are? Check this post of mine, where I explain the application of the framework of probability experiments in more detail.
Like, what if I decide my outcomes are logically possible worlds?
Can you give me an example? I'm not sure I understand your question.
My own view is that you formally need a semantic theory, e.g. model theory. "All logically consistent possible worlds" is often not well-founded as a set (depending on how unconstrained "world" is), so you'd need to clarify this first before trying to do anything with it (which no one does, so no wonder it's confusing).
Well, yes. That's why the framework is bad and we should use a better one.
But it doesn't seem inherently wrong, conceptually, to me.
I don't understand what you mean by "inherently". I'm not trying to make any metaphysical claims about the essence of the idea of possible worlds or some such. I just notice that it works poorly for what we need from a framework for probability theory, and I propose a better one.
Yeah, the possible outcomes, thank you.
I mostly just mean what would stop one from saying e.g. "in the sleeping beauty experiment, the possible outcomes are that she's in one of this set of possible worlds". The well-founded thing is potentially one such reason, but I didn't see anything from you about what principle I should use to notice this sort of thing besides that it's just too hard for our poor human brains. That's not really compelling since probability theory at scale is not tractable, yet we still find it useful as a guiding star.
The logical uncertainty part is also unconvincing. Digits of irrational numbers are one of the most straightforward cases, so walking through that and saying "Mystery solved." feels like you're glossing over what actually makes it hard. Which I would say are things like logical self-reference, and formalizing how exactly computational updates work, including the weird edge cases.
Often, the solution to problems in a concept is to come up with a better version of the concept, but that doesn't feel like the point you were making.
Sadly, this is exactly what happened. When philosophers talk about probability, they take the “possible worlds” story at face value
Interpreted as "all philosophers who talk about possible worlds believe they are real", that isn't true. The belief that possible worlds are real, modal realism, is contentious.
The Wikipedia article attributes modal realism to David Lewis only, and cites many criticisms by other philosophers.
A lot of philosophers talk about possible worlds, and perhaps taking such talk literally is the basis of the belief that modal realism is widespread ... since that belief isn't based on what most philosophers actually believe. But "there is a possible world where X" can be read as "it is not impossible that X" -- there is no need to reify the world itself.
Talk of possible worlds is common, because of the popularity of modal logic, and modal logic is ...old ... the sort of thing rationalists should approve of? It allows you to put the notions of necessity and possibility on a formal basis, instead of leaving them as vague metaphysical notions.
Modal logic is not modal realism, no matter who says it is.
Not on the framework of possible worlds! Here we are supposed to conceptualize all the ways the world could be that are logically consistent with our previous observations and arrive at the conclusion that there are worlds where the coin comes Heads and worlds where it comes Tails
There's no single and obligatory notion of possibility involved in modal logic or modal realism... the logic works out the same for logical, physical and epistemic possibility, so long as you use matching definitions of possible and necessary. For coin and dice problems, a common sense notion is enough.
I don't know where you found the idea that modal realists have to picture a whole world in order to derive a common sense truth like "a coin can land heads or tails".
Interpreted as "all philosophers who talk about possible worlds believe they are real", that isn't true. The belief that possible worlds are real, modal realism, is contentious.
Yes, you shouldn't interpret it this way. Modal realism is not the point of contention of this post. The whole framework of "possible worlds" is, regardless of whether one thinks they are somehow real or not.
I don't know how you manage to keep misinterpreting me about it, even though I've already explained it to you under a different post.
Talk of possible worlds is common, because of the popularity of modal logic, and modal logic is ...old ... the sort of thing rationalists should approve of?
Why? Are we supposed to endorse everything that's old? Somehow I didn't get the memo.
It allows you to put the notions of necessity and possibility on a formal basis, instead of leaving them as vague metaphysical notions.
The sentiment is good; the implementation is not. To be useful, such a formalization should be based on the framework of probability experiments, not possible worlds, for the reasons described in the post.
When mathematicians talk about probability, they do it in terms of a triplet (Ω, F, P) - sample space, event space and probability measure - with specific properties defined by the probability axioms.
For a layman, it may not be clear what all these things mean. Mathematical language is precise, but it's not exactly catered to our intuitions. We are more used to understanding things through stories.
And so, people came up with a story:
Imagine that there are multiple universes - possible worlds - representing all the alternative ways things can be. Ω is the set of all possible worlds. We don't know which of these possible worlds is our actual world.
F is a set of all possible facts about a world. In some possible worlds these facts are true, in others they are not. By learning facts about our world, we can figure out which of the possible worlds it is.
P represents our degree of belief in some facts about our world. A fact known to be true has P = 1, a fact known to be false has P = 0.
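To make the story concrete, here is a minimal sketch of such a triple in code for a fair coin (the names `omega`, `events`, and `P` are purely illustrative, not part of the story above):

```python
from itertools import combinations

# Sample space: the set of all elementary outcomes
omega = frozenset({"Heads", "Tails"})

# Event space: here, every subset of omega counts as an event
def powerset(s):
    items = list(s)
    return {frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)}

events = powerset(omega)  # {}, {Heads}, {Tails}, {Heads, Tails}

# Probability measure: uniform over elementary outcomes
def P(event):
    assert event in events
    return len(event) / len(omega)

assert P(frozenset()) == 0.0          # the impossible event
assert P(frozenset({"Heads"})) == 0.5
assert P(omega) == 1.0                # the certain event
```

Nothing in this code says anything about "worlds"; the triple is complete without that interpretation.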
This story is okay-ish. It provides a somewhat intuitive idea of what probability theory is about - as long as we understand that it's just that: a story, an intuition pump, not the actual principle behind things. Like the planetary model of the atom, it captures some aspects of the truth but not others.
While math is a truth-preservation mechanism that allows us to precisely talk about precise things, stories in natural language are much worse in this regard. Words are leaky generalizations; they can have multiple meanings and vague connotations. Therefore, when we try to communicate mathematical insights via natural language, some aspects of what was implied inevitably slip through our fingers. And if we try to do philosophy with the same naive terminology, treating it as the referent instead of a mere imperfect representation, we are naturally doomed to confusion.
Sadly, this is exactly what happened. When philosophers talk about probability, they take the "possible worlds" story at face value. They argue about their metaphysical reality; they infer their properties based on vague intuitions. They build towers of assumptions on top of this shaky foundation and then try to solve mathematical problems with all this extra baggage.
Physical Uncertainty
Let’s see where the problems may lie if we accept the framework of possible worlds as it is. Starting from the simplest example - a fair coin toss.
Common sense tells us that our sample space consists of two outcomes:
Ω = {Heads; Tails}
But how do we justify it?
Now, if we used a saner framework, based on the notion of a probability experiment as an approximation of some real-world process, we could've just tossed the coin multiple times, seen for ourselves what happens, and then generalized, arriving at a particular semantic agreement about which behavior of the coin counts as which outcome in our mathematical model.
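The "toss it and see" procedure is easy to sketch (hypothetical code, with a pseudo-random coin standing in for the physical one):

```python
import random
from collections import Counter

random.seed(0)  # for reproducibility

# Run the experiment many times and record what actually happens
counts = Counter(random.choice(["Heads", "Tails"]) for _ in range(10_000))

# Two kinds of behavior show up, each with relative frequency near 1/2,
# which is what justifies the model: Ω = {Heads, Tails}, P = 1/2 each
for outcome in counts:
    print(outcome, counts[outcome] / 10_000)
```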
Not on the framework of possible worlds! Here we are supposed to conceptualize all the ways the world could be that are logically consistent with our previous observations and arrive at the conclusion that there are worlds where the coin comes Heads and worlds where it comes Tails. Why is this a problem? Several reasons:
Of course, no one actually does all this work. People just imagine that they did, based on some vague intuition, without noticing a problem. But this is almost as bad: as a result, you do not even notice that the framework you are allegedly using is completely untenable and that your conclusions are justified by nothing more than appeals to intuition.
What this has to say about the whole domains of philosophy based on the notion of possible worlds and certain mind experiments about conceivability, I’m, for now, leaving as an exercise for the reader.
Logical Uncertainty
But this is only the beginning of our problems. Another huge issue of the framework of possible worlds is that it manages to make even less sense in the context of logical uncertainty.
For example:
What is the probability that 121735329th digit of pi is odd?
Here, intuitively it seems that the answer has to be 1/2, unless, of course, one happened to have some extra knowledge about this particular digit. But how can we justify it with possible worlds even in principle?
Pi's 121735329th digit being anything other than what it actually is, is not consistent with our observations. There is only one logically coherent "possible world" here - the actual one. We just… do not know the value of pi's 121735329th digit in it.
Which leads a lot of people to the conclusion that logical uncertainty is some deep mystery that we do not know how to approach. That it may work according to some different rules.
Meanwhile, when we use the framework of probability experiments, there is nothing mysterious here. Among digits of pi about which we know exactly as much as about the 121735329th, half are odd and half are even. We can do an actual experiment and see for ourselves. Therefore:
P(Even) = 1/2
Mystery solved.
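That "actual experiment" can even be run in code. A minimal sketch (using Machin's formula with big-integer arithmetic; the helper names are mine): compute a stretch of pi's digits and check how often a digit is odd.

```python
def arctan_inv(x, scale):
    """arctan(1/x), scaled by `scale`, via the Taylor series."""
    total = term = scale // x
    n, sign = 1, 1
    while term:
        term //= x * x
        n += 2
        sign = -sign
        total += sign * (term // n)
    return total

def pi_digits(num_digits):
    """First `num_digits` decimal digits of pi, as a string ('3141...')."""
    scale = 10 ** (num_digits + 10)  # extra guard digits
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    pi_scaled = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
    return str(pi_scaled)[:num_digits]

digits = pi_digits(1000)[1:]  # drop the leading 3
odd_fraction = sum(int(d) % 2 for d in digits) / len(digits)
print(odd_fraction)  # close to 1/2
```

For digits we know nothing special about, the observed frequency of "odd" hovers around 1/2, which is exactly what the probability experiment framework predicts.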
“Self-Locating” Uncertainty
And let's not forget about the so-called "Self-Locating" uncertainty confusion, which I dissolved in a previous post. We can see how it originates from the initial confusion about possible worlds.
If we conceptualize probability theory as reasoning about which possible world you are in, then what about reasoning about where you are in a possible world? After all, worlds are big, right? There are lots of places in them, and it seems, well, possible that you can be in different places in the same world.
From this one faulty assumption all the wrong conclusions naturally follow. We start to conceptualize a separate magisterium of "self-locating probabilities" and ask whether one can apply probability theory not just to possible worlds but also to "centred possible worlds".
And from there it's not too much of a jump to start talking about the specialness of conscious observation and anthropic psychic powers: blackmailing reality into doing what you want by creating copies of yourself, predicting the future with extreme confidence, or knowing facts about the universe for certain without even opening your eyes.
At which point, you might as well start believing in immaterial souls and omnibenevolent God. You’ve already smuggled so much idealism into your ontology, why stop here?
Of course, when one understands that elements of the sample space are not "worlds" with their own places inside of them, but merely mutually exclusive and collectively exhaustive outcomes of a probability experiment, the idea of "centredness" is immediately revealed to be incoherent.
A probability experiment is already about your perspective, relative to your best state of knowledge. Outcomes are elementary; there is nothing to center on. Either your knowledge of your location can be represented as an independent trial of the experiment or it can't. There is no ambiguity. It's all very straightforward.
And no psychic powers. I know, it would’ve been awesome to have them, but alas.
Conclusion
With all this in mind, I think we should put the framework of possible worlds to rest. Whether it was really useful at some point in time or not, right now it’s doing us more harm than good, creating more confusion than it solves.
It demands an impossible standard of logical omniscience and then naturally fails to work with logical uncertainty. It tricked philosophers into arguing about "self-location" for decades, spawning multiple "paradoxes" and confusing materialists into idealist assumptions.
Even if the notion of a probability experiment is a bit harder to grasp, it saves us so much trouble down the line that it's definitely worth it. It provides us with a unified way to straightforwardly reason about any type of uncertainty that systematically works for us in our actual world. And ultimately, isn't that what matters?