When mathematicians talk about probability, they do it in terms of a triplet ( Ω , F , P ) - a sample space, an event space, and a probability measure function, each with specific properties defined by the probability axioms.
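For reference, the "specific properties" here are the standard Kolmogorov axioms, which the measure P must satisfy:

```latex
% Kolmogorov's probability axioms for the triplet (Omega, F, P)
P(A) \ge 0 \quad \text{for every event } A \in \mathcal{F}
    % non-negativity
P(\Omega) = 1
    % the certain event has probability 1
P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i)
    \quad \text{for pairwise disjoint } A_1, A_2, \ldots \in \mathcal{F}
    % countable additivity
```

Everything else in probability theory - conditional probability, independence, expectation - is built on top of these three statements.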
For a layman it may not be clear what all these things mean. Mathematical language is precise, but it’s not exactly catered to our intuitions. We are more used to understanding things through stories.
And so, people came up with a story:
Imagine as if there are multiple universes - possible worlds - representing all the alternative ways things can be. Ω is the set of all possible worlds. We don’t know which of these possible worlds is our actual world.
F is a set of all possible facts about a world. In some possible worlds these facts are true; in others they are not. By learning facts about our world, we can figure out which of the possible worlds it is.
P represents our degree of belief in some facts about our world. A fact known to be true has P = 1; a fact known to be false has P = 0.
This story is okay-ish. It provides a somewhat intuitive idea of what probability theory is about - as long as we understand that it’s just that: a story, an intuition pump, not the actual principle behind things. Like the planetary model of the atom, it captures some aspects of the truth but not others.
While math is a truth-preservation mechanism that allows us to talk precisely about precise things, stories in natural language are much worse in this regard. Words are leaky generalizations; they can have multiple meanings and vague connotations. Therefore, when we try to communicate mathematical insights via natural language, some aspects of what was meant inevitably slip through our fingers. And if we try to do philosophy with the same naive terminology, treating it as the referent instead of a mere imperfect representation, we are naturally doomed to confusion.
Sadly, this is exactly what happened. When philosophers talk about probability, they take the “possible worlds” story at face value. They argue about the metaphysical reality of possible worlds; they infer their properties from vague intuitions. They build towers of assumptions on top of this shaky foundation and then try to solve mathematical problems with all this extra baggage.
Physical Uncertainty
Let’s see where the problems lie if we accept the framework of possible worlds as it is, starting with the simplest example - a fair coin toss.
Common sense tells us that our sample space consists of two outcomes:
Ω = {Heads; Tails}
But how do we justify it?
Now, if we used a saner framework, based on the notion of a probability experiment as an approximation of some real-world process, we could just toss the coin multiple times, see for ourselves what happens, and then generalize, arriving at a semantic agreement about which behavior of the coin counts as which outcome in our mathematical model.
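As a minimal sketch of the “toss it many times and generalize” approach (in Python, purely illustrative - the function name and seed are my own choices, not anything from the text):

```python
import random

def toss_experiment(n_tosses, seed=0):
    """Simulate n_tosses of a fair coin; return the empirical frequency of Heads."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The empirical frequency stabilizes near 1/2 as the number of tosses grows,
# which is what justifies the model Omega = {Heads, Tails} with P(Heads) = 1/2.
for n in (10, 1_000, 100_000):
    print(n, toss_experiment(n))
```

The point is only that the sample space and the probabilities are read off from the stabilizing behavior of a repeatable process, not deduced from a survey of all logically possible worlds.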
Not so in the framework of possible worlds! Here we are supposed to conceptualize all the ways the world could be that are logically consistent with our previous observations, and arrive at the conclusion that there are worlds where the coin comes up Heads and worlds where it comes up Tails. Why is this a problem? Several reasons:
First of all, it’s literally impossible to do with our human brains. We do not have enough cognitive resources to hold in mind all the facts about a world and check them for logical consistency.
Second, even if it were possible, we would have to do it for all the ways the world could be, to the best of our knowledge, which would take approximately infinite time.
And third, even if we magically could, it sounds like a total waste of time and energy, doesn’t it? Why would some random fact - say, whether a particular person on the other side of the world is wearing a blue cap - be relevant to the coin toss that I’m about to make here and now?
Of course, no one actually does all this work. People just imagine that they did, based on some vague intuition, without noticing a problem. But this is almost as bad. As a result, you do not even notice that the framework you are allegedly using is completely untenable, and that your conclusions are justified by nothing more than appeals to intuition.
What this says about whole domains of philosophy based on the notion of possible worlds, and about certain thought experiments about conceivability, I’m leaving, for now, as an exercise for the reader.
Logical Uncertainty
But this is only the beginning of our problems. Another huge issue with the framework of possible worlds is that it manages to make even less sense in the context of logical uncertainty.
For example:
What is the probability that the 121735329th digit of pi is odd?
Here, intuitively, it seems that the answer has to be 1/2 - unless, of course, one happens to have some extra knowledge about this particular digit. But how can we justify this with possible worlds, even in principle?
Pi’s 121735329th digit being something other than what it actually is, is not consistent with our observations. There is only one logically coherent “possible world” here - the actual one. We just… do not know what the value of pi’s 121735329th digit is in it.
This leads a lot of people to the conclusion that logical uncertainty is some deep mystery that we do not know how to approach - that it may work according to some different rules.
Meanwhile, when we use the framework of a probability experiment, there is nothing mysterious here. Among the digits of pi about which we know exactly as much as we do about the 121735329th, half are odd and half are even. We can perform an actual experiment and see for ourselves. Therefore:
P(Odd) = 1/2
Mystery solved.
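To make this concrete, here is a small sketch that runs exactly this experiment on the digits of pi. The digit generator is Gibbons’ well-known unbounded spigot algorithm; the sample size and variable names are my own illustrative choices:

```python
from itertools import islice

def pi_digits():
    """Yield the decimal digits of pi, starting with the leading 3
    (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

# Sample a stretch of digits we know "exactly as much" about and count parity.
digits = list(islice(pi_digits(), 2_000))
odd_fraction = sum(d % 2 for d in digits) / len(digits)
print(odd_fraction)  # close to 0.5
```

The odd/even split among a long stretch of digits hovers around 1/2, which is precisely the frequency the probability-experiment framework asks us to measure.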
“Self-Locating” Uncertainty
And let’s not forget the so-called “Self-Locating” uncertainty confusion, which I dissolved in a previous post. We can see how it originates from the same initial confusion about possible worlds.
If we conceptualize probability theory as reasoning about which possible world you are in, then what about reasoning about where you are in a possible world? After all, worlds are big, right? There are lots of places in them, and it seems, well, possible that you could be in different places in the same world.
From this one faulty assumption, all the wrong conclusions naturally follow. We start to conceptualize a separate magisterium of “self-locating probabilities” and a question of whether one can apply probability theory not just to possible worlds but also to “centred possible worlds”.
And from there it’s not much of a jump to start talking about the specialness of conscious observation and anthropic psychic powers: blackmailing reality into doing what you want by creating copies of yourself, predicting the future with extreme confidence, or knowing facts about the universe with certainty without even opening your eyes.
At which point, you might as well start believing in immaterial souls and an omnibenevolent God. You’ve already smuggled so much idealism into your ontology - why stop here?
Of course, when one understands that elements of the sample space are not “worlds” with their own places inside of them, but merely mutually exclusive and collectively exhaustive outcomes of a probability experiment, the idea of “centredness” is immediately revealed to be incoherent.
A probability experiment is already about your perspective - about the best of your knowledge state. Outcomes are elementary; there is nothing to center on. Either your knowledge of your location can be represented as an independent trial of the experiment, or it can’t. There is no ambiguity. It’s all very straightforward.
And no psychic powers. I know, it would’ve been awesome to have them, but alas.
Conclusion
With all this in mind, I think we should put the framework of possible worlds to rest. Whether or not it was really useful at some point in time, right now it’s doing us more harm than good, creating more confusion than it resolves.
It demands an impossible standard of logical omniscience and then naturally fails to work with logical uncertainty. It has tricked philosophers into arguing about “self-location” for decades, spawning multiple “paradoxes” and confusing materialists into idealist assumptions.
Even if the notion of a probability experiment is a bit harder to grasp, it saves us so much trouble down the line that it’s definitely worth it. It provides a unified way to reason straightforwardly about any type of uncertainty - one that systematically works for us in our actual world. And ultimately, isn’t that what matters?