Cross-Posted on By Way of Contradiction

As you may know from my past posts, I believe that probabilities should not be viewed as uncertainty, but instead as weights on how much you care about different possible universes. This is a very subjective view of reality. In particular, it seems to imply that when other people have different beliefs than me, there is no sense in which they can be wrong. They just care about the possible futures with different weights than I do. I will now try to argue that this is not a necessary conclusion.

First, let's be clear about what we mean by saying that probabilities are weights on values. Imagine I have an unfair coin which gives heads with probability 90%. I care 9 times as much about the possible futures in which the coin comes up heads as I do about the possible futures in which the coin comes up tails. Notice that this does not mean I want the coin to come up heads. What it means is that I would prefer getting a dollar if the coin comes up heads to getting a dollar if the coin comes up tails.
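As a minimal Python sketch of that bookkeeping (assuming the 9-to-1 weights from the example; the payoff dictionaries and the function are just for illustration), the preference between the two dollar bets falls out of the weights alone:

    # Weights on possible futures, taken from the example above: heads-worlds
    # get 9 times the weight of tails-worlds.
    weights = {"heads": 0.9, "tails": 0.1}

    def weighted_value(payoffs):
        """Sum of (caring weight) * (payoff in that kind of world)."""
        return sum(weights[world] * payoffs.get(world, 0.0) for world in weights)

    dollar_if_heads = {"heads": 1.0}
    dollar_if_tails = {"tails": 1.0}

    print(weighted_value(dollar_if_heads))  # 0.9
    print(weighted_value(dollar_if_tails))  # 0.1
    # Preferring the first bet is just "caring 9 times as much about heads-worlds";
    # it says nothing about wanting the coin to land heads.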

Now, imagine that you are unaware of the fact that it is an unfair coin. By default, you believe that the coin comes up heads with probability 50%. How can we express the fact that I have a correct belief, and you have an incorrect belief in the language of values?

We will take advantage of the language of terminal and instrumental values. A terminal value is something that you try to get because you want it. An instrumental value is something that you try to get because you believe it will help you get something else that you want.

If you believe a statement S, that means that you care more about the worlds in which S is true. If you terminally assign a higher value to worlds in which S is true, we will call this belief a terminal belief. On the other hand, if you believe S because you think that S is logically implied by some other terminal belief, T, we will call your belief in S an instrumental belief.

Instrumental values can be wrong, if you are factually mistaken about whether the instrumental value will actually help achieve your terminal values. Similarly, an instrumental belief can be wrong if you are factually mistaken about whether it is implied by your terminal belief.

Your belief that the coin will come up heads with probability 50% is an instrumental belief. You have a terminal belief in some form of Occam's razor. This causes you to believe that coins are likely to behave similarly to how coins have behaved in the past. In this case, that inference was not valid, because you did not take into consideration the fact that I chose the coin for the purpose of this thought experiment. Your instrumental belief is therefore wrong. If your belief in Occam's razor is terminal, then it would not be possible for Occam's razor to be wrong.

This is probably a distinction that you are already familiar with. I am talking about the difference between an axiomatic belief and a deduced belief. So why am I viewing it like this? I am trying to strengthen my understanding of the analogy between beliefs and values. To me, they appear to be two different sides of the same coin, and building up this analogy might allow us to translate some intuitions or results from one view into the other view.


I guess the problem I'm having with this definition is that "care" has such a fuzzy meaning that I'm not entirely sure what it does and doesn't include. What if I care more about not losing than about winning; does that change the odds?

Or, for that matter, what if I say "I only care about universes where God exists, and that is a terminal value of mine"; is that the same as saying "from my point of view, God exists with a probability of 1"? If so, then what does that mean if we are actually in a universe where God existing has a probability of 0?

I believe the point is not "Probabilities are made of Caring rather than Truth." but rather "Probabilities and Values are made of the same kind of stuff."

I'm still not sure that makes sense.

If you care about X, if you want X to happen, then your goal as a rational actor should be to figure out what set of steps you can take to increase the odds of X happening. If a student wants to pass his history test tomorrow, and he thinks there's a 60% chance he will if he doesn't study and an 80% chance he will if he does study, then he should study. I'm not sure how you figure that out if you have "caring" and "probability" confused, though.

Probability is how likely something is to happen given certain circumstances; values are what you want to happen. If you confuse the two, it seems to me you're probably going to lose a lot of poker games.

(Maybe I'm just missing something here; I'm just not seeing how you can conflate the two without losing the decision-making value of understanding probability).

If a student wants to pass his history test tomorrow, and he thinks there's a 60% chance he will if he doesn't study and an 80% chance he will if he does study, then he should study.

Let's work out this example. "A student wants to pass his history test tomorrow." What does that even mean? The student doesn't have any immediate experience of the history test tomorrow; it's only grasped as an abstract concept. Without any further grounding he might as well want to be post-utopian. "He thinks there's a 60% chance he will if he doesn't study and an 80% chance he will if he does study." Ahh, there's how the concept is grounded in terms of actions. The student considers not studying as equivalent to 0.6 times passing the history test, and studying as 0.8 times passing. Now his preferences translate into preferences over actions. "Then he should study." Because that's what his preference over actions tells him to prefer.

In other words, probability estimates are a method for turning preferences over abstract concepts into preferences over immediate experiences. This is the method people prefer to use for many abstractions, particularly abstractions about "the future", with the presumption that these abstractions will in "the future" become immediate experience, but it is not necessary, and people may prefer to use other methods for different abstractions.
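As a rough Python sketch of that translation (using the numbers from the example above, plus an assumed utility scale of 1 for passing and 0 for failing, which is not in the original comment):

    # Probability of passing for each action, as given in the example above.
    p_pass = {"study": 0.8, "dont_study": 0.6}
    utility_pass, utility_fail = 1.0, 0.0  # assumed scale, for illustration only

    def action_value(action):
        """Expected utility: turns a preference over the abstract outcome
        "pass the test" into a preference over immediate actions."""
        p = p_pass[action]
        return p * utility_pass + (1 - p) * utility_fail

    print({a: action_value(a) for a in p_pass})  # {'study': 0.8, 'dont_study': 0.6}
    print(max(p_pass, key=action_value))         # 'study'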

That example, and pretty much everything else that comes up outside contrived corner cases, is embedded in complex webs of cause and effect, where indeed probability and values are very different in practice. But when you consider entire universes at a time that cannot causally interact, you gain a degree of freedom, and if you want to keep the distinction you have to make an arbitrary choice, which is (a) unaesthetic, and (b) a choice different agents will make differently, making it harder to reason about them. But really it's semantics; the models are isomorphic as far as I can tell.

Nothing has a probability of 0. From Coscott's POV, we are not actually in any particular universe; we are simultaneously in all imaginable universes consistent with our experience. So it is a valid option to e.g. care only about the universes in which God will speak to me from the heavens tomorrow at 14:15, in which case I can treat it as a probability 1 event.

I don't feel satisfied by this extremely lax approach but I also don't have a compelling argument against it.

Nothing has a probability of 0.

Subjectively, maybe not. We may never be able to bring the Bayesian probability of something down to zero, from our point of view. However, objectively, there are some things that are simply not true, and in practice you could test them an infinite number of times and they would never happen.

Maybe we can never know for sure which things those are, but that's not the same as saying they don't exist.

Why do you think it's the case? In a Tegmark IV multiverse, all mathematical possibilities exist. There are no things which are "simply not true", just things that are rare in the multiverse according to some measure.

In a Tegmark IV multiverse, all mathematical possibilities exist.

There are several different assumptions in that statement, which aren't really worth unpacking, and it's not really relevant here anyway, since we're only talking about one specific universe, the one we happen to live in.

Let me rephrase. There are two very different statements I could make here:

1) I believe, with 99% certainty, that there is zero chance of a divine intervention happening next week.
2) There is a 99% chance that there won't be a divine intervention next week, and a 1% chance that there will be.

Statement 1 and statement 2 are describing very, very different universes. In the universe described by statement 2, there is a God, and he intervenes about once every 100 weeks. In statement 1, I am 99% certain that I am in a universe where there is no God that ever intervenes in people's lives, and if I am correct then there is zero chance of a divine intervention next week or any week.
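To put rough numbers on that difference, here is a small Python sketch (the intervention rate in the 1% branch of statement 1 is a hypothetical placeholder, since the statement leaves it unspecified):

    # Statement 2: one universe, in which interventions happen about once per 100 weeks.
    p_intervention_s2 = 0.01

    # Statement 1: 99% credence in a universe where the chance is exactly zero,
    # 1% credence in universes where it is something else (placeholder value below).
    p_no_god = 0.99
    p_intervene_if_god = 0.01  # hypothetical placeholder, not from the comment
    p_intervention_s1 = p_no_god * 0.0 + (1 - p_no_god) * p_intervene_if_god

    print(p_intervention_s1)  # ~0.0001
    print(p_intervention_s2)  # 0.01
    # The same "99%" figure gives very different betting odds: credence about which
    # universe you are in is not the same thing as an in-universe frequency.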

There is a vast difference between my personal subjective certainty of how likely an event is to happen, and the objective probability of something happening according to the objective reality of the universe we live in. My personal subjective uncertainty will vary based on the evidence I happen to have at this moment in time, with Bayes' theorem and all that. However, the objective state of the universe will not change based on what evidence I do or don't happen to have.

We may never know for sure if we live in a universe with a divine being that occasionally intervenes in our lives, but in reality either there is one or there is not one, and if there is not one then the odds of God intervening in our lives are zero.

I think a mistake a lot of people make after they learn about Bayes' Theorem is that they then get confused between their personal current subjective state of knowledge and the objective state of the universe.

There are several different assumptions in that statement, which aren't really worth unpacking, and it's not really relevant here anyway, since we're only talking about one specific universe, the one we happen to live in.

There's no such universe. We exist simultaneously in all universes consistent with our experience.

Statement 1 and statement 2 are describing very, very different universes. In the universe described by statement 2, there is a God, and he intervenes about once every 100 weeks. In statement 1, I am 99% certain that I am in a universe where there is no God that ever intervenes in people's lives, and if I am correct then there is zero chance of a divine intervention next week or any week.

There are universes in which God doesn't exist. There are universes in which God (or a god) exists. We can discuss the measure of the former w.r.t. the measure of the latter or we can discuss the frequency of divine intervention in the latter.

I think a mistake a lot of people make after they learn about Bayes' Theorem is that they then get confused between their personal current subjective state of knowledge and the objective state of the universe.

Why do you think there is an "objective state of the universe"? The only fundamentally meaningful distinction is between

  1. The measure in the space of universes defined by the sum of available evidence
  2. The approximation to (1) produced by our limited computational power / analytic ability

There's no such universe. We exist simultaneously in all universes consistent with our experience.

That's an interesting way to look at things.

I'm curious; is it more useful to look at it that way than the more standard separation of subjective experience on one hand and objective reality on the other that most people make? When does that viewpoint make different predictions, if ever? Is it easier to use that as a viewpoint?

Your viewpoint does make sense; at least at the quantum-mechanics level, it probably is a valid way to view the universe. At a macro level, though, I think "all universes consistent with our experience" are probably almost exactly the same as "there is one objective universe"; it's just that we don't have brains capable of using the data we already have to eliminate most of the possibilities. A superintelligence with the same data set we have would probably be able to figure out what "objective reality" looks like 99.9% of the time (on a macro level, at least); which means that most of your "possible universes" can't actually exist in a way that's consistent with our experiences, we're just not smart enough to figure that out yet.

I'm curious; is it more useful to look at it that way than the more standard separation of subjective experience on one hand and objective reality on the other that most people make? When does that viewpoint make different predictions, if ever? Is it easier to use that as a viewpoint?

If you assume you exist in a single "objective" universe then you should be able to assign probabilities to statements of the form "I am in universe U". However, it is not generally meaningful, as the following example demonstrates.

Suppose there is a coin which you know to be either a fair coin or a biased coin with 0.1 probability for heads and 0.9 probability for tails. Suppose your subjective probability of the coin being fair is 50%. After observing a sequence of coin tosses you should be able to update your subjective probability.

Now let's introduce another assumption: When the coin lands tails, you are split into 9 copies. When the coin lands heads nothing special happens. Consider again a sequence of coin tosses. How should you update your probability of the coin being fair? Should you assume that because of the 9 copy formation your subjective a priori probability for getting tails is multiplied by 9? The Anthropic Trilemma raises its ugly head.
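Here is a small Python sketch of the two candidate updates; the "copy-weighted" version is just one way of cashing out "multiply the a priori probability of tails by 9" (renormalizing per hypothesis), not a settled answer, and the fact that it gives a different posterior is the trilemma in miniature:

    P_HEADS = {"fair": 0.5, "biased": 0.1}

    def toss_prob(coin, outcome, copy_weighted=False):
        """Per-toss probability, optionally reweighting tails for the 9 copies."""
        p_h, p_t = P_HEADS[coin], 1 - P_HEADS[coin]
        if copy_weighted:
            p_t *= 9  # 9 observer-copies are created on tails
            p_h, p_t = p_h / (p_h + p_t), p_t / (p_h + p_t)  # renormalize
        return p_h if outcome == "H" else p_t

    def posterior_fair(tosses, copy_weighted=False):
        """P(fair) after a toss sequence like "TTTTH", starting from a 50/50 prior."""
        like = {coin: 1.0 for coin in P_HEADS}
        for t in tosses:
            for coin in like:
                like[coin] *= toss_prob(coin, t, copy_weighted)
        return 0.5 * like["fair"] / (0.5 * like["fair"] + 0.5 * like["biased"])

    print(round(posterior_fair("TTTTH"), 3))        # ordinary Bayesian update: 0.323
    print(round(posterior_fair("TTTTH", True), 3))  # copy-weighted update: 0.85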

IMO, the right answer is that of UDT: There are no meaningful subjective expectations. There are only answers to decision theoretic questions, e.g. questions of the sort "on what should you bet, assuming the winnings of all your clones are accumulated in a given manner and you want to maximize the total profit". Therefore there is also no meaningful way to perform a Bayesian update, i.e. there are no meaningful epistemic probabilities.

Everything becomes clear once you acknowledge all possibilities coexist and your decisions affect all of them. However, when you're computing your utility you should weight these possibilities according to the Solomonoff prior. In my view, the weights represent how real a given possibility is (the amount of "magic reality fluid"). In Coscott's view it is just a part of the utility function.

which means that most of your "possible universes" can't actually exist in a way that's consistent with our experiences, we're just not smart enough to figure that out yet.

Not exactly. There is no way to rule out e.g. you seeing purple pumpkins falling out of the sky in the next second. It is not inconsistent, it is just improbable. Worse, since subjective expectations don't make sense, you can't even say it's improbable. The only thing you can say is that you should be making your decisions as if purple pumpkins are not going to fall out of the sky.

Not exactly. There is no way to rule out e.g. you seeing purple pumpkins falling out of the sky in the next second. It is not inconsistent, it is just improbable.

Well, let me put it this way. If there is no mathematically consistent and logically consistent universe where everything that I already know is true is actually true, and where purple pumpkins are going to suddenly fall out of the sky, then it is impossible for that to happen. That is true even if I, personally, am not intelligent enough to do the math to demonstrate that it is not possible based on my previous observations.

You will never experience two different things that are actually logically inconsistent with each other. Which means that every time you experience anything, it automatically rules out any number of possibilities, and that's true whether you know it or not.

I suspect (although I don't know for sure) that a superintelligence would be able to rule out most possibilities with a fairly small amount of hard evidence, to a much greater extent than we can. So that means that if you have access to that same information, then many things are, in fact, impossible for you to ever experience, because they're inconsistent with things you already know, even if no human or group of humans has the intelligence to actually prove that they're inconsistent.

How about mathematical impossibilities?

These are probability 0, probably. Unless there is a Tegmark V multiverse of inconsistent mathematics, like Coscott suggested. However, e.g. "God exists" doesn't seem to be a mathematically inconsistent statement for all plausible definitions of "God". Maybe it should be "a god" rather than "God", since capital 'G' suggests something multiversal rather than something which exists only in obscure universes.

Many people have a definition of God which is logically inconsistent. I am not making a claim about what you should do then. However, if you have a logically consistent view of god and you only care about universes where God exists, then you should act under the assumption that god exists, and there is no sense in which you are objectively wrong.

This sounds a lot like quantum suicide, except... without the suicide. So those versions of yourself who don't get what they want (which may well be all of them) still end up in a world where they've experienced not getting what they want. What do those future versions of yourself want then?

EDIT: Ok, this would have worked better as a reply to Squark's scenario, but it still applies whenever this philosophy of yours is applied to anything directly (in the practical sense) observable.

I think you are misunderstanding me.

First, let's be clear about what we mean by saying that probabilities are weights on values. Imagine I have an unfair coin which gives heads with probability 90%. I care 9 times as much about the possible futures in which the coin comes up heads as I do about the possible futures in which the coin comes up tails. Notice that this does not mean I want the coin to come up heads. What it means is that I would prefer getting a dollar if the coin comes up heads to getting a dollar if the coin comes up tails.

Also see this comment from Squark in the other thread.

This is an incorrect interpretation of Coscott's philosophy. "Caring really hard about winning" = preferring winning to losing. The correct analogy would be "Caring about [whatever] only in case I win". The losing scenarios are not necessarily assigned low utilities: they are assigned similar utilities. This philosophy is not saying: "I will win because I want to win". It is saying: "If I lose, all the stuff I normally care about becomes unimportant, so when I'm optimizing this stuff I might just as well assume I'm going to win". More precisely, it is saying "I will both lose and win but only the winning universe contains stuff that can be optimized".

It has nothing to do with wanting one world more than another. It is all about thinking that one world is more important than another. If I observe that I am not in an important world, I work to make the most important world that I can change as good as possible.
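A toy Python version of Squark's quoted point, with made-up numbers: when every losing world gets the same utility, the ranking of actions is determined entirely by what happens in the winning worlds.

    p_win = 0.3   # caring weight on the winning worlds (arbitrary)
    u_lose = 5.0  # the same utility for every losing world, whatever you do

    def value(payoff_if_win):
        """Total weighted value of an action with this payoff in the winning worlds."""
        return p_win * payoff_if_win + (1 - p_win) * u_lose

    actions = {"a": 10.0, "b": 7.0}  # made-up payoffs in the winning worlds
    rank_full = sorted(actions, key=lambda a: value(actions[a]), reverse=True)
    rank_win_only = sorted(actions, key=lambda a: actions[a], reverse=True)
    print(rank_full == rank_win_only)  # True: the losing branch never changes the choice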

How do you deal with logical coins? Do you care about the worlds where the 10,000th digit of pi is 3 about as much as the worlds where the 10,000th digit of pi is 7?

Do you care about the worlds where the 10,000th digit of pi is whatever it actually is about 1/10th as much as you care about the worlds where 1 + 1 = 2, even though the two sets of worlds are exactly identical?

Logical uncertainty still gets probabilities just like they used to. Only indexical uncertainty gets pushed into the realm of values.

(At least for now, while I am thinking about the multiverse as tegmark 4. I am very open to the possibility that eventually I will believe even logically inconsistent universes exist, and then they would get the same fate as indexical uncertainty)

In one model I considered, I put tegmark 4 as the one weighted according to my values, and called the set of different counterfactual universes other agents might care about tegmark 5. This was mainly for the purpose of fiction, where it filled a role as a social convention among agents with very different values of this type, but it's an interesting idea of what the concept might look like.

These, by the way, need not be just quantitatively different weights over the same set of universes. For example, we can imagine that it turns out humans and human-derived agents are Solomonoff-induction-like and only value things describable by Turing machines computing them causally, but some other things value only the outputs of continuous functions.