**Followup to**: Anthropic Reasoning in UDT by Wei Dai

Suppose that I flip a logical coin - e.g. look at some binary digit of pi unknown to either of us - and depending on the result, either create a billion of you in green rooms and one of you in a red room if the coin came up 1; or, if the coin came up 0, create one of you in a green room and a billion of you in red rooms. You go to sleep at the start of the experiment, and wake up in a red room.

Do you reason that the coin very probably came up 0? Thinking, perhaps: "If the coin came up 1, there'd be a billion of me in green rooms and only one of me in a red room, and in that case, it'd be very *surprising* that I found myself in a red room."

What is your degree of subjective credence - your posterior probability - that the logical coin came up 1?

There are only two answers I can see that might in principle be coherent, and they are "50%" and "a billion to one against".
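For reference, the second answer falls out of a straightforward Bayesian update that weights each copy of you equally — which is exactly the anthropic assumption in dispute. A minimal sketch:

```python
from fractions import Fraction

N = 10**9                    # copies created in the majority color of room
prior = Fraction(1, 2)       # prior on the logical coin

# Probability that a randomly chosen copy of you wakes in a red room:
p_red_given_1 = Fraction(1, N + 1)   # coin = 1: one red copy out of N+1
p_red_given_0 = Fraction(N, N + 1)   # coin = 0: N red copies out of N+1

posterior_1 = (prior * p_red_given_1) / (
    prior * p_red_given_1 + prior * p_red_given_0
)
print(posterior_1)  # 1/1000000001, i.e. "a billion to one against"
```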

Tomorrow I'll talk about what sort of trouble you run into if you reply "a billion to one".

But for today, suppose you reply "50%". Thinking, perhaps: "I don't understand this whole *consciousness* rigamarole, I wouldn't try to program a computer to update on it, and I'm not going to update on it myself."

In that case, why don't you believe you're a Boltzmann brain?

Back when the laws of thermodynamics were being worked out, there was first asked the question: "Why did the universe seem to start from a condition of low entropy?" Boltzmann suggested that the larger universe *was* in a state of high entropy, but that, given a *long enough* time, regions of low entropy would spontaneously occur - wait long enough, and the egg will unscramble itself - and that our own universe was such a region.

The problem with this explanation is now known as the "Boltzmann brain" problem; namely, while Hubble-region-sized low-entropy fluctuations will *occasionally* occur, it would be far more likely - though still not likely in any absolute sense - for a handful of particles to come together in a configuration performing a computation that lasted just long enough to think a single conscious thought (whatever that means) before dissolving back into chaos. A random reverse-entropy fluctuation is exponentially vastly more likely to take place in a small region than a large one.

So on Boltzmann's attempt to explain the low-entropy initial condition of the universe as a random statistical fluctuation, it's far more likely that we are a little blob of chaos temporarily hallucinating the *rest* of the universe, than that a multi-billion-light-year region spontaneously ordered itself. And most such little blobs of chaos will dissolve in the next moment.

"Well," you say, "that may be an *unpleasant* prediction, but that's no license to *reject* it." But wait, it gets worse: The vast majority of Boltzmann brains have experiences *much less ordered* than what you're seeing right now. Even if a blob of chaos coughs up a visual cortex (or equivalent), that visual cortex is unlikely to see a highly ordered visual field - the vast majority of possible visual fields more closely resemble "static on a television screen" than "words on a computer screen". So on the Boltzmann hypothesis, *highly ordered* experiences like the ones we are having now constitute an exponentially infinitesimal fraction of all experiences.

In contrast, suppose one more simple law of physics not presently understood, which forces the initial condition of the universe to be low-entropy. Then the exponentially vast majority of brains occur as the result of ordered processes in ordered regions, and it's not at all surprising that we find ourselves having ordered experiences.

But wait! This is *just the same sort of logic* (is it?) that one would use to say, "Well, if the logical coin came up 1, then it's very surprising to find myself in a red room, since the vast majority of people-like-me are in green rooms; but if the logical coin came up 0, then most of me are in red rooms, and it's not surprising that I'm in a red room."

If you reject that reasoning, saying, "There's only *one* me, and that person seeing a red room does exist, even if the logical coin came up 1" then you should have no trouble saying, "There's only one me, having a highly ordered experience, and that person exists even if all experiences are generated at random by a Boltzmann-brain process or something similar to it." And furthermore, the Boltzmann-brain process is a much *simpler* process - it could occur with only the barest sort of causal structure, no need to postulate the full complexity of our own hallucinated universe. So if you're not updating on the apparent conditional *rarity* of having a highly ordered experience of gravity, then you should just believe the very simple hypothesis of a high-volume random experience generator, which would necessarily create your current experiences - albeit with extreme relative infrequency, but you don't care about that.

Now, doesn't the Boltzmann-brain hypothesis also predict that reality will dissolve into chaos in the next moment? Well, it predicts that the *vast majority* of blobs who experience this moment, cease to exist after; and that among the few who *don't* dissolve, the vast majority of *those* experience chaotic successors. But there would be an infinitesimal fraction of a fraction of successors, who experience ordered successor-states as well. And you're not alarmed by the rarity of those successors, just as you're not alarmed by the rarity of waking up in a red room if the logical coin came up 1 - right?

So even though your friend is standing right next to you, saying, "I predict the sky will *not* turn into green pumpkins and explode - oh, look, I was successful again!", you are not disturbed by their unbroken string of successes. You just keep on saying, "Well, it was necessarily true that *someone* would have an ordered successor experience, on the Boltzmann-brain hypothesis, and that just *happens* to be us, but in the *next* instant I will sprout wings and fly away."

Now this is not quite a *logical contradiction*. But the total rejection of all science, induction, and inference in favor of an unrelinquishable faith that the next moment will dissolve into pure chaos, is sufficiently unpalatable that even I decline to bite that bullet.

And so I still can't seem to dispense with anthropic reasoning - I can't seem to dispense with trying to think about *how many* of me or *how much* of me there are, which in turn requires that I think about what sort of process constitutes a *me*. Even though I confess myself to be sorely confused, about what could possibly make a certain computation "real" or "not real", or how some universes and experiences could be quantitatively realer than others (possess more reality-fluid, as 'twere), and I still don't know what exactly makes a causal process count as something I might have been for purposes of being surprised to find myself as me, or for that matter, what exactly is a causal process.

Indeed this is all greatly and terribly confusing unto me, and I would be less confused if I could go through life while only answering questions like "Given the Peano axioms, what is SS0 + SS0?"

But then I have no defense against the one who says to me, "Why don't you think you're a Boltzmann brain? Why don't you think you're the result of an all-possible-experiences generator? Why don't you think that gravity is a matter of branching worlds in which all objects accelerate in all directions and in some worlds all the observed objects happen to be accelerating downward? It *explains* all your observations, in the sense of logically necessitating them."

I want to reply, "But then *most people* don't have experiences *this ordered*, so *finding myself* with an ordered experience is, on your hypothesis, very *surprising*. Even if there are *some versions* of me that *exist* in regions or universes where they arose by chaotic chance, I *anticipate*, for purposes of predicting *my future experiences*, that *most of my existence* is *encoded* in regions and universes where I am the product of ordered processes."

And I currently know of no way to reply thusly, that does not make use of poorly defined concepts like "number of real processes" or "amount of real processes"; and "people", and "me", and "anticipate" and "future experience".

Of course confusion exists in the mind, not in reality, and it would not be the least bit surprising if a *resolution* of this problem were to dispense with such notions as "real" and "people" and "my future". But I do not presently have that resolution.

(Tomorrow I will argue that anthropic updates must be illegal and that the correct answer to the original problem must be "50%".)

Necromancy, but: easy. Boltzmann brains obey little or no causality, and thus cannot possibly benefit from rationality. As such, rationality is wasted on them. Optimize for the signal, not for the noise.

If the question was, "What odds should you bet at?", it could be answered using your values. Suppose each copy of you has $1000, and copies of you in a red room are offered a bet that costs $1000 and pays $1001 if the Nth bit of pi is 0. Which do you prefer:

To refuse the bet?

To take the bet?

But the question is "What is your posterior probability?" This is not a decision problem, so I don't know that it has an answer.
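One way to evaluate the two options is to add up dollars across all copies — a total-utility accounting, which is my assumption here, not something the comment commits to:

```python
from fractions import Fraction

N = 10**9
p = Fraction(1, 2)  # prior on the Nth bit of pi

# Every red-room copy takes the bet: pay $1000, receive $1001 if the bit is 0.
# Bit = 0: N red copies each net +$1.  Bit = 1: one red copy nets -$1000.
ev_take = p * N * 1 + p * (-1000)
ev_refuse = 0
print(ev_take)  # expected total of $499999500, so the copies should bet
```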

I think it may be natural to ask instead: "Given that y... (read more)

The skeleton of the argument is:

I think the argument can be improved.

According to the minimum description length notion of science, we have a model and a sequence of observations. A &... (read more)

I would have answered 1B:1 (looking forward to the second post to be proved wrong), however I think a rational agent should never believe in the Boltzmann brain scenario regardless.

Not because it is not a reasonable hypothesis, but since it negates the agent's capabilities of estimating prior probabilities (it cannot trust even a predetermined portion of its memories) plus it also makes optimizing outcomes a futile undertaking.

Therefore, I'd generally say that an agent has to assume an objective, causal reality as a precondition of using decision theory at all.

This sounds backwards (sideways?); the reason to (strongly) believe one is a Boltzmann brain is that there are very many of them in some weighting compared to the "normal" you, which corresponds to accepting probability of 1 to the billion in this though... (read more)

It seems to me that "I'm a Boltzmann brain" is exactly the same sort of useless hypothesis as "Everything I think I experience is a hallucination manufactured by an omnipotent evil genie". They're both non-falsifiable by definition, unsupported by any evidence, and have no effect on one's decisions in any event. So I say: show me some evidence, and I'll worry about it. Otherwise it isn't even worth thinking about.

*Rosencrantz & Guildenstern Are Dead*, Tom Stoppard

The Boltzmann brain argument was the reason why I had not adopted something along the lines of UDT, despite having considered it and discussed it a bit with others, before the recent LW discussion. Instead, I had tagged it as 'needs more analysis later.' After the fact, that looks like flinching to me.

Here, let me re-respond to this post.

"A high-volume random experience generator" is not a hypothesis. It's a thing. "The universe is a high-volume random experience generator" is better, but stil... (read more)

I think we need to reduce "surprise" and "explanation" first. I suggest they have to do with bounded rationality and logical uncertainty. These concepts don't seem to exist in decision theories with logical omniscience.

Surprise seems to be the output of some heuristic that tells you when you may have made a cognitive error or taken a computational shortcut that turns out to be wrong (i.e., you find yourself in a situation that you had previously computed to have low probability) and should go back and recheck your logic. After you've f... (read more)

Suppose Omega plays the following game (the "Probability Game") with me:

You will tell me a number X representing the probability of A. If A turns out to be true, I will increase your utility by ln(X); otherwise, I will increase your utility by ln(1-X).

It's well-known that the way one maximizes one's expected utility here is by reporting one's actual subjective probability of A. Presumably, decision mechanisms should be consistent under reflection. Even if not, if I somehow know that Omega's going to split me into 1,000,000,001 copies and do this, I want t... (read more)
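The quoted fact about the log score can be checked numerically; a small sketch, where the grid and the example value of the true probability are illustrative assumptions of mine:

```python
import math

def expected_score(x, p):
    # Expected payoff when the true probability of A is p and you report x
    return p * math.log(x) + (1 - p) * math.log(1 - x)

p = 0.3
grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=lambda x: expected_score(x, p))
print(best)  # 0.3: the expected log score peaks at the true probability
```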

(Missing word alert in paragraph 11: "Even [if] a blob of chaos coughs up a visual cortex (or equivalent)...".)

Is your intent here to argue both sides of the issue to help, well, lay out the issues, or is it your actual current position that anthropic updates really really are verboten and that 50% is the really really correct answer?

In the criticism of Boltzmann, entropy sounds like a radio dial that someone is tweaking rather than a property of some space. I may be misunderstanding something.

Basically, if some tiny part of some enormous universe happened to condense into a very low-entropy state, that does not mean that it could spontaneously jump to a high-entropy state. It would, with extremely high probability, slowly return to a high-entropy state. It thus seems like we could see what we actually see and not be at risk of spontaneously turning into static. Our current observable ... (read more)

It's not entirely clear what it means to create a number of "me": my consciousness is only one and cannot be more than one, and I can only feel sensations from one single body. If the idea is just to generate a certain number of physical copies of my body and embed my present consciousness into one of them at random, then the problem is at least clear and determined from a mathematical point of view: it seems to be a simple problem about conditional probability. You are asking what is the probability that an event happened in the past, given some a priori possible consequence; it can be easily solved by Bayes' formula, and the probability is about one over 1 billion.

I think a portion of the confusion comes from implicit assumptions about what constitutes "you", and an implicit semantics for how to manipulate the concept. Suppose that there are N (N large) instances of "you" processes that run on Boltzmann Brains, and M (M << N) that run in sensible copies of the world around me. Which one of them is "you"? If "you" is a particular one of the N that run on Boltzmann Brains, then which one is "you, 10 seco... (read more)

ISTM the problem of Boltzmann brains is irrelevant to the 50%-ers. Presumably, the 50%-ers are rational--e.g., willing to update on statistical studies significant at p=0.05. So they don't object to the statistics of the situation; they're objecting to the concept of "creating a billion of you", such that you don't know which one you are. If you had offered to roll a billion-sided die to determine their fate (check your local tabletop-gaming store), there would be no disagreement.

Of course, this problem of identity and continuity has been hash... (read more)

"Why did the universe seem to start from a condition of low entropy?"

I'm confused here. If we don't go with a big universe and instead just say that our observable universe is the whole thing, then tracing back time we find that it began with a very small volume. While it's true that such a system would necessarily have low entropy, that's largely because small volume = not many different places to put things.

Alternative hypothesis: The universe began in a state of maximal entropy. This maximum value was "low" compared to present day... (read more)

BBs can't make correct judgements about their reality; their judgements are random. So 50 percent of BBs think that they are in a non-random reality even if they are in a random one. So your experience doesn't provide any information about whether you are a BB or not. Only the prior matters, and the prior is high.

If I wake up in a red room after the coin toss, I'm going to assume that there are a billion of us in red rooms, and one in a green room, and vice versa. That way a billion of me are assuming the truth, and one is not. So chances are (a billion out of a billion and one) that this iteration of me is assuming the truth.

We'll each have to accept, of course, the possibility of being wrong, but hey, it's still the best option for me altogether.

Trouble? We'll take it on together, because every "I" is in this team. [applause]

Eliezer_Yudkowsky wrote: "I want to reply, "But then most people don't have experiences this ordered, so finding myself with an ordered experience is, on your hypothesis, very surprising.""

One will feel surprised by winning a million dollars in the lottery too, but that doesn't mean that it would be rational to assume that, just because one won a million dollars in the lottery, most people win a million dollars in the lottery.

Maybe most of us exist only for a fraction of a second, but in that case, what is there to lose by (probably falsely, but m... (read more)

I am a Boltzmann brain atheist. ;)

Boltzmann brains are a problem even if you're a 50-percenter. Many fixed models of physics produce lots of BBs. Maybe you can solve this with a complexity prior, under which BBs are less real because they're hard to locate. But having done this, it's not clear to me how this interacts with Sleeping Beauty. It may well be that such a prior also favors worlds with fewer BBs, that is, worlds with fewer observers, but more properly weighted observers.

(ETA: I read the post backwards, so that was a non sequitur, but I do think the application of anthropics to BB is not at all clear. I agree with Eliezer that it looks like it helps, but it might well make it worse.)

Here's a logic puzzle that may have some vague relevance to the topic.

You and two teammates are all going to be taken into separate rooms and have flags put on your heads. Each flag has a 50% chance of being black or being white. None of you can see what color your own flag is, but you will be told what color flags your two teammates are wearing. Before each of you leave your respective rooms, you may make a guess as to what color flag you yourself are wearing. If at least one of you guesses correctly and nobody guesses incorrectly, you all win. If anyone ... (read more)
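The comment doesn't give the solution. One well-known strategy for this puzzle — my addition, assuming passing is allowed, as the phrasing "may make a guess" suggests — is: guess the opposite color if your two teammates' flags match, otherwise pass. An exhaustive check over all eight flag assignments:

```python
from itertools import product

def team_wins(flags):
    guesses = []
    for i in range(3):
        a, b = (flags[j] for j in range(3) if j != i)
        guesses.append(1 - a if a == b else None)  # guess the opposite, else pass
    made = [(g, flags[i]) for i, g in enumerate(guesses) if g is not None]
    # Win iff at least one guess was made and every guess made is correct
    return bool(made) and all(g == f for g, f in made)

wins = sum(team_wins(f) for f in product((0, 1), repeat=3))
print(f"{wins}/8")  # 6/8: the team wins 75% of the time
```

The strategy loses exactly when all three flags are the same color (2 of 8 cases), because then everyone guesses and everyone is wrong.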

It's a tricky category of question alright - you can make it even trickier by varying the procedure by which the copies are created.

The best answer I've come up with so far is to just maximize total utility. Thus, I choose the billion to one side because it maximizes the number of copies of me that hold true beliefs. I will be interested to see whether my procedure withstands your argument in the other direction.

(And of course there is the other complication that strictly speaking the probability of a logical coin is either zero or one, we just don't know ... (read more)

Well, I don't think the analogy holds up all that well. In the coin flip story we "know" that there was a time before the universe with two equally likely rules for the universe. In the world as it is, AFAIK we really don't have a complete, internally consistent set of physical laws fully capable of explaining the universe as we experience it, let alone a complete set of all of them.

The idea that we live in some sort of low entropy bubble which spontaneously formed in a high entropy greater universe seems pretty implausible for the reasons you describe. But I don't think we can come to a conclusion from this significantly stronger than "there's a lot we haven't figured out yet".

This one always reminds me of flies repeatedly slamming their heads against a closed window rather than face the fact that there is something fundamentally wrong with some of our unproven assumptions about thermodynamics and the big bang.

I'd like to be the first to point out that this post doubles as a very long (and very undeserved) response to this post.

Non-scientific hypothesis:

The universe's initial state was a singularity as postulated by the big bang theory, a state of minimal entropy. As per thermodynamics, entropy has been, is, and will be increasing steadily from that point until precisely 10^40 years from the Big Bang, at which point the universe will cease to exist with no warning whatsoever.

Though this hypothesis is very arbitrary (the figure "10^40 years" has roughly 300 bits of entropy), I figure it explains our observations at least 300 bits better than the "vanilla heat death... (read more)