[ Question ]

Do you think you are a Boltzmann brain? If not, why not?

by Jack R · 1 min read · 15th Oct 2021 · 23 comments

For more on Boltzmann brains, see here.


7 Answers

Yes, and also no.


That is, there are Boltzmann Brains that represent my current mental state, and there are also 'normal' universes containing 'normal' brains doing the same thing, and there are probably a bunch of other things too.

All of them are me.

Even if the vast majority of entities with your current mental state are Boltzmann brains, the mental operation that carries out the conclusion "and therefore I am likely a Boltzmann brain" can only be expected to run validly in the entities that are not, in fact, Boltzmann brains. That operation, therefore, would only harm the accuracy of your beliefs.

Jack R (2mo): Why can't you have Boltzmann brains that carry out that operation?

Flaglandbase (2mo): Because most Boltzmann brains are so ephemeral they would instantaneously collapse.

Jack R (2mo): Agreed, I think that's a good reason. It's related to the reason I don't think I am a Boltzmann brain: most Boltzmann brains don't have the memory that they exist due to evolutionary processes, since brains with that memory are an extremely small sector of all possible Boltzmann brains. So it seems like the simplest explanation for my having that memory is that evolution actually happened (since the Boltzmann-brain explanation is kind of wild). Though I haven't thought super carefully about this and would like to hear others' thoughts.

Jiro (2mo): If you're a Boltzmann brain, that chain of reasoning becomes suspect, since you may not have actually made it at all and only believe you've made it.

Do you think that "most" of you are Boltzmann brains?

rosyatrandom (2mo): I'm not sure we're dealing with quantifiable abstractions here.

Jack R (2mo): I also had this thought, though I'm not sure. What kind of abstractions are we talking about?

rosyatrandom (2mo): There's probably only one kind of fundamental abstraction: can A represent B if you squint real hard? Can 'nothing' represent 'something'*? If so, perhaps that's all you need to get 'everything'.

* Like how you can build numbers up from the empty set: https://en.wikipedia.org/wiki/Set-theoretic_definition_of_natural_numbers
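The empty-set construction linked above (the von Neumann encoding, from the linked article) can be sketched in a few lines of Python; the function name here is my own, purely for illustration:

```python
# Von Neumann construction: each natural number is the set of all
# smaller naturals, built up from the empty set.
# 0 = {}, 1 = {0}, 2 = {0, 1}, ...

def von_neumann(n):
    """Return the von Neumann ordinal encoding n, as a frozenset."""
    number = frozenset()            # 0 is the empty set
    for _ in range(n):
        number = number | {number}  # successor(k) = k ∪ {k}
    return number

# The encoding makes "less than" coincide with set membership:
assert von_neumann(0) == frozenset()
assert len(von_neumann(3)) == 3           # 3 has exactly three elements
assert von_neumann(2) in von_neumann(3)   # 2 ∈ 3, i.e. 2 < 3
```

So 'something' (all of arithmetic) really can be bootstrapped from 'nothing' (the empty set).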

If there are no real worlds, but only BBs all along, this argument doesn't work. 

However, it is still not a big problem, as Dust theory still works: for any BB there will be another BB which represents its next mental state, so from the inside it will look like a normal world. Mueller wrote a mathematical formalism for this.

rosyatrandom (2mo): Oh yes, 'real' is a fuzzy concept once you allow Boltzmann/Dust approaches. Things just... are, and can be represented by other things that also just are...

avturchin (2mo): In his article Mueller says that no physics exists at all. Only a mathematical world exists, and dust minds are just random strings of digits.

rosyatrandom (2mo): And strings/digits themselves are just a bunch of bits in fancy clothes. At some point, years ago, I decided that reality was basically just 'nothing', endlessly abstracted, and what can you do? :_D

No, because that's a meaningless claim about external reality. The only meaningful claims in this context are predictions.

"Do you expect to see chaos, or a well formed world like you recall seeing in the past, and why?"

The latter. Ultimately that gets grounded in Occam's razor and Solomonoff induction: the well-formed-world hypothesis is the simpler one.

I basically still endorse this, but have shifted even more in the direction of endorsing the simplicity prior: https://www.lesswrong.com/posts/yzrXFWTAwEWaA7yv5/boltzmann-brains-and-within-model-vs-between-models
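A toy sketch of what "simpler" buys you under such a prior (my own illustration, not the author's, and a simplification: Solomonoff induction proper weighs programs on a universal machine, not raw bit counts): each hypothesis gets prior weight 2^(-description length).

```python
# Toy simplicity prior: weight a hypothesis by 2^(-description length in bits).
def simplicity_prior(description_length_bits):
    return 2.0 ** -description_length_bits

# A lawful world compressible to a short description dominates a chaotic one
# whose shortest description is 50 bits longer by a factor of 2^50 (~10^15).
orderly = simplicity_prior(100)
chaotic = simplicity_prior(150)
ratio = orderly / chaotic  # 2^50
```

Every extra bit of irreducible description halves the prior, which is why the chaotic-experience hypothesis loses so decisively.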

This is a question similar to "am I a butterfly dreaming that I am a man?". Both statements are incompatible with any other empirical or logical belief, or with making any predictions about future experiences. Therefore, the questions and belief-propositions are in some sense meaningless. (I'm curious whether this is a theorem in some formalized belief structure.)

For example, there's an argument about B-brains that goes: simple fluctuations are vastly more likely than complex ones; therefore almost all B-brains that fluctuate into existence will exist for only a brief moment and will then chaotically dissolve in a kind of time-reverse of their fluctuating into existence.
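To put rough numbers on "vastly more likely" (a toy illustration of mine, not part of the original answer): in equilibrium statistical mechanics, the probability of a fluctuation that dips the entropy by ΔS scales as exp(-ΔS/k_B), so even slightly larger fluctuations are exponentially rarer.

```python
import math

def relative_probability(extra_entropy_dip_in_kB):
    """Factor by which a fluctuation needing an extra entropy dip of
    ΔS (in units of k_B) is rarer, assuming P ∝ exp(-ΔS / k_B)."""
    return math.exp(-extra_entropy_dip_in_kB)

# A fluctuation requiring a mere 100 k_B deeper dip is ~e^100 (~10^43)
# times rarer; a full brain's worth of order is astronomically deeper still.
ratio = relative_probability(100)
```

Hence the standard conclusion: among brain-producing fluctuations, the minimal, momentary ones overwhelmingly dominate.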

Should a B-brain expect a chaotic dissolution in its near future? No, because the very concepts of physics and thermodynamics that cause it to make such predictions are themselves the results of random fluctuations. It remembers reading arguments and seeing evidence for Boltzmann's entropy theorem, but those memories are false, themselves the result of random fluctuations.

So a B-brain shouldn't expect anything at all (conditioning on its own subjective probability of being a B-brain). That means a belief in being a B-brain isn't something that can be tied to other beliefs and questioned.

No.

Mathy-answer:

Because "thinking" is an ability that implies the ability to predict future states of the world based on previous states of the world. This is only possible because the past is lower entropy than the future, and both are well below the maximum possible entropy. A Boltzmann brain (on average) arises in a maximally entropic thermal bath, so "thinking" isn't a meaningful activity a Boltzmann brain can engage in.


Non Mathy answer:

Unlike the majority of LW readers, I don't buy into the MWI or mathematical realism, or generally any exotic theory that allows for super-low-probability events. The universe was created by a higher power, has a beginning, middle and end, and the odds of a Boltzmann brain arising in that universe are basically zero.

In addition to what DanArmak said:

Even if you, in the moment, do not have good reason to be confident that you are not a Boltzmann brain, you do have much better reason to believe that any entity you create in the future is not a Boltzmann brain.

If you wish to improve the accuracy of that entity's beliefs, you can do so by instilling that entity with a low prior of being a Boltzmann brain.

Among the entities you will create in the future is your own future self. 

No, I don't. I think the argument for their existence is pretty weak at best, and if they exist and are common, so what? It's the sort of hypothesis for which no possible evidence can be given for or against and no action can be taken in any event.

Even given the (in my opinion pretty unlikely) hypotheses of their existence and ubiquity, what's the point of considering whether you're one of them? Such "observers", stretching the term to cover entities with essentially certain inability to form thoughts, lacking any sort of consistent memories, and hallucinating in their mean lifetime of less than a millisecond, can't do anything about it anyway.

3 comments

Sean Carroll talked about this just recently, in the context of Bayesianism https://www.preposterousuniverse.com/podcast/2021/09/16/ama-september-2021/

It's around 2:11:08, or Ctrl-F in transcript.

If I am a Boltzmann brain, and I guess correctly, what do I gain?

If I am not a Boltzmann brain, and I guess incorrectly, what do I lose?

In general, I think it does matter what you think "actually exists" even outside of what you can observe. For instance, to me it seems like your beliefs about what "actually exists" would affect how you acausally trade, but I haven't thought about this much.