**EDIT**: Donald Hobson has pointed out a mistake in the reasoning in the section on nucleation. If we know that an area of space-time has a disproportionately high number of observer moments, then it is very likely that these are from long-lived Boltzmann simulations. However, this does not imply, as I thought, that most observer moments are in long-lived Boltzmann simulations.

Most people on LessWrong are familiar with the concept of Boltzmann brains - conscious brains that are randomly created by quantum or thermodynamic interactions, and then swiftly vanish.

There seem to be two types of Boltzmann brains: quantum fluctuations, and nucleated brains (actual brains produced in the vacuum of space by the expanding universe).

The quantum fluctuation brains cannot be observed (the fluctuation dies down without producing any observable effect: there's no decoherence or permanence). If I'm reading these papers right, the probability of producing any given object of duration $T$ and mass $M$ is approximately

$$\exp\left[-\frac{TMc^2}{\hbar}\right].$$

We'll be taking $M \approx 1\,\mathrm{kg}$ for a human brain, and $T \approx 1\,\mathrm{s}$ for it having a single coherent thought.

A few notes about this number. First of all, it is vanishingly small: an exponential of a negative exponential. It's so small, in fact, that we don't really care much over what volume of space-time we're calculating this probability. Over a Planck-length four-volume, over a metre to the fourth power, or over a Hubble volume for 15 billion years: the probabilities of an object being produced in any of these spaces are all approximately of the same magnitude, roughly $\exp\left[-10^{51}\right]$ (more properly, the probabilities vary tremendously, but any tiny uncertainty in the $TMc^2/\hbar$ term dwarfs all these changes).

Similarly, we don't need to consider the entropy of producing a specific brain (or any other structure): a small change in mass overwhelms the probability of a specific mass setup being produced. Here's a rough argument for that. The Bekenstein bound puts a limit on the number of bits of information in a volume of space of given size and given mass (or energy). For a mass $M$ and radius $R$, it is approximately

$$I \leq \frac{2\pi R M c}{\hbar \ln 2}.$$

Putting $M = 1\,\mathrm{kg}$ and $R = 0.1\,\mathrm{m}$, we get that the number of possible different states in a brain-like object of brain-like mass and size is less than

$$2^{2.6\times 10^{42}},$$

which is much, much, much, ..., much, much less than the inverse of the production probability $\exp\left[-TMc^2/\hbar\right]$.
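A quick back-of-envelope check of these two exponents in Python (the constants, and the 0.1 m radius, are my illustrative assumptions, matching the rough values above):

```python
import math

# Rough constants and the post's assumed parameters (illustrative only)
HBAR = 1.0546e-34        # reduced Planck constant, J*s
C = 2.998e8              # speed of light, m/s
M, R, T = 1.0, 0.1, 1.0  # brain-like mass (kg), radius (m), duration (s)

# Bekenstein bound on information content: I <= 2*pi*R*M*c / (hbar * ln 2)
bekenstein_bits = 2 * math.pi * R * M * C / (HBAR * math.log(2))

# Exponent in the fluctuation probability exp(-T*M*c^2/hbar)
fluctuation_exponent = T * M * C ** 2 / HBAR

print(f"state-count exponent ~ 10^{math.log10(bekenstein_bits):.0f}")
print(f"probability exponent ~ 10^{math.log10(fluctuation_exponent):.0f}")
```

The state count comes out around $2^{10^{42}}$ while the probability exponent is around $10^{51}$, so the number of distinct brain-states is indeed negligible next to the production probability.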

## Quantum fluctuations and causality

What is interesting is that the probability expression is exponentially linear in $T$:

$$\exp\left[-\frac{(T_1+T_2)Mc^2}{\hbar}\right] = \exp\left[-\frac{T_1 Mc^2}{\hbar}\right]\exp\left[-\frac{T_2 Mc^2}{\hbar}\right].$$

Therefore it seems that the probability of producing one brain of duration $T_1$, and another independent brain of duration $T_2$, is the same as producing one brain of duration $T_1+T_2$. Thus it seems that there is no causality in quantum-fluctuating Boltzmann brains: any brain produced of long duration is merely a sequence of smaller brain-moments that happen to be coincidentally following each other (though I may have misunderstood the papers here).
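This additivity is just the exponent being linear in $T$; a one-line sanity check in log-probabilities (the constant `k` stands in for $Mc^2/\hbar$, its exact value is irrelevant):

```python
import math

k = 8.5e50        # stand-in for M*c^2/hbar (1/s); exact value irrelevant here
T1, T2 = 0.4, 0.6

# Work with log-probabilities, since the probabilities themselves underflow.
log_p_two_short_brains = (-T1 * k) + (-T2 * k)  # two independent fluctuations
log_p_one_long_brain = -(T1 + T2) * k           # one fluctuation lasting T1+T2

assert math.isclose(log_p_two_short_brains, log_p_one_long_brain)
```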

## Nucleation and Boltzmann simulations

If we understand dark energy correctly, it will transform our universe into a de Sitter universe. In such a universe, the continuing expansion of the universe acts like the event horizon of a black hole, and sometimes spontaneous objects will be created, similarly to Hawking radiation. Thus a de Sitter space can nucleate: spontaneously create objects. The probability of a given object of mass $M$ (in kilograms) being produced is given as

$$\exp\left[-M\times 10^{69}\right].$$

This number is much, much, much, much, ..., much, much, much, much smaller than the quantum fluctuation probability. But notice something interesting about it: it has no time component. Indeed, the objects produced by nucleation are actual objects: they endure. Think of a brain in a sealed jar, floating through space.

Now, a normal brain in empty space (and almost absolute-zero temperatures) will decay at once; let's be generous, and give it a second of survival in something like a functioning state.

**EDIT**: there is a mistake in the following, see here.

Creating $N$ independent one-second brains is an event of probability:

$$\exp\left[-NM\times 10^{69}\right].$$

But creating a brain that lasts for $N$ seconds will be an event of probability

$$\exp\left[-M_N\times 10^{69}\right],$$

where $M_N$ is the minimum mass required to keep the brain running for $N$ seconds.

It's clear that $M_N$ can be way below $NM$. For example, the longest moonwalk was 7 h 36 min 56 s (Apollo 17, second moonwalk), or 27,416 seconds. To do this, the astronauts used spacesuits of mass around 82 kg. If you estimate that their own body mass was roughly 100 kg, we get $M_N \approx 182\,\mathrm{kg}$, far below $NM = 27{,}416\,\mathrm{kg}$.
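In log-probability terms, using the $\exp\left[-M\times 10^{69}\right]$ suppression from above, the comparison looks like this sketch:

```python
# Nucleation log-probability is -(mass in kg) * 1e69, per the post's formula.
SUPPRESSION = 1e69

n = 27_416              # seconds of moonwalk
brain_mass = 1.0        # kg: one one-second Boltzmann brain
astronaut_mass = 182.0  # kg: astronaut plus suit, lasting all n seconds

log_p_independent_brains = -(n * brain_mass) * SUPPRESSION
log_p_astronaut = -astronaut_mass * SUPPRESSION

# The single astronaut is exp((27416 - 182) * 1e69) times more likely
# than 27,416 independent one-second brains.
assert log_p_astronaut > log_p_independent_brains
```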

This means that for nucleated Boltzmann brains, unlike for quantum fluctuations, most observer moments will be parts of long-lived individuals, with a life experience that respects causality.

And we can get much, much more efficient than that. Since mass is the real limit, there's no problem in using anti-matter as a source of energy. The human brain runs at about 20 watts; one half-gram of matter with one half-gram of anti-matter produces enough energy to run this for about $4.5\times 10^{12}$ seconds, or 140 thousand years. Now, granted, you'll need a larger infrastructure to extract and make use of this energy, and to shield and repair the brain; however, this larger infrastructure doesn't need to have a mass anywhere near $7\times 10^{22}$ kilos (which is the order of magnitude of the mass of the moon itself).
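The 140-thousand-year figure is straightforward arithmetic (a sketch; the 20 W draw and total annihilation are the assumptions):

```python
C = 2.998e8              # speed of light, m/s
mass = 1e-3              # kg: 0.5 g matter + 0.5 g antimatter
power = 20.0             # W: rough power draw of a human brain
SECONDS_PER_YEAR = 3.156e7

energy = mass * C ** 2           # ~9e13 J released by annihilation
runtime_s = energy / power       # ~4.5e12 s of brain operation
runtime_years = runtime_s / SECONDS_PER_YEAR

print(f"{runtime_s:.1e} s, about {runtime_years:,.0f} years")
```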

And that's all neglecting improvements to the energy efficiency and durability of the brain. It seems that the most efficient and durable version of a brain - in terms of mass, which is the only thing that matters here - is to run the brain on a small but resilient computer, with as much power as we'd want. And, if we get to re-use code, then we can run many brains on a slightly larger computer, with the mass growth being less than the growth in the number of brains.

Thus, most nucleated Boltzmann brain observer-moments will be inside a Boltzmann simulation: a spontaneous (and causal) computer simulation created in the deep darkness of space.

One serious problem I see:

This whole setup presupposes something like a Standard Model spacetime as the 'seed substrate' upon which Boltzmann brains or Boltzmann simulations are generated.

It completely neglects the possibility that our entire universe, and all its rules, are themselves the result of a Boltzmann simulation spawned in some simpler and more inherently fecund chaos.

Ah yes, but if you start assuming that the standard model is wrong and start reasoning about "what kind of reality might be simulating us", the whole issue gets much, much more complicated. And your priors tend to do all the work in that case.

While insightful, I think the numbers are not to be taken too seriously - surely the uncertainty about the model itself (for example, the uncorrelated nature of quantum fluctuations all the way up to mass scales of 1 kg) is much larger than the in-model probabilities given here?

That is indeed the case.

Actually, you can scale up to even bigger brains. With serious nanotech and a large scale, the limiting factor becomes energy again. To maximise thought per unit mass, you need energy stores much larger than the brainware: near-100% mass-to-energy conversion and an efficient processor.

The best solution is a black hole, with size on the order of the pseudo-radius of the de Sitter spacetime. The radiating temperature of the black hole is nanokelvin, only a few times hotter than the average de Sitter radiation. Thus the black hole is of galactic mass. All of its energy is slowly radiated away, and used to run an ultra-cold computer at the Landauer limit. The result looks like what a far-future civilization might build at the end of time.

Actually there are some subtle issues here that I didn't spot before. If you take a small (not exponentially vast) region of space-time, and condition on that region containing at least 100 observer-seconds, it is far more likely that this is from a single Boltzmann astronaut than from 100 separate Boltzmann brains. However, if you select a region of space-time with hyper-volume around $\exp\left[10^{69}\right]$, then it is likely to contain a Boltzmann brain of mass 1 kg, which we suppose can think for 1 second. The chance of the same volume containing a 2 kg Boltzmann brain is

$$\exp\left[-10^{69}\right].$$

So unless that extra 1 kg of life support can let the Boltzmann brain exist for exp(10^69) seconds, most observer moments should not have life support.

Imagine a lottery that's played by 1,000,000,000 people. There is 1 prize of £1,000,000 and 1,000 prizes of £100,000 each. If I say that my friends have won at least £1,000,000 between them (and that I have a number of friends ≪ 100,000), then it's likely that one friend hit the jackpot. But if I pick a random £1 handed out by this lottery, and look at where it goes, it probably goes to a runner-up.

This is directly analogous, except with smaller numbers, and £££'s instead of subjective experience. The one big win is the Boltzmann astronaut, the smaller prizes are Boltzmann brains.
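The analogy can be made concrete: conditioning on a big total win favours the jackpot, but a randomly sampled prize pound mostly sits in runner-up prizes. A small sketch using the comment's numbers:

```python
jackpot_total = 1 * 1_000_000      # one £1,000,000 prize
runner_up_total = 1_000 * 100_000  # a thousand £100,000 prizes

# Pick a random £1 of prize money: where did it go?
p_runner_up = runner_up_total / (jackpot_total + runner_up_total)

print(f"P(random prize pound went to a runner-up) = {p_runner_up:.3f}")  # ~0.99
```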

The reason for this behaviour is that doubling the size of the spacetime considered makes a Boltzmann astronaut twice as likely, but makes a swarm of 100 Boltzmann brains $2^{100}$ times as likely. For any small region of spacetime, *nothing happens* is the most likely option. A *Boltzmann brain* is far less likely, and a *Boltzmann astronaut* far less likely than that. The ratio of thinking times is small enough to be ignored. If we think that we are Boltzmann brains, then we should expect to freeze over in the next instant. If we thought that we were Boltzmann brains, and that there were at least a billion observer moments nearby, then we should expect to be a Boltzmann astronaut.

Let $V$ be the hyper-volume where the probability of an $M$ kg BB is exactly $\exp\left[-M\times 10^{69}\right]$. Let's imagine a sequence of $V$'s stretching forward in time. About $\exp\left[-10^{69}\right]$ of them will contain one BB of mass 1 kg, and about $\exp\left[-2\times 10^{69}\right]$ will contain a BB of mass 2 kg, which is also the proportion that contains two brains of mass 1 kg.
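The bookkeeping here is just that log-probabilities of independent nucleations add, so one 2 kg brain and a pair of 1 kg brains carry the same suppression (to leading order):

```python
log_p_1kg = -1e69  # log-probability of one 1 kg BB in volume V
log_p_2kg = -2e69  # log-probability of one 2 kg BB in volume V

# Two independent 1 kg brains in the same V: log-probabilities add.
log_p_two_1kg = 2 * log_p_1kg

assert log_p_2kg == log_p_two_1kg
```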

So I think you are correct; most observer-moments will still be in short-lived BBs. But if you are in an area with disproportionately many observer moments, then they are more likely to be in long-lived BBs. I will adjust the post to reflect this.

However, Boltzmann simulations may be much more efficient than biological brains. 1 g of advanced nanotech supercomputer could simulate trillions of observer-moments per second, and weigh 1,000 times less than a "real" brain. This means that we are more likely to be inside a BB-simulation than in a real BB. Also, the most coarse and primitive simulations, with many errors, should dominate.

That won't fix the issue. Just redo the analysis at whatever size is able to merely do a few seconds of brain simulation.

It probably depends on how the mass and time duration of the fluctuation are traded off against each other. For quantum fluctuations which return back to nothingness, this relation is defined by the uncertainty principle, and for any fluctuation with significant mass, its time of existence would be a minuscule fraction of a second, which would be enough only for one static observer-moment.

But if we can imagine a computer that is very efficient at calculation, which could perform many calculations in the time allowed for its existence by the uncertainty principle, it should dominate by number of observer-moments.

You are making some unjustified assumptions about the way computations can be embedded in a physical process. In particular we shouldn't presume that the only way to instantiate a computation giving rise to an experience is via the forward evolution of time. See comment below.

Hum, why use a black hole when you could have matter and anti-matter react directly when needed?

The upper limit for the energy of a randomly appearing BB-simulation is 1 solar mass, because a whole new Sun and a whole new planet could appear as a physical object, and in that case it will not be a simulation: it will be normal people living on a normal planet.

Moreover, it could be not a fluctuation creating a planet, but a fluctuation creating a gas cloud, which later naturally evolves into a star and planets. Not every gas cloud will create a habitable planet, but given the astronomically small probabilities we are speaking about, the change will be insignificant.

We could even suggest that what we observe as the Big Bang could have been such a cloud.

I also had a similar idea, which I called the "Boltzmann typewriter": that the random appearance of an AI (or some other generator, like a planet) which creates many observer-moments will result in the domination of simulated observer-moments.

As a result, we could be in a simulation without a simulator, with a rather random set of rules and end goals. The observational consequence will be a very diluted level of strangeness.

Another thought: smaller-size observer-moments would overwhelmingly dominate larger ones in the case of normal BBs. An observer-moment which is 1 bit larger will be 2 times less probable. My current observer-moment is larger than the minimum needed to write this comment, as I see a lot of visual background, so I am unlikely to be a pure Boltzmann brain. But this argument does not work for a simulated Boltzmann brain.

A few typos: It's Bekenstein; exp[M×10^−69] should be exp[-M×10^69]

Thanks, corrected.

I think we need to be careful here about what constitutes a computation which might give rise to an experience. For instance, suppose a chunk of brain pops into existence but with all momentum vectors flipped (for non-nuclear processes we can assume temporal symmetry), so the brain is running in reverse.

It seems right to say that this could just as easily give rise to the experience of being a thinking human brain. After all, we think the arrow of time is determined by the direction of increasing entropy, not by some weird fact that only computations which proceed in one direction give rise to experiences.

Ok, so far no biggie, but why insist computations be embedded temporally? One can reformulate the laws of physics to constrain events to the left given the complete (future and past) set of events to the right, so why can't the computation be embedded from left to right (i.e. the arrow of time points right), or in some completely other way we haven't thought of?

More generally, once we accept the possibility that the laws of physics can give rise to computations that don't run in what we would view as a causal fashion, then it's no longer clear that the only kinds of things which count as computations are those the above analysis considered.

I tend to see this as an issue of decision theory, not probability theory. So if causality doesn't work in a way we can understand, the situation is irrelevant (note that some backwards-running brains will still follow an understandable causality from within themselves, so some backwards-running brains are decision-theory relevant).

Some points I want to add to the discussion:

On the second point: I see Boltzmann brains as issues of decision theory, not probability theory, so I'm not worried about probability issues with them.

https://www.lesswrong.com/posts/ZvmicfmGg9LWBvy2D/boltzmann-brain-decision-theory

Also, it is assumed here that there are only two types of BBs and that they have a similar measure of existence.

However, there is a very large class of thermodynamic BBs, which was described in Egan's dust theory: that is, observer-moments which appear as the result of the causal interaction of atoms in a thermodynamic gas, if such causal interaction has the same causal structure as a moment of experience. They may numerically dominate, but additional calculations are needed and seem possible. There could be other types of BBs, like purely mathematical ones or products of quantum mind generators, which I described in the post about resurrection of the dead.

Also, if we assume, for example, that the measure of existence is proportional to the energy used for calculations, then de Sitter Boltzmann brains will have a higher measure, as they have non-zero energy, while quantum fluctuation minds may have a smaller calculation energy, as their externally measurable energy is zero and the time of calculation is very short.

>They may numerically dominate, but additional calculations are needed and seem possible.

Over the far future of the universe (potentially infinite), we inhabit an essentially empty de Sitter space without gas for thermodynamical BBs (except for gas also created by nucleation).

But what about "dust minds" inside objects existing now, like my table? Given 10^80 particles in the universe, its existence to date of around 10^17 seconds, and particle collisions every few nanoseconds, there should be a very large number of randomly appearing causal structures which may be similar to the experiences of observers.

I have opinions on this kind of reasoning that I will publish later this month (hopefully), around issues of syntax and semantics.

Did you publish it? link?

Mostly the symbol grounding posts: https://www.lesswrong.com/posts/EEPdbtvW8ei9Yi2e8/bridging-syntax-and-semantics-empirically https://www.lesswrong.com/posts/ix3KdfJxjo9GQFkCo/web-of-connotations-bleggs-rubes-thermostats-and-beliefs https://www.lesswrong.com/posts/XApNuXPckPxwp5ZcW/bridging-syntax-and-semantics-with-quine-s-gavagai

Thanks, I have seen them, but I have yet to make the connection between that topic and Boltzmann brains.

Basically that the "dust minds" are all crazy, because their internal beliefs correspond to nothing in reality, and there is no causality for them, except by sheer coincidence.

See also this old post: https://www.lesswrong.com/posts/295KiqZKAb55YLBzF/hedonium-s-semantic-problem

My main true reason for rejecting BBs of most types is this causality breakdown: there's no point computing the probability of being a BB, because your decision is irrelevant in those cases. In longer-lived Boltzmann Simulations, however, causality matters, so you should include them.

There is a possible type of causal BB: a process which has a causal skeleton similar to the causal structure of an observer-moment (which itself has, to a first approximation, the causal structure of a convolutional neural net). In that case, there is causality inside just one OM.