I have a question about Boltzmann brains that I'd like to hear your opinions on - mostly because I don't know the answer myself.

First of all: a Boltzmann brain is the idea that, in a sufficiently large and long-lived universe, a brain just like mine or yours will occasionally pop into existence by sheer fluke. This happens very, very infrequently. Such a brain will have an experience or two before ceasing to exist, since it finds itself surrounded by cold vacuum - not a good place for a disembodied brain to be.
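For a rough sense of "very, very infrequently" (a standard statistical-mechanics estimate, not something from the post itself): the probability of a thermal fluctuation that locally lowers entropy by ΔS scales as

$$ P \sim e^{-\Delta S / k_B}, $$

and assembling a brain's worth of ordered matter out of near-vacuum costs an enormous ΔS, so the exponent is astronomically negative.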

Such a brain would have a very short life. By a greater fluke, some might last longer, but the balance of probabilities is that most Boltzmann brains that think they have had a long life merely carry false memories of that life, planted by the same fluke that popped them into existence. And in their few seconds of existence, they never get to realise that they didn't actually live that life, and that their memories make no sense.

Well, Boltzmann brains don't pop into existence by fluke very often - in the whole life of the observable universe so far, it's overwhelmingly likely that it has never happened.

What might be more likely to happen? Well, you could have half a Boltzmann brain instead, with the severed nerves of that half-brain stimulated - by sheer fluke - exactly as if the other half were there during the few seconds of the half-brain's life. This is still extremely unlikely, but tremendously more likely than having a whole Boltzmann brain appear. And the half-brain still has the same experience as before.

There is of course nothing to stop us from continuing this. Suppose we have a one-quarter brain? Much more probable. One millionth? Even more probable. Maybe even single elements of a nerve cell? More probable still. The smaller the piece is, the less of a fluke you need for it to come into existence, and the less of a fluke you need to continue to supply all the same inputs that it would have had in the event that the whole brain appeared.
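As a toy model of why each subdivision helps (my illustration, not part of the original argument): suppose assembling each of the brain's N components costs an independent small probability factor ε. Then

$$ P(\text{whole brain}) \sim \varepsilon^{N}, \qquad P(\text{half brain, inputs faked}) \sim \varepsilon^{N/2} \cdot P(\text{faked inputs}), $$

so as long as faking the inputs is cheaper than assembling the missing N/2 components, the half-brain wins by an exponentially large factor - and each further subdivision wins again.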

So we keep dividing down until we hit the opposite problem. As we divide and divide the Boltzmann brain, all the while ensuring that the part we have left has the same experience as before, we eventually get down to really simple processes that we can easily simulate, and which happen in the real universe all the time - and to inputs which also happen all the time. This type of 'Boltzmann brain' is extremely likely to happen.

I can sort of answer part of it. Let's forget about brains for a moment and suppose it's a PC that just bubbled out of the vacuum, along with something that can power it for a bit. We can do the same thing - dividing it down into smaller parts until we get just a transistor, and below that maybe just a minimal switch made out of some atomic collisions. The complete PC that bubbled up can run my web browser or whatever. The subdivided parts of it continue to act in the same way as before, because by fluke we keep providing their inputs. Right at the bottom of the stack we have a simple switch, or perhaps a memory cell that can be either 1 or 0 - it's still doing the same things that allowed the complete PC to run a complex program, but the complexity of that higher level is no longer there.
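To make the PC version concrete, here's a minimal sketch (my own toy example; the circuit and function names are invented for illustration): a single NAND gate, first embedded in a circuit, then isolated but fed a recording of the inputs it would have received. Locally it does exactly the same thing either way.

```python
# Toy illustration (hypothetical example): a gate can't tell whether its
# inputs come from a real surrounding circuit or from a lucky recording.

def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def xor_circuit(a: int, b: int) -> int:
    """A tiny 'PC': XOR built from four NAND gates."""
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# Run the whole circuit, recording what one internal gate saw.
recorded_inputs = []
for a in (0, 1):
    for b in (0, 1):
        t = nand(a, b)
        recorded_inputs.append((a, t))  # the inputs to the gate nand(a, t)
        assert xor_circuit(a, b) == (a ^ b)

# Now run just that one gate on the recording: identical local behaviour,
# but the 'complexity' of XOR is nowhere in what it computes.
isolated_outputs = [nand(a, t) for (a, t) in recorded_inputs]
print(isolated_outputs)  # same outputs the gate produced inside the circuit
```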

The problem is partially a sensory one. With the brain case, we accept that we have consciousness when the whole brain is there. It's still there when we separate the parts and wire them together. Is it still there when we fluke out the inputs on one side? Is it still there when the parts are not actually connected any more?

Over to you....

For decision-theoretic purposes, any statement which leads to the conclusion that none of your actions matter can be safely marked false without risk of negative consequences, since if it's true, no decision you make based on the belief that it's true can have consequences. "I am a Boltzmann brain" is one such statement - if you are, then you aren't embedded in a universe you can affect. So I mark that statement, and all its variations, false, and don't bother to guess their probability.

I don't think that completely follows. If someone is a Boltzmann brain then they should optimize their remaining fraction of a second to think thoughts that give them highest utility. This is an argument to think about having sex with whatever gender one prefers.

I agree with you when it comes to the decision theory aspects. I don't believe I am a BB, nor do I lose sleep over the possibility. Nor do I worry about the possibility that the universe may end abruptly... But I am interested in what philosophy these presumptions lead to.

> There is of course nothing to stop us from continuing this. Suppose we have a one-quarter brain? Much more probable. One millionth? Even more probable. Maybe even single elements of a nerve cell? More probable still.

If the question is whether our conscious experience might be attributable to a Boltzmann thingy, the lower bound on the size of the thingy is the amount of brain it takes to be conscious. I haven't dissolved consciousness, of course, but I would be surprised if a single neuron could be conscious, no matter what state it was in.

Consciousness comes from many neurons working in unity, so one neuron could not create a consciousness.

Maybe a lifetime is a single action potential. The "input" is provided because the precise configuration designating "existing", physically real sense impressions of a physically possible universe is necessary for such a configuration to exist at all - up to the limits of the knowledge of that subjective being's imagined experience. Even with that limit, such a configuration is still statistically more probable than the entire history of a universe happening in the exact way required to produce that moment.

If you can reproduce all the inputs, you can theoretically do it with a single neuron - like a hologram of a higher-dimensional object encoded onto a lower dimension. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3741678/

> The problem is partially a sensory one. With the brain case, we accept that we have consciousness when the whole brain is there. It's still there when we separate the parts and wire them together. Is it still there when we fluke out the inputs on one side? Is it still there when the parts are not actually connected any more?

Check this:

“Consciousness continues to confound us on all fronts — we haven’t even established what it’s good for,” Watts says. “It’s slow, metabolically expensive, and — as far as we can tell — unnecessary for intelligence. More fundamentally, we don’t have a clue how it works — how can the electrical firing of neurons produce the subjective sense of self? How can a bunch of ions hopping the synaptic gap result in the sense of this little thing behind the eyes that calls itself ‘I?’”

“One thing we have discovered is that consciousness involves synchrony — groups of neurons firing in sync throughout different provinces of the brain,” he says. “Something else we’ve known for some time is that when you split the brain down the middle — force the hemispheres to talk the long way around, via the lower brain, instead of using the fat high-bandwidth pipe of the corpus callosum — you end up with not one conscious entity but two, and those two entities develop different tastes, opinions, even different religious beliefs.”

“What this seems to point to is that consciousness is a function of latency — it depends upon the synchronous firing of far-flung groups of neurons, and if it takes too long for signals to cross those gaps, consciousness fragments. ‘I’ decoheres into ‘we,’” Watts says.

This seems to be really hitting on an issue that is only marginally related to Boltzmann brains and is made more confusing by the really counterintuitive stuff about Boltzmann brains.

Whenever one is trying to make any anthropic argument one has to ask what is an observer? If one believed in ontologically irreducible observers (something close to the classical notion of a soul in many cultures) this wouldn't be a problem. The problem here arises primarily from the difficulty in trying to understand what it means for something to be an observer in a universe where no observer seems to be irreducible.

Incidentally, I sometimes think that the Boltzmann brain argument is an argument that something is very wrong with our understanding of the eventual fate of the universe. The essential problem is that the idea doesn't add up to normality. I don't know how much this should impact my estimates at all (such as whether it should make me slightly doubt current estimates that say we won't have a Big Crunch). Anthropics can be really confusing.

UDT does anthropics without reference classes.

I think you're right - if there were a homunculus of some kind somewhere, then the problem apparently goes away (well, it goes inside the homunculus, where it remains as unsolved as ever). What is clear is that the complexity of our thoughts can't exist in a small enough partial brain; it needs the whole thing to be there - just as with the PC. The complexity is perhaps being hidden in the fiction of continuing to provide the inputs?

The problem can be solved by considering that only one moment needs to be statistically accounted for at a time; the next moment is then statistically just as likely as the one before, linked to it only by anticipation and memory, while existing completely independently through its own random nature. Conscious "frames" occur at only about 5-50 per second - much, much less of a data burden than, say, a Planck-level simulated reconstruction of an entire physical universe in exactly the form required to create those moments for that lifetime. http://discovermagazine.com/2007/jun/in-no-time

This idea is explored in Permutation City.

Thinking is a big pattern.

A single thread can be in a tapestry, but can't be the pattern of a tapestry. Once there's the shape of a tapestry, there's an attendant minimum thread count. No thread is necessary, but it takes many of them.

Blue is a big pattern. An atom can be part of something blue, but it can't be blue.

I don't think I can be much less than a brain, because I think. There's a minimum requirement of parts.

And part of this problem is that the Boltzmann brain's thought pattern is likely to be sheer random chaos, whereas we all here believe that our actual thoughts are not like that. It's the organisation that requires the minimum set of parts to be expressed. But how do we know that this actually happens at all?

I think that it's very unlikely for you to be able to give the correct inputs to a partial brain without simulating the rest of the brain.

Let's say you split it into the right and left hemispheres.

I doubt that the inputs to the left hemisphere - i.e. what the right hemisphere would output - have a lower Kolmogorov complexity than the right hemisphere itself. Compressing the outputs of a half-brain would probably have to take advantage of the regularities that come from its being a half-brain.

What's worse, the right hemisphere responds to outputs of the left. Having the inputs be correlated appropriately without referencing the right hemisphere seems wildly implausible.
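A crude way to see this (my sketch, not from the thread; Kolmogorov complexity isn't computable, so off-the-shelf compression stands in as a rough proxy, and the "hemisphere" is just a hypothetical chaotic toy): the generator is a few lines of code, but the stream it emits is essentially incompressible, so flukishly specifying the stream bit by bit is no cheaper than flukishly producing the generator itself.

```python
import zlib

def right_hemisphere(seed: int, n_bits: int) -> bytes:
    """Stand-in 'hemisphere': one bit per step of a chaotic logistic map."""
    x = seed / 2**32
    out, byte = bytearray(), 0
    for i in range(n_bits):
        x = 3.99 * x * (1.0 - x)        # chaotic regime of the logistic map
        byte = (byte << 1) | (x > 0.5)  # emit the high bit of the state
        if i % 8 == 7:
            out.append(byte)
            byte = 0
    return bytes(out)

stream = right_hemisphere(seed=123456789, n_bits=80_000)
print(len(stream), "raw bytes;", len(zlib.compress(stream, 9)), "compressed")
# The stream barely compresses: to a bit-by-bit fluke it looks as expensive
# as random noise, even though the program behind it is tiny.
```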

Assuming that you simulate the rest of the brain in order to get the inputs, then it seems like you're gradually switching substrate, rather than isolating consciousness.

I agree that there isn't likely to be any way of calculating the inputs without simulating the missing part of the brain. But that's not really my point. For every time the correct inputs happen by chance, there will be many other occasions where such a half-BB comes into existence and then gets completely incorrect information. But getting the correct information is definitely cheaper, in probabilistic terms, than getting the rest of the brain - it will happen more often.

That's what I'm disagreeing with - the assertion that it's more likely for you to "accidentally" get the other inputs than it is for you to just get the rest of the brain.

There are about 200-250 million axons in the corpus callosum, which runs between the right and left hemispheres, and about 7000 synapses per neuron.

P(a, b, c, ...) = P(a)P(b|a)P(c|a,b)...

If you don't have a brain, the individual P(x)s are pretty much independent, and in order to get a particular stimulus pattern you need a few hundred billion fairly unlikely things to happen at once. In order for the brain to have any sort of sustained existence, you need a few hundred billion fairly unlikely things to happen in a way that corresponds to brain states. So that's a few hundred billion unlikely things, times a few hundred billion, over and over - and more repetitions the longer you run it.

If you have a brain, you take massive penalties on a few P(x)s in order to "buy" your neurons, but after that, things aren't at all independent. Given what one synapse is doing, you have a much better guess at the other ~7000 on the same neuron. So you're only guessing a few hundred billion things if you just connect neurons to the other half.

However, neurons interact with each other, and given what everything connected to it is doing, you have a pretty good guess of what it's doing.

So you're guessing a few orders of magnitude fewer things, on top of the 3 orders of magnitude savings from encoding in neurons.

On top of that, sustained interaction with the other hemisphere is much cheaper: given that you already have the other half of the brain, it's fairly probable that it will respond the way the other half would over the next few seconds.
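Stating the counting argument slightly more formally (my paraphrase of the comment above, not new content): for synaptic states x_1, ..., x_N,

$$ P(x_1,\dots,x_N) \;=\; \prod_{i=1}^{N} P(x_i \mid x_1,\dots,x_{i-1}), $$

and in a brainless vacuum each factor is tiny and nearly independent, so the product behaves like p^N for some very small p. Once you have paid the one-off cost of "buying" the neurons, the strong correlations push each conditional factor towards 1, collapsing the joint probability from ~p^N down to something dominated by the cost of the neurons themselves.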

I didn't have the numbers for the axons in the corpus callosum, and they are interesting. The raw ceiling is high - each axon can spike at up to about 200Hz - but if we assume an effective information rate of roughly 20 bits per second per axon, the bit rate for the bus is about 4 Gigabits per second. If the brain lives a couple of minutes, you'll need about 400 Gigabits, or 50 Gigabytes. This means you get about 4 bytes per brain cell in the other hemisphere.
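As a sanity check on that arithmetic (my sketch; the axon count, effective bit rate, duration, and neuron count are all rough assumptions taken from the comments above):

```python
# Fermi estimate of the inter-hemisphere 'bus' (order-of-magnitude only).
axons = 200e6          # fibres in the corpus callosum (from the comment above)
bits_per_axon = 20.0   # assumed effective information rate per axon, bits/s
seconds = 100          # 'a couple of minutes', rounded down
neurons = 1e10         # rough neuron count for one hemisphere

bus_rate = axons * bits_per_axon       # ~4e9 bits/s, i.e. ~4 Gbit/s
total_bits = bus_rate * seconds        # ~4e11 bits, i.e. ~400 Gbit
per_neuron = total_bits / 8 / neurons  # ~5 bytes - within slop of "about 4"
print(f"{bus_rate / 1e9:.0f} Gbit/s, {total_bits / 1e9:.0f} Gbit, "
      f"{per_neuron:.0f} bytes per neuron")
```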

A single brain cell is so complex that nothing that complex could come into existence as a sheer coincidence over all space and time. It requires an evolutionary process to make something that complex. 4 bytes worth of coincidence happens essentially instantaneously.

> A single brain cell is so complex that nothing that complex could come into existence as a sheer coincidence over all space and time. It requires an evolutionary process to make something that complex.

I think you may have missed the point of the Boltzmann-brain hypothetical. As the volume of space and time goes to infinity, the probability of such a thing forming by chance converges to one.

> 4 bytes worth of coincidence happens essentially instantaneously.

I have no idea how to attach meaning to this sentence. Surely the frequency of a one-in-four-billion event depends on how many trials you conduct per unit time.

My fault for not describing this more specifically. I know that over truly vast expanses of space and time, it eventually becomes quite likely that a Boltzmann brain emerges somewhere in that vastness. But the space and time required are much greater than our observable universe offers, which is what I was referring to in the first case.

I guess my second sentence is intended to mean that any real universe gets through four billion events of the requisite size (cosmic-ray collisions, say) pretty quickly.

The interesting part of the hypothesis, as I understand it, is less that the probability of a Boltzmann brain approaches one as the universe grows older (trivially true) and more that the amount of negentropy needed to generate a universe is vastly, sillily larger than that needed to generate a small self-aware system that thinks it's embedded in a universe at some point in time -- and thus that anthropic considerations should guide us to favor the latter. This is of course predicated on the idea that the universe arose from a random event obeying the kind of probability distributions that govern vacuum fluctuations and similar events.

This may be accurate when you're still dealing with a hemisphere, but if you accept the principle that it would work, the probabilities are different with a single neuron. The inputs of one neuron over the course of one calculation are far simpler than the rest of the brain. In fact, any combination of inputs would correspond to something from some brain.

Technically, I guess even one neuron might be too much. It has a lot of inputs. Get it down to a few atoms and it definitely works.

While you are describing the changing physical "instantiation", you are keeping the idea of a brain fixed, and that fixed idea enforces the properties of the remaining pieces. The high-level properties then describe that idea, but not the physical pieces, which are given no control over the idea and its properties.

As you lose more brain, the "supply all the same inputs" task grows increasingly complex; I don't think you increase probability by assigning this task to a "different" entity.

It grows more complex at first, and then gets much less complex as the size of the part you need inputs for diminishes.

If my grandmother had wheels, she'd be a wagon.