Against the Boltzmann Brain

"Cognitive Instability" is a term coined by Sean Carroll for a type of argument against Boltzmann brain. Simply put: if you believe you are a disembodied brain that came into existence due to random fluctuation as the theory suggests, then you have no logical basis in believing said theory. Because the supporting evidence: your perceptions and memories, your very thought that the theory is correct, are the result of said fluctuation. You can't expect them to reflect true physics. It is unstable because it cannot be true and justifiably believed at the same time. 

This counter doesn't completely rule out your being a Boltzmann brain as a logical possibility; no argument can do that. Yet it convincingly shows why we shouldn't put our faith in it: it's unlikely that a fluctuation just happens to produce a brain, i.e. yourself, with the correct physical theory of the universe in its mind. Claiming the contrary would be arrogant and teleological. Like many ingenious arguments, "cognitive instability" has an element of self-reference to it.

What About Free Will?

Sean Carroll is probably not going to like this, but the same argument can be used against physicalism, and in particular against the denial of free will.

For this discussion, let's say something's behaviour is "free" if it cannot be fully explained by physics. In other words, it is not physically causally closed. This is not about whether we currently have the technical ability to analyze it, nor about whether the world is ultimately deterministic or stochastic. Free will means the behaviour fundamentally cannot be physically reduced, not even in principle. Most people, physicalists for certain, would say this kind of free will cannot be real; otherwise, it would be a direct violation of physical laws.

This view regards your thoughts and actions as the outcome of underlying physical processes, with no other contributing factor. Our sense of making free, conscious choices can be seen as an illusion, or as "what it feels like from the inside" of the actual processes, or maybe something else. But no matter how we choose to interpret it, it has no causal power.

But this means your beliefs are merely the product of a physical system called a person, driven by its internal dynamics and its interactions with the environment. The feeling of thinking based on reasoning and rationality has no bearing on the outcome. How, then, can you be certain that this particular physical system happens to believe the correct theories about the universe? Including theories like "I have no free will"?

Reviewing your own reasoning can't justify it, even if it seems impeccable, because all evaluation supervenes on some underlying physical process. We can write a program that returns "1+1=3"; say it can even double-check whether its code has been correctly executed. It would still hold that "1+1=3" is the right answer. Consider your body as a machine running an algorithm. What reason is there to say that my algorithm must be nothing like that program, that it ought to produce the correct metaphysical theories? Isn't that just as arrogant and teleological?
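
As a minimal sketch (in Python, with names invented purely for illustration), such a program might look like this:

```python
# A toy "reasoner" whose arithmetic rule is wrong by construction.
def add(a, b):
    return a + b + 1  # the broken rule: always off by one

def self_check(a, b, result):
    # The program's "review of its own reasoning": re-run the same rule
    # and confirm it reproduces the result. The check can only verify
    # that the code executed as written, not that the rule is true.
    return add(a, b) == result

answer = add(1, 1)
assert self_check(1, 1, answer)  # passes: execution was correct
print(f"1+1={answer}")           # prints "1+1=3", fully "verified"
```

The self-check passes because it evaluates the answer with the same faulty rule that produced it; internal consistency guarantees nothing about correctness.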

It has the same instability: it cannot be true and justifiably believed at the same time.

So What?

One can argue this changes nothing, because unlike in the Boltzmann brain's case, the alternative here is outlandishly absurd. I can't seriously be suggesting that I have the supernatural power of altering physical reality with my mind; that would be even more arrogant, and arguably inconsistent. It must be false. "Once you eliminate the impossible, whatever remains, however improbable, must be the truth." And "cognitive instability" only says "no free will" is improbable; it does not rule it out.

That is certainly a reasonable position, but it still makes me uncomfortable. It means it just so happens that I really am a physical system producing the correct metaphysical theory. That is a suspicious coincidence. We might be settling on the improbable too soon instead of looking harder for other alternatives.

Personally, I think we shouldn't view physics as an objective description of the world from a uniquely privileged "view from nowhere". This cognitive instability comes down to the dilemma of holding that theories ought to be judged from this transcendental viewpoint while they are produced and reviewed by worldly things, like you and me. Instead, maybe we should treat physics as inherently perspective-dependent. Among other benefits, it would not have this conflict and would be stable.


Start with being an embedded agent in a universe that is partially predictable from the inside; that agent has an internal model of the universe in order to survive. Some parts of this model are necessary, others are spurious. The belief in free will is an artifact of an introspecting subsystem colliding with an opaque decision-making subsystem. Knowledge is not "justified true belief" or whatever the heck, but an abstraction for that part of the internal model of the universe which is reasonably accurate for the purpose of making decisions. There are other parts of your internal model, the spurious ones, which do not reflect the universe. These give you musings like "what if my brain is an algorithm that produces 1+1=3".

There is nothing more to it.

I really like "starting with being an agent". In fact, I strongly argued for it. But the reality is that people often forgo this, regard the "view from nowhere" as the foundation, and attempt to draw the map from that perspective (anthropics being the prime example, IMO). Once this switch of viewpoints is allowed, there is no way to say whether "the internal model for decision-making" really "reflects the universe"; see, e.g., the debate over whether quantum states are epistemological or ontological.

Even the idea of a "decision" is challenged when the decision-maker is physically analyzed. We wouldn't say water decides to flow toward lower places. Looking at a decision-maker physically, where is the sense of decision in his actions? I think that is why "decision-making" problems like Newcomb's problem and the Twin Prisoner's Dilemma are paradoxical: they ask what the decision-maker would do in the introspective sense, while also making a physical analysis of the decision-maker part of the problem.

Yeah, this is exactly right imo. Thinking about good epistemics as believing what is "justified" or what you have "reasons to believe" is unimportant/useless insofar as it departs from "generated by a process that makes the ensuing map correlate with the territory". In the world where we don't have free will, but our beliefs are produced deterministically by our observations and our internal architecture in a way such that they are correlated with the world, we have all the knowledge that we need.

TAG:

We don't have processes for ensuring correspondence to reality... what we have are processes for ensuring predictive accuracy, which is not the same thing.

We get correspondence to reality through predictive accuracy; we can predict experience well using science because scientific theories are roughly isomorphic to the structures in reality that they are trying to describe.

TAG:

We have no way of knowing that, because such isomorphism cannot be checked directly.

Also, small increments in predictivity can be associated with major shifts in ontology. We do not know that we have arrived at the final theory, and the next theory could have a different ontology but only one extra significant digit of accuracy.

By "process," I don't mean internal process of thought involving an inference from perceptions to beliefs about the world, I mean the actual perceptual and cognitive algorithm as a physical structure in the world.  Because of the way the brain actually works in a deterministic universe, it ends up correlated with the external world.  Perhaps this is unknowable to us "from the inside," but the OP's argument is not about external world skepticism given direct access only to what we perceive, but rather that given normal hypotheses about how the brain works, we should not trust the beliefs it generates.  I am simply pointing out that this is false, because these normal hypotheses imply the kind of correlation that we want.

TAG:

By "process", I don't mean an internal process of thought involving an inference from perceptions to beliefs about the world; I mean the actual perceptual and cognitive algorithm as a physical structure in the world.

How do you know what that is? You don't have the ability to stand outside the mind-world relationship and perceive it, any more than anything else. You have beliefs about the mind-world relationship, but they are all generated by inference in your mind. If there were some hard core of non-inferential knowledge about the ontological nature of reality, you might be able to lever it to gain more knowledge, but there isn't, because the same objections apply.

Because of the way the brain actually works in a deterministic universe,

We don't know that the universe is deterministic. You are confusing assumptions with knowledge.

it ends up correlated with the external world.

The point is about correspondence. Neither correlations nor predictive accuracy amount to correspondence to a definite ontology.

I am simply pointing out that this is false, because these normal hypotheses imply the kind of correlation that we want.

We don't want correlation, we want correspondence. Correlation isn't causation, and it isn't correspondence.

Assuming the scientific model doesn't help, because the scientific model says that the way perceptions relate to the world is indirect, going through many intermediate causal stages. Since multiple things could possibly give rise to the same perceptions, a unique cause (i.e. a definite ontology) can't be inferred from perception alone.

How do you know what that is? You don't have the ability to stand outside the mind-world relationship and perceive it, any more than anything else. You have beliefs about the mind-world relationship, but they are all generated by inference in your mind. If there were some hard core of non-inferential knowledge about the ontological nature of reality, you might be able to lever it to gain more knowledge, but there isn't, because the same objections apply.

I'm not making any claims about knowing what it is. The OP's argument is that our normal deterministic model is self-refuting because it undermines our ability to have knowledge, so for the purposes of that argument the truth of the model can be assumed in the first place.

The point is about correspondence. Neither correlations nor predictive accuracy amount to correspondence to a definite ontology.

Yes, a large range of worlds with different ontologies implies the same observations. The further question of assigning probabilities to those different worlds comes down to how to assign initial priors, which is a serious epistemological problem. However, this seems unrelated to the point made in the OP, which is that determinism is self-undermining.

More broadly, I am confused as to what claim you think that I am making which you disagree with.

For what it's worth, I think there needs to be some clarification.

I didn't say our model is deterministic, nor whether it should be. And my argument is not about whether the correct definition of knowledge should be "justified true belief". And unless I have had the wrong impression, I don't think Sean Carroll's focus is on the definition of knowledge either. Instead, it's about what should be considered "true".

The usual idea of a theory being true if it faithfully describes an underlying objective physical reality (deterministic or not) is problematic. It suffers from the same pitfall as believing I am a Boltzmann brain. It is due to the dilemma that theories are produced and evaluated by worldly objects, while their truth ought to be judged from "a view from nowhere", a fundamentally objective perspective.

Start reasoning by recognizing that I am a particular agent, and you will not have this problem. I don't deny that; in fact, I think that is the solution to many paradoxes. But the majority of people would start reasoning from the "view from nowhere" and regard that as the only way. I think that is what has led people astray in many problems: decision paradoxes such as Newcomb's, anthropics, and, to a degree, quantum interpretations.

TAG:

More broadly, I am confused as to what claim you think that I am making which you disagree with.

What was the first thing you said that I disagreed with?

"generated by a process that makes the ensuing map correlate with the territory." In the world where we don't have free will, but our beliefs are produced deterministically by our observations and our internal architecture in a way such that they are correlated with the world, we have all the knowledge that we need.

I disagree with all of that.

I disagree that the world is known to be deterministic.

I disagree that you can found epistemology on ontology. You don't know that the mind-world relationship works in a certain way absent an epistemology that says so.

I disagree that we have all the knowledge we want or need.

I disagree that correlation is sufficient to solve the problem.

I think that Carroll's argument is wrong because it already assumes as true what it wants to disprove. 

His argument is the following claim: if I am a BB, I can't reason about physics or about the probability of being a BB. However, this claim is true only if I am a BB. But if I am interested only in the question "Am I a BB?", it doesn't matter how well I can reason about being a BB or about anything else, because the only thing I am interested in is already assumed to be true.

The same logic was used in attempts to disprove the simulation argument by Danila Medvedev (and maybe others). If I am in a simulation, I can't have a perfect model of the distribution of simulations in the universe, and, Danila suggests, this disproves the simulation argument.

Let's elaborate. One of two statements is true:

  1. I am real and simulations are impossible.
  2. I am in a simulation, but in that case, I can't perform perfect probability estimations.

The second clause of (2) doesn't have any logical power over the whole argument.

TAG:

If you can't perform any probability estimates, it's pretty devastating.

If I can't perform any probability estimates, then I am a Boltzmann brain. ))

TAG:

If you are a BB, you can't perform probability estimates, and Carroll's argument goes through.

Carroll's argument resembles a typical proof by contradiction: assume A; it produces a contradiction; therefore A is false. However, in Carroll's case, no matter what is happening with the BB, there is no contradiction, so it is not a proof that I am not a BB. This is even stronger when applied to the simulation argument, as a sim can think coherently.

But what Carroll actually said is that a BB can't know for sure that it is a BB.

But from my point of view, all of this is compensated for by the dust theory: for any BB1 there is a BB2 which looks like a logical continuation of BB1, just as for any integer N there is another integer N+1.

Therefore, for any BB which sees absurdity, there is another BB which recognises it as absurdity. Thus, in dust theory, chains of BBs can calculate the probability that they are BBs.

TAG:

Carroll's argument resembles a typical proof by contradiction

It actually is an argument that belief in BBs is self-undermining.

"A Bayesian analysis of self-undermining arguments in physics"

http://philsci-archive.pitt.edu/20487/

Therefore, for any BB which sees absurdity, there is another BB which recognises it as absurdity. Thus, in dust theory, chains of BBs can calculate the probability that they are BBs.

If you are a BB at all, you have a vanishing likelihood of being a sane BB.

Thanks for the link to Wallace's work. I think he also falls into the same trap: he fails to distinguish the "large" and "small" forms of self-undermining arguments.

The large one is a choice between two alternatives: "1) I am real; 2) I am a BB, and therefore can't have reasonable thoughts."

The small argument is only (2) from above: "I am a BB, and therefore I can't have coherent thoughts." Only the small argument is self-undermining; the large one stands, because there is no probability update on horn (1).

For example, suppose I have a 10 per cent a priori chance of being real, based on theoretical physics. I know that BBs can't update properly, but that is not evidence on which I should update, as it is not news to me.

Another example: I can be either awake or dreaming. While dreaming, I can't effectively decide whether I am awake or dreaming, nor properly calculate the chances of being in a dream. Sometimes I have lucid dreams and sometimes I have very realistic dreams. If we follow Carroll's logic, we should conclude that dreaming is impossible because it is cognitively unstable, and that I could be dreaming right now about commenting on LW. But in real life, we assume that dreaming takes up, say, 0.3 of our subjective experience, and we ignore any probability computations about dreaming made while dreaming.

I don't know if you're intentionally recapitulating this line of argument, but C. S. Lewis makes this argument in Miracles. There's a long history of the back-and-forth on Wikipedia.

I don't think it works, mostly because the fact that a belief is the result of a physical process doesn't tell me anything at all about the rationality or irrationality of the belief. Different physical processes should be judged differently; some are entangled with the resulting state of belief and others aren't.

Not intentional, but I didn't expect it to be a novel argument either. I suspect everyone has thought about it at some point in their life, likely while learning physics in secondary school. I just think "cognitive instability" is a nice handle for the discussion.

But this means your beliefs are merely the product of a physical system called a person, driven by its internal dynamics and its interactions with the environment. The feeling of thinking based on reasoning and rationality has no bearing on the outcome. How, then, can you be certain that this particular physical system happens to believe the correct theories about the universe? Including theories like "I have no free will"?

This seems to contradict itself. Earlier, you said that """free will""" was defined as decisions that cannot be explained by physical theories. But such """free""" decisions would not involve reasoning and rationality, as reasoning and rationality require a certain amount of order, similar to what is sufficient for physical and computational theories. So it is precisely if decisions are based on non-"""free""" factors that you can come to believe correct things about the universe.

I am a human. I believe that only a minuscule percentage of all humans who have ever lived are capable of reasoning about physics in environments that evolution has not directly equipped them to innately understand. If I think that I am one of the few people who actually can reason about physics outside such environments, don't I also have to think that I am probably mistaken? If not, then if I think I am a Boltzmann brain and accept that most Boltzmann brains can't properly reason, can't I think that I am an exception?

There is a slight problem with magnitude here.

At least 0.1% of all humans to have ever lived can do such reasoning. Furthermore, if you add other observable qualifiers to the set ("attended an educational institution", "citizen of a wealthy nation", "born after quantum mechanics was devised", "discusses physics with other people at all"), then your posterior probabilities go way up from the prior.

For Boltzmann brains, the posterior probability of reasonably accurate reasoning starts at less than 1 in 10^9999999999 and basically doesn't budge at all, given even extremely narrow observations. Even in a highly lawful universe, the incredibly vast majority of Boltzmann brains will have no correlation between their observations and their ability to reason (even if we restrict to only those brains where what goes on in them might constitute a mental state at all).
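
As a toy illustration of that asymmetry, here is a minimal Bayes computation; every number in it is hypothetical, chosen only to show the shape of the update:

```python
from fractions import Fraction

def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule: P(H | observation)."""
    num = prior * p_obs_given_h
    return num / (num + (1 - prior) * p_obs_given_not_h)

# H = "this reasoner's physics beliefs are reasonably accurate".
# For a human, the observation "attended university, discusses physics"
# is far more likely under H than under not-H, so the posterior rises.
human = posterior(Fraction(1, 1000),   # prior: 0.1% of all humans
                  Fraction(9, 10),     # P(obs | H): hypothetical
                  Fraction(1, 100))    # P(obs | not H): hypothetical

# For a Boltzmann brain, a memory of university is essentially
# uncorrelated with accurate reasoning: the likelihood ratio is ~1,
# so the (already absurdly small) prior barely moves.
bb = posterior(Fraction(1, 10**30),    # stand-in for 1/10^9999999999
               Fraction(1, 10**6),
               Fraction(1, 10**6))

print(float(human))  # ~0.083: an ~80x boost over the 0.001 prior
print(float(bb))     # ~1e-30: the observation changed nothing
```

An observation only moves the posterior when its likelihood differs between the hypotheses; for Boltzmann brains, it essentially never does.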

That's why I don't think these situations are even qualitatively similar.

What if we restrict ourselves to the class of Boltzmann brains that understand the concept of Boltzmann brains and have memories of having attended an educational institution and of having discussed quantum physics with other people?

If you're restricting to the class of "Boltzmann brains that understand the concept of Boltzmann brains", then you are conditioning on something other than observations. All you can observe is that you believe that you understand Boltzmann brains.

The proportion of Boltzmann brains making that observation and being essentially correct in that belief will have so many zeroes after the decimal point that you could easily fill a set of encyclopedias with them and have more to spare.

It is difficult to convey how ridiculously uncorrelated Boltzmann brains would be if they could exist. Yes, with infinite time or space many of them will be able to have sensory experiences, memories, and thoughts. Some microscopic fraction of those would have memories about attending university and discussing quantum physics, and thoughts along the lines of "If I were a Boltzmann brain then ...".

The incomprehensibly vast majority of those will have no further thoughts at all, being dead or otherwise incapable of thought any more. Of those that do go on to have more thoughts, the incomprehensibly vast majority will have thoughts as sensible and human-like as "nYqR8pwckvOBE84fKJ8vPMUWR3eYEbO6nXyOuSC". Of those that have any thoughts that we would recognize as vaguely human, the incomprehensibly vast majority will go on to continue in a manner less logically coherent than "... a ten foot bull dyke shoots pineapples with a machinegun."

So yes, if you were actually a Boltzmann brain then you could think you were an exception. But almost certainly, you would not. You would almost certainly not think anything even slightly related to your previous thoughts at all.

TAG:

But this means your beliefs are merely the product of a physical system called a person, driven by its internal dynamics and its interactions with the environment. The feeling of thinking based on reasoning and rationality has no bearing on the outcome. How, then, can you be certain that this particular physical system happens to believe the correct theories about the universe?

You can't, but you can't under any alternative, either. You could argue that you can have certainty about your own qualia, if you accept qualia, but that still doesn't mean you have certainty about your reasoning processes.

Instead, maybe we should treat physics as inherently perspective-dependent. Among other benefits, it would not have this conflict and would be stable.

I don't see why.

How, then, can you be certain that this particular physical system happens to believe the correct theories about the universe? Including theories like "I have no free will"?

"Certain to be correct?" No. All you can hope for is to be Less Wrong, hence the name of this site.

How you get there in a materialistic view is fairly straightforward: by the same type of process operating on different timescales and levels of abstraction.

Firstly, by the evolution of nervous systems and brains, encoding behaviour that better matches what organisms need to do in the world to survive. Secondly, with more powerful brains enabling complex behaviour to be learned, the subsequent evolution of acquired behaviour. Thirdly, with more powerful brains still, cultural evolution affecting patterns of thought and the formation of systems for the mass dissemination of ideas. Fourthly, a process of evolution, via science, of more detailed and explicit models of how things work.

By this stage of the process, you should expect the models that survive to be more accurate, not because they are willed to be so, but because the patterns of thought that produce such models enable success in the physical world for the organisms using them. It is of course possible that they're complete bunk that happens to work anyway, in which case too bad for us. What we can be pretty sure of, though, is that they're better than literally random noise.