Boltzmann Brains and Anthropic Reference Classes (Updated)

by pragmatist · 3 min read · 4th Jun 2012 · 113 comments

-7

Anthropics
Personal Blog

Summary: There are claims that Boltzmann brains pose a significant problem for contemporary cosmology. But this problem relies on assuming that Boltzmann brains would be part of the appropriate reference class for anthropic reasoning. Is there a good reason to accept this assumption?

Nick Bostrom's Self Sampling Assumption (SSA) says that when accounting for indexical information, one should reason as if one were a random sample from the set of all observers in one's reference class. As an example of the scientific usefulness of anthropic reasoning, Bostrom shows how the SSA rules out a particular cosmological model suggested by Boltzmann. Boltzmann was trying to construct a model that is symmetric under time reversal, but still accounts for the pervasive temporal asymmetry we observe. The idea is that the universe is eternal and, at most times and places, at thermodynamic equilibrium. Occasionally, there will be chance fluctuations away from equilibrium, creating pockets of low entropy. Life can only develop in these low entropy pockets, so it is no surprise that we find ourselves in such a region, even though it is atypical.

The objection to this model is that smaller fluctuations from equilibrium will be more common. In particular, fluctuations that produce disembodied brains floating in a high entropy soup with the exact brain state I am in right now (called Boltzmann brains) would be vastly more common than fluctuations that actually produce me and the world around me. If we reason according to SSA, the Boltzmann model predicts I am one of those brains and all my experiences are spurious. Conditionalizing on the model, the probability that my experiences are not spurious is minute. But my experiences are in fact not spurious (or at least, I must operate under the assumption that they are not if I am to meaningfully engage in scientific inquiry). So the Boltzmann model is heavily disconfirmed. [EDIT: As AlexSchell points out, this is not actually Bostrom's argument. The argument has been made by others. Here, for example.]
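The Bayesian shape of this disconfirmation can be sketched numerically. Here is a minimal sketch, with made-up prior and likelihood values (the argument only requires that the Boltzmann model assign a minute probability to one's experiences being non-spurious):

```python
def posterior(prior_model, p_evidence_given_model, p_evidence_given_alt):
    """Posterior probability of a model after conditionalizing on evidence,
    via Bayes' theorem over two mutually exclusive hypotheses."""
    prior_alt = 1.0 - prior_model
    joint_model = prior_model * p_evidence_given_model
    joint_alt = prior_alt * p_evidence_given_alt
    return joint_model / (joint_model + joint_alt)

# Evidence: my experiences are not spurious (I am not a Boltzmann brain).
# All numbers below are illustrative assumptions, not figures from the post.
prior_boltzmann = 0.5        # generous prior for Boltzmann's model
p_given_boltzmann = 1e-20    # model says nearly all observers like me are BBs
p_given_ordinary = 0.99      # an ordinary low-entropy-past cosmology

print(posterior(prior_boltzmann, p_given_boltzmann, p_given_ordinary))
# vanishingly small: the Boltzmann model is heavily disconfirmed
```

The exact numbers are irrelevant; so long as the model makes non-spurious experience astronomically improbable while rivals do not, conditionalizing on that evidence crushes the model's posterior.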

Now, no one (not even Boltzmann) actually believed the Boltzmann model, so this might seem like an unproblematic result. Unfortunately, it turns out that our current best cosmological models also predict a preponderance of Boltzmann brains. They predict that the universe is evolving towards an eternally expanding cold de Sitter phase. Once the universe is in this phase, thermal fluctuations of quantum fields will lead to an infinity of Boltzmann brains. So if the argument against the original Boltzmann model is correct, these cosmological models should also be rejected. Some people have drawn this conclusion. For instance, Don Page considers the anthropic argument strong evidence against the claim that the universe will last forever. This seems like the SSA's version of Bostrom's Presumptuous Philosopher objection to the Self Indication Assumption, except here we have a presumptuous physicist. If your intuitions in the Presumptuous Philosopher case lead you to reject SIA, then perhaps the right move in this case is to reject SSA.

But maybe SSA can be salvaged. The rule specifies that one need only consider observers in one's reference class. If Boltzmann brains can be legitimately excluded from the reference class, then the SSA does not threaten cosmology. But Bostrom claims that the reference class must at least contain all observers whose phenomenal state is subjectively indistinguishable from mine. If that's the case, then all Boltzmann brains in brain states sufficiently similar to mine such that there is no phenomenal distinction must be in my reference class, and there's going to be a lot of them.

Why accept this subjective indistinguishability criterion though? I think the intuition behind it is that if two observers are subjectively indistinguishable (it feels the same to be either one), then they are evidentially indistinguishable, i.e. the evidence available to them is the same. If A and B are in the exact same brain state, then, according to this claim, A has no evidence that she is in fact A and not B. And in this case, it is illegitimate for her to exclude B from her anthropic reference class. For all she knows, she might be B!

But the move from subjective indistinguishability to evidential indistinguishability seems to ignore an important point: meanings ain't just in the head. Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ. The contents of these states depend not just on the brain state but also on the brain's environment and causal history. For instance, I have beliefs about Barack Obama. A spontaneously congealed Boltzmann brain in an identical brain state could not have those beliefs. There is no appropriate causal connection between Obama and that brain, so how could its beliefs be about him? And if we have different beliefs, then I can know things the brain doesn't know. Which means I can have evidence the brain doesn't have. Subjective indistinguishability does not entail evidential indistinguishability.

So at least this argument for including all subjectively indistinguishable observers in one's reference class fails. Is there another good reason for this constraint I haven't considered?

Update: There seems to be a common misconception arising in the comments, so I thought I'd address it up here. A number of commenters are equating the Boltzmann brain problem with radical skepticism. The claim is that the problem shows that we can't really know we are not Boltzmann brains. Now this might be a problem some people are interested in. It is not one that I am interested in, nor is it the problem that exercises cosmologists. The Boltzmann brain hypothesis is not just a physically plausible variant of the Matrix hypothesis.

The purported problem for cosmology is that certain cosmological models, in conjunction with the SSA, predict that I am a Boltzmann brain. This is not a problem because it shows that I am in fact a Boltzmann brain. It is a problem because it is an apparent disconfirmation of the cosmological model. I am not actually a Boltzmann brain, I assure you. So if a model says that it is highly probable I am one, then the observation that I am not stands as strong evidence against the model. This argument explicitly relies on the rejection of radical skepticism.

Are we justified in rejecting radical skepticism? I think the answer is obviously yes, but if you are in fact a skeptic then I guess this won't sway you. Still, if you are a skeptic, your response to the Boltzmann brain problem shouldn't be, "Aha, here's support for my skepticism!" It should be "Well, all of the physics on which this problem is based comes from experimental evidence that doesn't actually exist! So I have no reason to take the problem seriously. Let me move on to another imaginary post."



[M]eanings ain't just in the head. Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ.

They can differ, in the sense you specified, but they can't be distinguished by the brains themselves, and so the distinction can't be used in reasoning and decision making performed by the brains.

pragmatist (9y, -4): Do you really think the assumption that the external world (as we conceive it) is real can't be used in reasoning and decision making performed by us? It is used all the time, to great effect. Do you think Bostrom is wrong to use anthropic reasoning as a basis for disconfirming Boltzmann's model? After all, the assumption there is that Boltzmann's model goes wrong in predicting that we are Boltzmann brains. We know that we're not, so this is a bad prediction. This piece of reasoning seems to be something you deny we can actually do.
Nornagest (9y, 4): Given any particular instantaneous brain state, later evidence consistent with that brain state is evidence against the Boltzmann model, since random fluctuation is vastly more likely to generate something subjectively incoherent. With a sufficient volume of such evidence I'd feel comfortable concluding that we reside in an environment or simulation consistent with our subjective perceptions. But that doesn't actually work: we only have access to an instantaneous brain state, not a reliable record of past experience, so we can't use this reasoning to discredit the Boltzmann model. In a universe big enough to include Boltzmann brains a record of a causal history appears less complex than an actual causal history, so we should favor it as an interpretation of the anthropic evidence. I'll admit I find this subjectively hard to buy, but that's not the same thing as finding an actual hole in the reasoning. Starting with "we know we're not Boltzmann brains" amounts to writing your bottom line first.
wedrifid (9y, 2): It is evidence that the model of the past brain state being a Boltzmann brain is incorrect. It unfortunately can't tell you anything about whether you are a Boltzmann brain now who just thinks that he had a past where he thought he might have been a Boltzmann brain.
Nornagest (9y, 0): Yeah, that's what I was trying to convey with the second half of that paragraph. I probably could have organized it better.
pragmatist (9y, -1): Only if my argument is intended to refute radical skepticism. It's not. See the update to my post. It's true that the argument, like every other argument in science, assumes that external world skepticism is false. But I guess I don't see that as a problem unless one is trying to argue against external world skepticism in the first place.
Nornagest (9y, 5): This seems confused. Boltzmann's model only has any interesting consequences if you at least consider external-world skepticism; if you use a causal history to specify any particular agent and throw out anything where that doesn't line up with experiential history, then of course we can conclude that Boltzmann brains (which generally have a causal history unrelated to their experiential history, although I suppose you could imagine a Boltzmann brain with a correct experiential history as a toy example) aren't in the right reference class. But using that as an axiom in an argument intended to prove that Boltzmann brains don't pose a problem to current cosmological models amounts to defining the problem away.
pragmatist (9y, -1): Here's the structure of the purported problem in cosmology:
(1) Model X predicts that most observers with subjective experience identical to mine are Boltzmann brains.
(2) I am not a Boltzmann brain.
(3) The right way to reason anthropically is the SSA.
(4) The appropriate reference class used in the SSA must include all observers with subjective experience identical to mine.
CONCLUSION: Model X wrongly predicts that I'm a Boltzmann brain.
I am not attacking any of the first 3 premises. I am attacking the fourth. Attacking the fourth premise does not require me to establish that I'm not a Boltzmann brain. That's a separate premise in the original argument. It has already been granted by my opponent. So I don't see how assuming it, in an objection to the argument given above, amounts to writing my bottom line first.
Nornagest (9y, 1): Your objection assumes that we can distinguish observers by their causal history rather than their subjective experience, and that we can discard agents for whom the two don't approximately correspond. This is quite a bit more potent than simply assuming you're not a Boltzmann brain personally: if extrapolated to all observers, then no (or very few) Boltzmann brains need be considered. The problematic agents effectively don't exist within the parts of the model you've chosen to look at. Merely assuming you're not a Boltzmann brain, on the other hand, does lead to the apparent contradiction in the parent -- but I don't think it's defensible as an axiom in this context. Truthfully, though, I wouldn't describe the cosmological problem in the terms you've used. It's more that most observers with your subjective experience are Boltzmann brains under this cosmological model, and Boltzmann brains' observations do not reliably reflect causal relationships, so under the SSA this cosmology implies that any observations within it are most likely invalid and the cosmology is therefore unverifiable. This does have some self-reference in it, but it's not personal in the same sense, and including "I am not a Boltzmann brain" in the problem statement is incoherent.
pragmatist (9y, 0): I'm not sure what you mean by this. I'm claiming we need not consider the possibility that we are Boltzmann brains when we are reasoning anthropically. I'm not claiming that Boltzmann brains are not observers (although they may not be), nor am I claiming that they do not exist. I also think that if a Boltzmann brain were reasoning anthropically (if it could), then it should include Boltzmann brains in its reference class. So I don't think the claims I'm making can be extrapolated to all observers. They can be extrapolated to other observers sufficiently similar to me.
pragmatist (9y, 0): I hope this is not the case, since I don't believe this. I think it's pretty likely that our universe will contain many Boltzmann brain type observers whose subjective experience is not a reliable record of their causal history (or any sort of record at all, really). Could you clarify where my objection relies on this assumption? The problem is often presented (including by Bostrom) as a straight Bayesian disconfirmation of models like Boltzmann's. That seems like a different argument from the one you present. Why? The other three premises do not imply that I am a Boltzmann brain. They only imply that model X predicts I'm a Boltzmann brain. That doesn't conflict with the second premise.
Nornagest (9y, 0): That was poorly worded. I'd already updated the grandparent before you posted this; hopefully the revised version will be clearer. I was talking about my formulation of the problem, not yours. Assuming you're not a Boltzmann brain does lead to a contradiction with one of my premises, specifically the one about invalid observations.
Dolores1984 (9y, 1): That's because it is. We DON'T know that we're not Boltzmann brains. There would be no possible way for us to tell.

My question is why ever exclude a conscious observer from your reference class? Your reference class is basically an assumption you make about who you are. Obviously, you have to be conscious, but why assume you're not a Boltzmann brain? If they exist, you might be one of them. A Boltzmann brain that uses your logic would exclude itself from its reference class, and therefore conclude that it cannot be itself. It would be infinitely wrong. This would indicate that the logic is faulty.

There is no appropriate causal connection between Obama and that brain, so how could its beliefs be about him?

That's just how you're defining belief. If the brain can't tell, it's not evidence, and therefore irrelevant.

pragmatist (9y, -2): One way to see the difference between my representational states and the Boltzmann brain's is to think counterfactually. If Barack Obama had lost the election in 2008, my current brain state would have been different in (at least partially) predictable ways. I would no longer have the belief that he was President, for instance. The Boltzmann brain's brain states don't possess this counterfactual dependency. Doesn't this suggest an epistemic difference between me and the Boltzmann brain?
pragmatist (9y, -2): I don't think this is a mere definitional matter. If I have evidence, it must correspond to some contentful representation I possess. Evidence is about stuff out there in the world, it has content. And it's not just definitional to say that representations don't acquire content magically. The contentfulness of a representation must be attributable to some physical process linking the content of the representation to the physical medium of the representation. If a piece of paper spontaneously congealed out of a high entropy soup bearing the inscription "BARACK OBAMA", would you say it was referring to the President? What if the same inscription were typed by a reporter who had just interviewed the President? Recognizing that representation depends on physical relationships between the object (or state of affairs) represented and the system doing the representing seems to me to be crucial to fully embracing naturalism. It's not just a semantic issue (well, actually, it is just a semantic issue, in that it's an issue about semantics, but you get what I mean). And I don't know what you mean when you say "If the brain can't tell...". Not only does the Boltzmann brain lack the information that Barack Obama is President, it cannot even form the judgment that it possesses this information, since that would presuppose that it can represent the content of the belief. So in this case, I guess my brain can tell that I have the relevant evidence, and the Boltzmann brain cannot, even though they are in the same state. Or did you mean something about identical phenomenal experience by "the brain can't tell..."? That just begs the question. The Boltzmann brain would not be using my logic. In my post, I refer to a number of things to which a Boltzmann brain could not refer, such as Boltzmann. I doubt that one could even call the brain states of a Boltzmann brain genuinely representational, so the claim that it is engaged in reasoning is itself questionable. I am reminded here of a […]
DanielLC (9y, 2): Do beliefs feel differently from the inside if they are internally identical, but don't correspond to the same outside world?
pragmatist (9y, 0): I'm pretty sure identical brain states feel the same from the inside. I'm not sure that it feels like anything in particular to have a belief. What do you think about what I say in this comment [http://lesswrong.com/r/discussion/lw/cuj/boltzmann_brains_and_anthropic_reference_classes/6qy0]?
[anonymous] (9y, 6):

But the move from subjective indistinguishability to evidential indistinguishability seems to ignore an important point: meanings ain't just in the head. Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ. The contents of these states depend not just on the brain state but also on the brain's environment and causal history. For instance, I have beliefs about Barack Obama. A spontaneously congealed Boltzmann brain in an identical brain state could not have those beliefs.

[…]

I once ran across OP's argument as an illustration of the Twin Earth example applied to the simulation/brain-in-a-vat argument: "you can't be a brain in a vat because your beliefs refer to something outside yourself!" My reaction was, how do you know what beliefs-outside-your-head feel like as compared to the fake vat alternative? If there is no subjective difference, then it does no epistemological work.

Alejandro1 (9y, 8): It was Putnam who started the idea of refuting the brain-in-vat hypothesis, with semantic externalism, in this paper [http://www.cavehill.uwi.edu/bnccde/ph29a/putnam.html]. The money quote: […] And a nice counterargument from Nagel's The View From Nowhere: […]
gwern (9y, 0): Our teacher always made us read the original papers, so this must be it.
pragmatist (9y, 0): How could it know its beliefs look like they are about Obama? How does it even know who Obama is?
[anonymous] (9y, 2): Why do you think you know who Obama is? Because your neurons are arranged with information that refers to some Obama character. From the inside, you think "Obama" and images of a nice black man in a suit saying things about change play through your mind. The point of the Boltzmann brain is that it is arranged to have the same instantaneous thoughts as you.
pragmatist (9y, -1): That's not all there is to my belief that I know who Obama is. The arrangement of neurons in my brain is just syntax. Syntax doesn't come pre-equipped with semantic content. The semantics of my belief -- the fact that it's a belief about Obama, for instance -- comes from causal interactions between my brain and the external world. Causal interactions that the Boltzmann brain has not had. The particular pattern of neuronal activation (or set of such patterns) that instantiates my concept of Obama corresponds to a concept of Obama because it is appropriately correlated with the physical object Barack Obama. The whole point of semantic externalism is that the semantic content of our mental representations isn't just reducible to how they feel from the inside.
TheOtherDave (9y, 1): Just to make sure I understand your claim... a question. My brain has a set of things that, in normal conversation, I would describe as beliefs about the shoes I'm wearing. For convenience, I will call that set of things B. I am NOT claiming that these things are actually beliefs about those shoes, although they might be. Suppose B contains two things, B1 and B2 (among others). Suppose B1 derives from causal interactions with, and is correlated with, the shoes I'm wearing. For example, if we suppose my shoes are brown, B1 might be the thing that underlies my sincerely asserting that my shoes are brown. Suppose B2 is not correlated with the shoes I'm wearing. For example, B2 might be the thing that underlies my sincerely asserting that my shoes are made of lithium. If I'm understanding you correctly, you would say that B1 is a belief about my shoes. I'm moderately confident that you would also say that B2 is a belief about my shoes, albeit a false one. (Confirm/deny?) Supposing that's right, consider now some other brain that, by utter coincidence, is identical to mine, but has never in fact interacted with any shoes in any way. That brain necessarily has C1 and C2 that correspond to B1 and B2. But if I'm understanding you correctly, you would say that neither C1 nor C2 are beliefs about shoes. (Confirm/deny?) Supposing I've followed you so far, what would you call C1 and C2?
pragmatist (9y, 2): "Correlation" was a somewhat misleading word for me to use. The sense in which I meant it is that there's some sort of causal entanglement (to use Eliezer's preferred term) between the neuronal pattern and an object in the world. That entanglement exists for both B1 and B2. B2 is still a belief about my shoes. It involves the concept of my brown shoes, a concept I developed through causal interaction with those shoes. So both B1 and B2 have semantic content related to my shoes. B2 says false things about my shoes and B1 says true things, but they both say things about my shoes. C1 and C2 are not beliefs about my shoes. There is no entanglement between those brain states and my shoes. What I would call C1 and C2 depends on the circumstances in which they arose. Say they arose through interaction with extremely compelling virtual reality simulations of shoes that look like mine. Then I'd say they were beliefs about those virtual shoes. Suppose they arose randomly, without any sort of appropriate causal entanglement with macroscopic objects. Then I'd say they were brain states of the sort that could instantiate beliefs, but weren't actually beliefs due to lack of content.
TheOtherDave (9y, 0): Cool, thanks for the clarification. Two things. First, and somewhat tangentially: are you sure you want to stand by that claim about simulations of shoes? It seems to me that if I create VR simulations of your shoes, those simulations are causally entangled (to use the same term you're using) with your shoes, in which case C1 and C2 are similarly entangled with your shoes. No? Second, and unrelatedly: OK, let's suppose C1 and C2 arise randomly. I agree that they are brain states, and I agree that they could instantiate beliefs. Now, consider brain states C3 and C4, which similarly correspond to my actual brain's beliefs B3 and B4, which are about my white socks in the same sense that B1 and B2 are about my brown shoes. C3 and C4 are also, on your model, brain states of the sort that could instantiate beliefs, but aren't in fact beliefs. (Yes?) Now, we've agreed that B1 and B2 are beliefs about brown shoes. Call that belief B5. Similarly, B6 is the belief that B3 and B4 are beliefs about white socks. And it seems to follow from what we've said so far that brain states C5 and C6 exist, which have similar relationships to C1-C4. If I understand you, then C5 and C6 are beliefs on your model, since they are causally entangled with their referents (C1-C4). (They are false, since C1 and C2 are not in fact beliefs about brown shoes, but we've already established that this is beside the point; B2 is false as well, but is nevertheless a belief.) Yes? If I've followed you correctly so far, my question: should I expect the brain that instantiates C1-C6 to interact with C5/C6 (which are beliefs) any differently than the way it interacts with C1-C4 (which aren't)? For example, would it somehow know that C1-C4 aren't beliefs, but C5-C6 are?
pragmatist (9y, 0): I'm not sure I'd call C5 and C6 full-fledged beliefs. There is still content missing. C5, as you characterized it, is the brain state in the BB identical to my B5. B5 says "B1 and B2 are beliefs about brown shoes." Now B5 gets its content partially through entanglement with B1 and B2. That part holds for C5 as well. But part of the content of B5 involves brown shoes (the "... about brown shoes" part), actual objects in the external world. The corresponding entanglement is lacking for C5. If you change B5 to "B1 and B2 are beliefs", then I think I'd agree that C5 is also a belief, a false belief that says "C1 and C2 are beliefs." Of course this is complicated by the fact that we don't actually have internal access to our brain states. I can refer to my brain states indirectly, as "the brain state instantiating my belief that Obama is President", for instance. But this reference relies on my ability to refer to my beliefs, which in turn relies on the existence of those beliefs. And the lower-order beliefs don't exist for the BB, so it cannot refer to its brain states in this way. Maybe there is some other way one could make sense of the BB having internal referential access to its brain states, but I'm skeptical. Still, let me grant this assumption in order to answer your final questions. Not really, apart from the usual distinctions between the way we interact with higher order and lower order belief states. No.
TheOtherDave (9y, 0): OK, cool. I think I now understand the claim you're making... thanks for taking the time to clarify.

I think the intuition behind it is that if two observers are subjectively indistinguishable (it feels the same to be either one), then they are evidentially indistinguishable, i.e. the evidence available to them is the same ... But the move from subjective indistinguishability to evidential indistinguishability seems to ignore an important point: meanings ain't just in the head. Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ. The contents of these states depend not just on the brain state but also on the brain's environment and causal history.

[…]
pragmatist (9y, 0): Let's say you have a computer set up to measure the temperature in a particular room to a high precision. It does this using input from sensors placed around the room. The computer is processing information about the room's temperature. Anthropomorphizing a little, one could say it has evidence of the room's temperature; evidence it received from the sensors. Now suppose there's another identical computer somewhere else running the same software. Instead of receiving inputs from temperature sensors, however, it is receiving inputs from a bored teenager randomly twiddling a dial. By a weird coincidence, the inputs are exactly the same as the ones on your computer, to the point that the physical states of the two computers are identical throughout the processes. Do you want to say the teenager's computer also has evidence of the room's temperature? I hope not. Would your answer be different if the computers were sophisticated enough to have phenomenal experience? As for your example, the criterion of ontological identity you offer seems overly strict. I don't think failing to eat the sandwich would have turned you into a different person, such that my duplicate's beliefs would have been about something else. But this does seem like a largely semantic matter. Let's say I accept your criterion of ontological identity. In that case, yes, my duplicate and I will be (slightly) evidentially distinguishable. This doesn't seem like that big of a bullet to bite.
Mitchell_Porter (9y, 6): But they have no information about what I actually ate for breakfast! What is the "evidence" that allows them to be distinguished? This term "evidentially distinguishable" is not the best because it potentially mixes up whether you have evidence now, with whether you could obtain evidence in the future. You and your duplicate might somehow gain evidence, one day, regarding what I had for breakfast; but in the present, you do not possess such evidence. This whole line of thought arises from a failure to distinguish clearly between a thing, and your concept of the thing, and the different roles they play in belief. Concepts are in the head, things are not, and your knowledge is a lot less than you think it is.
pragmatist (9y, -3): I have evidence that Mitchell1 thinks there are problems with the MWI. My duplicate has evidence that Mitchell2 thinks there are problems with the MWI. Mitchell1 and Mitchell2 are not identical, so my duplicate and I have different pieces of evidence. Of course, in this case, neither of us knows (or even believes) that we have different pieces of evidence, but that is compatible with us in fact having different evidence. In the Boltzmann brain case, however, I actually know that I have evidence that my Boltzmann brain duplicate does not, so the evidential distinguishability is even more stark. I don't think I'm failing to distinguish between these. Our mental representations involve concepts, but they are not (generally) representations of concepts. My beliefs about Obama involve my concept of Obama, but they are not (in general) about my concept of Obama. They are about Obama, the actual person in the external world. When I talk of the content of a representation, I'm not talking about what the representation is built out of, I'm talking about what the representation is about. Also, I'm pretty sure you are using the word "knowledge" in an extremely non-standard way (see my comment below).
DanielLC (9y, 1): Yes. It has not proven that its input is not connected to sensors in that room. There is a finite prior probability that it is. As such, that output is more likely given that that room is that temperature.
pragmatist (9y, 0): We could set up the thought experiment so that it's extraordinarily unlikely that the teenager's computer is receiving input from the sensors. It could be outside the light cone, say. This might still leave a finite prior probability of this possibility, but it's low enough that even the favorable likelihood ratio of the subsequent evidence is insufficient to raise the hypothesis to serious consideration. In any case, the analog of your argument in the Boltzmann brain case is that there might be some mechanism by which the brain is actually getting information about Obama, and its belief states are appropriately caused by that information. I agree that if this were the case then the Boltzmann brain would in fact have beliefs about Obama. But the whole point of the Boltzmann brain hypothesis is that its brain state is the product of a random fluctuation, not coherent information from a distant planet. So in this case, the hypothesis itself involves the assumption that the teenager's computer is causally disconnected from the temperature sensors. Do you agree that if the teenager's computer were not receiving input from the sensors, it would be inaccurate to say it has evidence about the room's temperature?
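The quantitative claim in the preceding comment, that a favorable likelihood ratio cannot lift a sufficiently tiny prior to serious consideration, can be sketched in the odds form of Bayes' theorem. The numbers below are purely hypothetical:

```python
def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds * likelihood ratio (odds form of Bayes)."""
    return prior_odds * likelihood_ratio

# Hypothetical numbers: the dial-twiddler's computer matching the distant
# room's temperature strongly favors a sensor connection, but the prior
# on such a connection (e.g. across a light cone) is minuscule.
prior_odds = 1e-40        # prior odds that the inputs track the room
likelihood_ratio = 1e6    # how strongly the matching readings favor it

print(update_odds(prior_odds, likelihood_ratio))
# still astronomically small: the hypothesis remains negligible
```

However large a likelihood ratio the evidence supplies, it multiplies rather than replaces the prior odds, so a small enough prior keeps the posterior negligible.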
1DanielLC9yIf the computer doesn't know it's outside of the lightcone, that's irrelevant. The room may not even exist, but as long as the computer doesn't know that, it can't eliminate the possibility that it's in that room. The probability of it being that specific room is far too low to be raised to serious consideration. That said, the utility function of the computer is such that that room or anything even vaguely similar will matter just about as much. Only if the computer knows it's not receiving input from the sensors. It has no evidence of the temperature of the room given that it's not receiving input from the sensors, but it does have evidence of the temperature of the room given that it is receiving input from the sensors, and the probability that it's receiving input from the sensors is finite (it isn't, but it doesn't know that), so it ends up with evidence of the temperature of the room.

Is it legitimate to hold that the possibility of being a Boltzmann brain doesn't matter because there's no choice a Boltzmann brain can make which makes any difference? Therefore, you might as well assume that you're at least somewhat real.

Boltzmann brains don't seem like the same sort of problem as being in a simulation--if you're in a simulation, there might be other entities with similar value to yourself, you could still have quality of life (or lack of same), and it might be to your advantage to get a better understanding of the simulation.

At this point, I'm thinking about simulated Boltzmann brains, and in fact, this conversation leads to very sketchy simulated Boltzmann brains in anyone who reads it.

2JoshuaZ9yWell, if you are a Boltzmann brain, then your best bet may be to maximize enjoyment for the few fractions of a second you have before dissolving into chaos. So if you assign a high probability that you are a Boltzmann brain, maybe you should spend the time fantasizing about whatever gender or genders you prefer.
5NancyLebovitz9yHow long are Boltzmann brains likely to last? I'd have thought that the vast majority of them flicker in and out of existence too quickly for any sort of thought or choice. On the other hand, I suppose that if you have an infinite universe, there might be an infinite number of Boltzmann brains which last long enough for a thought, and even an infinite number which last long enough to die in vacuum.

For instance, I have beliefs about Barack Obama. A spontaneously congealed Boltzmann brain in an identical brain state could not have those beliefs. There is no appropriate causal connection between Obama and that brain, so how could its beliefs be about him?

This is playing games with words, not saying anything new or useful. It presumes a meaning of "belief" such that there can be no such thing as an erroneous or unfounded belief, and that's just not how the word "belief" is used in English.

-1pragmatist9yYou have misunderstood what I am saying. It is definitely not a consequence of my claim that there are no erroneous or unfounded beliefs. One can have a mistaken belief about Obama (such as the belief that he was born in Kenya), but for it to be a belief about Obama, there must be some sort of causal chain linking the belief state to Obama.
5David_Gerard9ySo what you mean is that the Boltzmann brain can have no causally-connected beliefs about Obama, not no beliefs-as-everyone-else-uses-the-word about Obama. Fine, but your original statement and your clarification still gratuitously repurpose a word with a conventional meaning in a manner that will be actively misleading to the reader, and doing this is very bad practice.

But the move from subjective indistinguishability to evidential indistinguishability seems to ignore an important point: meanings ain't just in the head.

Whether or not a Boltzmann brain could successfully refer to Barack Obama doesn't change the fact that your Boltzmann brain copy doesn't know it can't have beliefs about Barack Obama. It's a scenario of radical skepticism. We can deny that Boltzmann brains have knowledge, but they don't know any better.


But the move from subjective indistinguishability to evidential indistinguishability seems to ignore an important point: meanings ain't just in the head. Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ. The contents of these states depend not just on the brain state but also on the brain's environment and causal history.

You're assuming that there exists something like our universe, with at least one full human being like you having beliefs causally entwined with Oba... (read more)

[-][anonymous]9y 2

I found an article that claims to debunk the Boltzmann brain hypothesis, but I can't properly evaluate everything he is saying. http://motls.blogspot.com.es/2008/08/boltzmann-brains-trivial-mistakes.html

1khafra9yInteresting article; and Motl is a well-known, knowledgeable, and sometimes correct debunker. I didn't disagree substantially with anything up until here: I'm not exactly sure what he means, but it seems like even if the brains are not local to anything else, they are the observers; so the objection seems moot.

Two points, in response to your update:

Firstly, I'd say that the most common point of disagreement between you and the people who have responded in the thread is not that they take skepticism more seriously than you; it is that they disagree with you about the implications of semantic externalism. You say "Subjective indistinguishability does not entail evidential indistinguishability." I think most people here intuitively disagree with this, and assume your "evidence" (in the sense of the word that comes into Bayesian reasoning) includ... (read more)

3pragmatist9yYeah, I'm getting this now, and I must admit I'm surprised. I had assumed that accepting some form of semantic externalism is obviously crucial to a fully satisfactory naturalistic epistemology. I still think this is true, but perhaps it is less obvious than I thought. I might make a separate post defending this particular claim. You're right that the BB-based skeptical argument you offer is a different argument for skepticism than brains-in-vats. I'm not sure it's a more serious argument, though. The second premise in your argument ("If current physics is not essentially correct, I know nothing about the universe.") seems obviously false. Also the implication that I am very likely to be a BB does not come just from current physics. It comes from current physics in conjunction with something like SSA. So there's a third horn here, which says SSA is incorrect. And accepting this doesn't seem to have particularly dire consequences for our epistemological status.

I feel like I should be able to hyperlink this to something, but I can't find anything as relevant as I remembered. So here goes:

Your reference class is not fixed. Nor is it solely based on phenomenal state, I'd argue, although this second claim is not well-supported.

That is, Boltzmann brains are in your reference class when dealing with something all sentiences deal with; for progressively more situation-specific reasoning, the measure of Boltzmann brains in your reference class shrinks. By dealing with concrete situations one ought to be able to shrink the measure to epsilon.

-2pragmatist9yI think Bostrom's claim is that no matter what situation you're dealing with, all observers that are subjectively indistinguishable from you must be part of your reference class. Whether (and which) observers who are subjectively distinguishable get to be part of it will depend on what you're reasoning about.

I can only skim most of this right now, but you're definitely misconstruing what Bostrom has to say about Boltzmann. He does not rely on our having non-qualitative knowledge that we're not Boltzmann brains. Please re-read his stuff: http://anthropic-principle.com/book/anthropicbias.html#5b

0pragmatist9yHuh, you're right. I totally misremembered Bostrom's argument. Reading it now, it doesn't make much sense to me. He moves from the claim that proportionally very few observers would live in low entropy regions as large as ours to the claim that very few observers would observe low entropy regions as large as ours. The former claim is a consequence of Boltzmann's model, but it's not at all obvious that the latter claim is. It would be if we had reason to think that most observers produced by fluctuations would have veridical observations, but why think that is the case? The veridicality of our observations is the product of eons of natural selection. It seems pretty unlikely that a random fluctuation would produce veridical observers. Once we establish this, there's no longer a straightforward inference from "lives in higher entropy region" to "observes higher entropy region". Later, he offers this justification for neglecting "freak observers" (observers whose beliefs about their environment are almost entirely spurious): But this is just false for our current cosmological models. They predict that freak observers predominate (as does Boltzmann's own model). So it seems like Bostrom isn't even really engaging with the actual Boltzmann brain problem. The argument I attribute to him I probably encountered elsewhere. The argument is not uncommon [http://arxiv.org/abs/1008.0808] in the literature. [http://www.sciencedirect.com/science/article/pii/S0370269308010459]
0AlexSchell9yYou're right of course that Bostrom is not engaging with the problem you're focusing on. But the context for discussing Boltzmann's idea seems different from what he says about "freak observers" -- the former is about arguing that the historically accepted objection to Boltzmann is best construed as relying on SSA, whereas the rationale for the latter is best seen in his J Phil piece: http://www.anthropic-principle.com/preprints/cos/big2.pdf. But I'll grant you that his argument about Boltzmann is suboptimally formulated (turns out I remembered it being better than it actually was). However, there is a stronger argument (obvious to me, and maybe charitably attributable to Bostrom-sub-2002) that you seem to be ignoring, based on the notion that SSA can apply even if you can (based on your evidence) exclude certain possibilities about who you are. The argument doesn't need the absurd inference from "lives in higher entropy region" to "observes higher entropy region" but rather needs Boltzmann's model to suggest that "very few observers observe large low-entropy regions". Since most Boltzmann brains have random epistemic states and can't be described as observing anything, the latter sentence is of course true conditional on Boltzmann's model. Most observers don't observe anything (and even the lucky ones who do tend to have tiny-sized low-entropy surroundings), so virtually all observers do not observe large low-entropy regions. Anyhow, if a Boltzmann-brain-swamped scenario is true, a very tiny fraction of observers make our observations, whereas if the world isn't Boltzmann-brain-swamped (e.g. if universes reliably stop existing before entering the de Sitter phase), a much less tiny fraction of observers make our observations. The SSA takeaway from this is that our observations disconfirm Boltzmann-brain-swamped scenarios.
You can of course exclude Boltzmann brains from the reference class, but this doe

Occasionally, there will be chance fluctuations away from equilibrium, creating pockets of low entropy. Life can only develop in these low entropy pockets, so it is no surprise that we find ourselves in such a region, even though it is atypical.

So the idea is that Boltzmann brains would form in smaller fluctuations, while a larger fluctuation would be required to account for us. Since smaller fluctuations are more common, it's more likely that a given brain is a Boltzmann one.

But does this take into account the fact that one large fluctuation can give ... (read more)

2DanArmak9yYour wording implicitly assumes you're not a Boltzmann brain. If you are one, the "us" is an illusion and no larger fluctuation is necessary.
0pragmatist9yThat's because I'm not one, and I know this! Look, even in the Bostrom argument, the prevalence of Boltzmann brains is the basis for rejecting the Boltzmann model. The argument's structure is: This model says that it is highly improbable that I am not a Boltzmann brain. I am in fact not a Boltzmann brain. Therefore, this model is disconfirmed. People seem to be assuming that the problem raised by the possibility of Boltzmann brains is some kind of radical skepticism. But that's not the problem. Maybe some philosophers care about that kind of skepticism, but I don't think it's worth worrying about. The problem is that if a cosmological model predicts that I am a Boltzmann brain, then that model is disconfirmed by the fact that I'm not. And some people claim that our current cosmological models do in fact predict that I am a Boltzmann brain. Everyone in this debate takes it as granted that I am not actually a Boltzmann brain. I'm surprised people here regard this as a controversial premise.
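The disconfirmation argument in the parent comment can be sketched as a Bayesian odds update. All numbers below are illustrative placeholders chosen for the sketch (the priors and likelihoods are assumptions, not taken from any cosmological model):

```python
# Sketch of the argument "the model predicts I am a Boltzmann brain;
# I am not; therefore the model is disconfirmed" as a Bayesian update.
# All numbers are illustrative placeholders, not cosmological estimates.

prior_swamped = 0.5        # prior on a Boltzmann-brain-swamped model
prior_alternative = 0.5    # prior on a model without BB domination

# Likelihood of the datum "my observations are veridical (I am not a BB)"
# under each model; the swamped model makes this astronomically unlikely.
p_veridical_given_swamped = 1e-30      # placeholder for a tiny number
p_veridical_given_alternative = 0.99

# Posterior odds for the swamped model after conditioning on the datum
posterior_odds = (prior_swamped * p_veridical_given_swamped) / (
    prior_alternative * p_veridical_given_alternative
)
print(f"posterior odds for the swamped model: {posterior_odds:.2e}")
```

However small the placeholder likelihood is made, the structure is the same: conditioning on "I am not a Boltzmann brain" drives the posterior odds of the swamped model toward zero.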
1DanArmak9ySee my reply to you elsethread. I also agree with this reply [http://lesswrong.com/lw/cuj/boltzmann_brains_and_anthropic_reference_classes/6r3e] .
0Adele_L9yOh, okay. Is there a good introduction to Boltzmann brains somewhere? I don't seem to understand it very well.
0pragmatist9yThis is a good introduction: http://blogs.discovermagazine.com/cosmicvariance/2006/08/01/boltzmanns-anthropic-brain/ [http://blogs.discovermagazine.com/cosmicvariance/2006/08/01/boltzmanns-anthropic-brain/]
1pragmatist9yYeah, it does. The probabilities involved here are ridiculously unbalanced. The frequency of a fluctuation (assuming ergodicity [http://en.wikipedia.org/wiki/Ergodicity]) is exponentially related to its entropy, so even small differences in entropy correspond to large differences in probability. And the difference in entropy here is itself huge. For comparison, it's been estimated that a fluctuation into our current macroscopic universe would be likelier than a fluctuation into the macroscopic state of the very early universe by a factor of about 10^(10^101). Not sure what you're getting at here. The belief state in the Boltzmann brain wouldn't be caused by some external stable macroscopic object. It's produced by the chance agglomeration of microscopic collisions (in Boltzmann's model).
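The scaling claimed in the parent comment can be made explicit with Boltzmann's relation; this is only a back-of-the-envelope sketch, not a derivation of the cited estimate. With $S = k_B \ln W$, the probability of a fluctuation is proportional to the associated phase-space volume $W = e^{S/k_B}$, so the relative frequency of two fluctuations with entropies $S_1$ and $S_2$ is

$$\frac{P_1}{P_2} \sim e^{(S_1 - S_2)/k_B}.$$

Since macroscopic entropy differences are themselves astronomically large in units of $k_B$, the single exponential produces double-exponential probability ratios of the kind quoted above.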
1pragmatist9yI get why many of my other comments on this post (and the post itself) have been downvoted, but I can't figure out why the parent of this comment has been downvoted. Everything in it is fairly uncontroversial science, as far as I know. Does someone disagree with the claims I make in that comment? If so, I'd like to know! The possibility that I might be saying false things about the science bothers me.
0Adele_L9yOh okay then. I don't think it matters what caused the belief. Just that if it had the same state as your brain, that state would correspond to a brain that observed a place with low entropy.
-2pragmatist9yI'm still having trouble understanding your point. I think there is good reason to think the Boltzmann brain does not in fact have any beliefs. Beliefs, as we understand them, are produced by certain sorts of interactions between brains and environments. The Boltzmann brain's brain states are not attributable to interactions of that sort, so they are not beliefs. Does that help, or am I totally failing to get what you're saying?
1Adele_L9yBut wouldn't a Boltzmann brain understand its "beliefs" the same way, despite them not corresponding to reality?