Wiki Contributions


The more I think about anthropics, the more I realize there is no rational theory of anthropic binding. For the question "what is the probability that I am the heavy brain?" there really isn't a rational answer.

This experimental outcome will not produce a disagreement between Alice and Bob, as long as they follow the same anthropic logic.

When we say Bob's chance of survival is 100% according to MWI, the statement is made from a god's-eye view surveying all post-experiment worlds: Bob will surely survive in one or some of the branches.

By the same logic, from the same god's-eye view, we can say that if MWI is correct, Alice will surely meet Bob in one or some of the branches.

By saying Alice will see Bob with a 0.1% chance regardless of whether MWI is correct, you are talking about a specific Alice's first-person perspective. Under MWI this is a self-locating probability, as in "what is the probability that I am the Alice in the branch where Bob survives?"

Taking the specific subject's perspective, Bob's chance of survival is also 0.1% according to MWI, as in "what is the probability that I am in a branch where Bob survives?"

As long as their reasoning is held at the same level, their answers will be the same.
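The two levels can be sketched side by side. This is a minimal toy model, not anyone's actual argument: the branch weights (0.1% survival) are the illustrative numbers from the discussion above.

```python
# Toy branching model of the experiment, assuming a 0.1% survival weight.
branches = [
    {"bob_alive": True,  "weight": 0.001},
    {"bob_alive": False, "weight": 0.999},
]

# God's-eye view: is there some post-experiment branch containing a
# surviving Bob? Under MWI the answer is simply yes.
exists_survivor = any(b["bob_alive"] for b in branches)

# First-person (self-locating) view: the probability that *I* am in
# the branch where Bob survives -- the same 0.1% for Alice and Bob.
p_self_locate = sum(b["weight"] for b in branches if b["bob_alive"])

print(exists_survivor)  # True
print(p_self_locate)    # 0.001
```

Both Alice and Bob get `True` at the god's-eye level and `0.001` at the self-locating level, which is the sense in which same-level reasoning yields the same answers.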

The real kicker is whether or not they should actually increase their confidence in MWI after the experiment ends (especially in the case where Bob survives). The popular anthropic camps such as SIA seem to say yes. But that would mean any quantum event, no matter the outcome, would be evidence favouring MWI. So an armchair philosopher could say with categorical confidence that MWI is correct. (This is essentially the same problem as Nick Bostrom's Presumptuous Philosopher, but in the quantum worlds.) So SIA supporters and Thirders have been trying to argue that their positions do not necessarily lead to such an update (which they call the naive confirmation of the MWI). Whether or not that defence is successful is up for debate. For more information, I recommend the papers by Darren Bradley and Alastair Wilson.
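The naive-confirmation worry can be made concrete with a one-line Bayesian update. The numbers here are pure assumptions for illustration (an arbitrary 50% prior on MWI, and the 0.1% survival chance from the example); the SIA-style reading that "I observe my survival" has likelihood 1 under MWI is exactly the move the defenders dispute.

```python
# Illustrative Bayes update behind the "naive confirmation" worry.
prior_mwi = 0.5               # assumed prior, purely for illustration
p_survive_single_world = 0.001

# SIA-style reading: under MWI some branch with a surviving observer
# is guaranteed, so the likelihood of "I observe my survival" is 1.
likelihood_mwi = 1.0
likelihood_single = p_survive_single_world

posterior_mwi = (likelihood_mwi * prior_mwi) / (
    likelihood_mwi * prior_mwi + likelihood_single * (1 - prior_mwi)
)
print(round(posterior_mwi, 3))  # ≈ 0.999
```

On this reading, one risky quantum event pushes the posterior to near-certainty in MWI, which is why the update looks presumptuous.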

On the other hand, if you think finding oneself existing is a logical truth, and thus has 100% probability, then it is possible to produce a disagreement contrary to Aumann's Agreement Theorem. The disagreement is valid and can be logically explained; I have discussed it here. I think this is the correct anthropic reasoning. However, this idea does not recognize self-locating probability and is thus fundamentally incompatible with the MWI. Therefore, if Alice and Bob both favour this type of anthropic reasoning, they would still have the same confidence in the validity of MWI: 0%.

Try this for practice: reasoning purely objectively and physically, can you recreate anthropic paradoxes such as the Sleeping Beauty Problem?

That means without resorting to any particular first-person perspective, without using words such as "I", "now", or "here", and without putting them in a unique logical position.

One way to understand the anthropic debate is to consider the competing camps as different ways of interpreting the indexicals (such as "I", "now", "today", "our generation", etc.) in probability calculations, all of which are based on the first-person perspective. Furthermore, there is the looming question of what should be considered an observer, which lacks any logical indicator unless we bring in the concept of consciousness.

We can easily make the Sleeping Beauty Problem more undefined, for example by asking "Is the day Monday?". Before attempting to answer it, one would have to ask: "which day exactly are we talking about?". Compare that question to "is today Monday?": the latter is obviously more defined. Even though "now" or "today" picks out no physical feature, we inherently think the latter question is clear because we can imagine being in Beauty's perspective as she wakes up during the experiment: "today" is the day most closely connected to the first-person experience.
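The ambiguity can be shown by simulation: the answer to "is the day Monday?" depends entirely on which sampling scheme you stipulate for "the day". This is a hedged sketch of the standard protocol (heads: one Monday awakening; tails: Monday and Tuesday awakenings), not an argument for either camp.

```python
import random

random.seed(0)
N = 100_000
mon_awakenings = total_awakenings = 0  # scheme 1: count every awakening
per_run_mon = 0                        # scheme 2: one day sampled per run

for _ in range(N):
    # Heads -> Beauty wakes only on Monday; tails -> Monday and Tuesday.
    days = ["Mon"] if random.random() < 0.5 else ["Mon", "Tue"]
    mon_awakenings += days.count("Mon")
    total_awakenings += len(days)
    per_run_mon += random.choice(days) == "Mon"

print(mon_awakenings / total_awakenings)  # ≈ 0.667: "the day" = a random awakening
print(per_run_mon / N)                    # ≈ 0.75:  "the day" = one day per run
```

Neither number is wrong; they answer different questions, which is exactly why "Is the day Monday?" is underdefined until the indexical is cashed out.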

The two are incompatible. Anthropic reasoning makes explicit use of first-person experience in its question formulation, e.g. in the Sleeping Beauty Problem, "what is the probability that now is the first awakening?" or "is today Monday?" The meanings of "now" and "today" are considered apparent, based on their immediacy to subjective experience, just as which person "I" am is inherently obvious from first-person experience. Denying first-person experience would make anthropic problems undefined.

Another example is the Doomsday Argument, which says my birth rank, or the current generation's birth rank, is evidence for doom-soon. Without a first-person experience, it would be unclear who "me" or "the current generation" refers to.

Whether computer-simulated minds or people from other universes (or beyond the event horizon, in this post) have subjective experiences is essentially the reference-class problem: whether the category of observers that "I could be" in anthropic arguments should include them.

I have a major problem with this "observation selection" type of anthropic reasoning, which is pretty much all that ever gets discussed, such as SSA, SIA, and their variants. In my opinion, there isn't any valid reference class. Each person's perspective, e.g. who I am, when now is, etc., is primitive, not something to be explained by reasoning. There is no logical explanation or deduction for it. The first person is unique, and subjective experience is intrinsic to the first-person perspective only.

We can all imagine thinking from other people's perspectives. Do you think it is ethically relevant to reason from the perspective of a simulated mind? If so, then you should consider it conscious; otherwise, it is not. But since perspectives are primitive, these types of questions can only be answered by stipulation, not as the conclusion of some carefully conducted reasoning. Rationality cannot provide any answer here.

For what it's worth, I think there needs to be some clarification.

I didn't say our model is deterministic, nor did I say whether it should be. And my argument is not about whether the correct definition of knowledge is "justified true belief". Unless I have the wrong impression, I don't think Sean Carroll's focus is on the definition of knowledge either. Instead, it's about what should be considered "true".

The usual idea that a theory is true if it faithfully describes an underlying objective physical reality (deterministic or not) is problematic. It suffers the same pitfall as believing I am a Boltzmann brain. This is due to the dilemma that theories are produced and evaluated by worldly objects, while their truth ought to be judged from "a view from nowhere", a fundamentally objective perspective.

Start reasoning by recognizing that I am a particular agent, and you will not have this problem. I don't deny that; in fact, I think that is the solution to many paradoxes. But the majority of people would start reasoning from the "view from nowhere" and regard that as the only way. I think that is what has led people astray in many problems: decision paradoxes such as Newcomb's, anthropics, and to a degree, quantum interpretations.

Not intentional, but I didn't expect it to be a novel argument either. I suspect everyone has thought about it at some point in their life, likely while learning physics in secondary school. I just think "cognitive instability" is a nice handle for the discussion.

I really like "starting with being an agent". In fact, I have strongly argued for it. But the reality is that people often forgo this, regard the "view from nowhere" as the foundation, and attempt to draw the map from that perspective (anthropics being the prime example, IMO). Allowing this switch of viewpoints, there is no way to say whether "the internal model for decision-making" really "reflects the universe", e.g. the debate over whether quantum states are epistemological or ontological.

Even the idea of "decision" is challenged when the decision-maker is physically analyzed. We won't say water decides to flow toward lower places. Looking at a decision-maker physically, where is the sense of decision in his actions? I think that's why "decision-making" problems like Newcomb's and the Twin Prisoner's Dilemma are paradoxical: they ask what the decision-maker would do, appealing to the introspective sense of choice, while also making sure that physically analyzing the decision-maker is part of the problem.

That's quite alright, none taken. All I was getting at was that a uniquely "physically real" analysis is actually an additional assumption.
