Anthropic Paradoxes and Self Reference

They are false, yet often treated as valid possibilities in anthropic problems. So in their reasoning, the self-reference "I" has to be changed to someone objectively defined.

SIA is basically just Bayesian updating on existence

SIA is the Bayesian update on knowing your existence (i.e. if they were always going to ask whether dadadarren existed, and get a yes or no answer)

The i.e. part is what I was trying to point out, so I agree with this assessment. The anthropic debate can then be presented as whether I should give the same probability as an outsider who learns of my existence by this process. SIA suggests yes. Halfers in general suggest no.

Here SIA has the advantage of not generating any disagreement when the outsider and I can communicate. We would give the same probability values. Whereas all halfers (not just SSA supporters) must find a way to explain why the outsider and I must give different probabilities even when we share all information, i.e. which aspect of the problem is incommunicable.

SIA is basically just Bayesian updating on existence

SIA is just Bayesian updating on the fact that you exist; this is the same update that an outside observer would make, if informed of your existence.

Actually, it also assumes the outside observer learns of your existence by a certain process, e.g. by randomly sampling all potentially existing observers (in a reference class) and finding you. (If, on the contrary, the outsider learns of your existence by sampling all actual observers, then the update would follow SSA.) Of course, if the outsider defines the reference class by all your definable characteristics, then he would effectively get a reference class of one. But it seems dubious why an outsider would pay attention to this particular set of characteristics in the first place.

Anthropics: different probabilities, different questions

In the classical sleeping beauty problem, if I guess the coin was tails, I will be correct in 50% of the experiments, and in 67% of my guesses.

In this case, how do you define "my guesses"? Does that mean guesses made by the same physical person? That would lead to different answers in the sleeping beauty problem vs. the above cloning/splitting problem.
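The quoted figures can be checked with a quick simulation. This is only a sketch of the standard setup (one awakening on Heads, two on Tails), under the assumed policy of guessing tails at every awakening, scored both per experiment and per guess:

```python
import random

random.seed(0)
n_experiments = 100_000
correct_experiments = 0   # experiments in which the tails guess was right
total_guesses = 0         # one guess per awakening
correct_guesses = 0

for _ in range(n_experiments):
    tails = random.random() < 0.5
    awakenings = 2 if tails else 1   # Tails: woken twice; Heads: once
    total_guesses += awakenings
    if tails:                        # guessing tails every time
        correct_experiments += 1
        correct_guesses += awakenings

print(correct_experiments / n_experiments)  # ≈ 0.50, correct in half the experiments
print(correct_guesses / total_guesses)      # ≈ 0.67, correct in two thirds of the guesses
```

Which of the two ratios counts as "my" success rate is exactly the question of what "my guesses" refers to.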

To translate my position from the previous comment: "my guesses" are primitively clear to me because I have the subjective experience of them. So when faced with the question "would I want most of my copies to be correct (SIA) or my copies to be correct in most universes (SSA)?", I would simply say "I just want myself to be correct." Do you think that is an invalid position?

Put in terms of the classical sleeping beauty problem, the exact duration of the experiment is inconsequential. The two awakenings can be one day apart (as in the usual formulation), a week apart, or an hour apart. It is still the same experiment/problem. I think we can all agree on that.

So imagine I wake up in the experiment. I can enter another iteration of the experiment right away, as long as the second experiment finishes before the potentially incoming memory wipe of the first. For example, let the first experiment have awakenings 1 day apart, the second experiment awakenings 1/2 day apart, the third 1/4 day apart, etc. Theoretically, the experiment can be repeated infinitely many times. (This of course assumes the actual awakenings take insignificant time and the memory wipes happen just before the second awakening.) When repeating the sleeping beauty problem this way, I have a clear track of how many iterations I have entered, what my guesses were in those iterations, and how many of them were right. And that number would approach 50% if I guess tails every time.

In this process, I never have to consider "this awakening" as a member of any reference class. Do you think "keeping the score" this way is invalid?

A Simplified Version of Perspective Solution to the Sleeping Beauty Problem

Sorry for the late reply, didn't check LessWrong for a month. Hope you are still around.

After your red/blue example I realized I was answering too rashly and made a mistake. Somehow I was under the impression the experiment just had multiple awakenings without memory wipes. That was silly of me, because then it wouldn't even be an anthropic problem.

Yes, you are right. With memory wipes there should be no update. The probability of the different toss results should remain at 1/2.

Anthropics: different probabilities, different questions

Didn't check LessWrong for a month, almost missed this post. I have followed your work on anthropics for quite some time. I want to ask a quick question:

An alternate characterisation of the SIA and SSA questions could be to ask, "If I said 'I have X', would I want most of my copies to be correct (SIA) or my copies to be correct in most universes (SSA)?"

What if I just want to be correct, without giving any thought to other copies at all? Do you consider that response invalid?

For some context, consider this example: Tonight during my sleep an alien, Omega, is going to toss a coin. If it lands Tails, it splits me into two halves and completes each half into a full copy by molecular cloning. The process is accurate enough that the resulting copies won't be able to tell. If Heads, Omega won't do anything. After I wake up, Omega would ask one/both copies about the coin toss (guess the result or give a probability).

Now I wake up from the experiment; how should I guess the result? I think the question can be answered without even caring about what the other copy of me (if it exists) thinks. I can participate in this experiment repeatedly and will experience about equal numbers of Heads and Tails.
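A minimal sketch of this repetition argument, under the stated assumption that in each round "I" simply continue as one of the resulting copies, so each round contributes exactly one experienced result:

```python
import random

random.seed(1)
rounds = 100_000
tails_seen = 0

for _ in range(rounds):
    tails = random.random() < 0.5
    # On Tails I am split and both halves are completed into copies;
    # whichever copy "I" continue as, the result I experience this round
    # is still Tails. Copying does not change the per-round frequency.
    if tails:
        tails_seen += 1

print(tails_seen / rounds)  # ≈ 0.50, about equal numbers of Heads and Tails
```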

"Who I am" is an axiom.

If each human makes predictions as if they are a randomly chosen human, then the predictions of humanity as a whole will be well-calibrated. That seems, to me, to be a property we shouldn't ignore.

Very true. I think one major problem with anthropic reasoning is that we tend to treat the combined/average outcome of the group (e.g. all humans) as the expectation of the outcome for an individual. So when there is no strategy for reaching rational conclusions from the individual perspective, we automatically reach for conclusions that, if followed by everyone in the group, would be most beneficial for the whole group.

Then what's the appropriate group? Different choices lead to different conclusions. Thus the debates, most notably SSA vs SIA.

"Who I am" is an axiom.

Explaining "I" as the result of introspection of the "self" seems reasonable. But I think that is a circular definition. Yes, the access to "this" defines "I". But why do I have immediate access to "this" instead of other things? That is the part that can only be taken as given, not further reduced by logic.

For example, I would try to maximize the interests of dadadarren, and you those of shminux. Obviously both are rational. The difference can only be explained by our different reasoning starting points: for me, I am dadadarren; for you, shminux. There is no point in further analyzing our difference to determine which is correct in terms of logic.

"Who I am" is an axiom.

The Doomsday Argument does not depend on any empirical evidence; it is a pure logical deduction. So even if we consider the typical x-risks threatening our existence (climate change, the AI boom, nuclear war, etc.) and thus think our doom is probably coming soon, that still cannot be used as evidence favoring the Doomsday Argument. Because if the Argument is true, we should expect our extinction to be even more imminent, on top of all the x-risks considered.

A Simplified Version of Perspective Solution to the Sleeping Beauty Problem

For the first question, perspective-based reasoning would still give the probability of 2/3 simply because there is no guaranteed awakening in the experiment. So finding myself awake during the experiment is new information even from the first-person perspective, eliminating the possibility of TT.

For the second question, the probability remains at 1/2, since there is no new information.

For either question "the probability of today being the first day" is not meaningful and has no answer.
