And the answer is no, you shouldn’t. But the probability space for Technicolor Sleeping Beauty is not talking about probabilities of events happening in this awakening, because most of them are ill-defined for reasons explained in the previous post.

So probability theory can't possibly answer whether I should take free money, got it.

And even if "Blue" means "Blue happens during the experiment", you wouldn't accept worse odds than 1:1 for Blue, even when you see Blue?

No, I mean the Beauty awakes, sees Blue, gets a proposal to bet on Red with 1:1 odds, and you recommend accepting this bet?

You observe outcome “Blue”, which corresponds to the event “Blue or Red”.

So you bet 1:1 on Red after observing this “Blue or Red”?

mathematically sound

*ethically

Utility Instability under Thirdism

Works against Thirdism in the Fissure experiment too.

Technicolor Sleeping Beauty

I mean, if you are going to precommit to the right strategy anyway, why do you even need probability theory? The whole question is how you decide to ignore that P(Heads|Blue) = 1/3 when you chose Red and see Blue. And how is it not "a probabilistic model produces incorrect betting odds" when you need to precommit to ignore it?
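For concreteness, here is a minimal sketch (not from the thread) of the per-awakening count behind that 1/3, under the assumption that each day's room color is drawn independently; the conditional comes out the same if the two days are forced to have different colors:

```python
import random

N = 100_000
blue = blue_heads = 0

for _ in range(N):
    heads = random.random() < 0.5
    days = 1 if heads else 2             # Heads: Monday only; Tails: Monday and Tuesday
    for _ in range(days):
        is_blue = random.random() < 0.5  # assumption: each day's color drawn independently
        if is_blue:
            blue += 1
            blue_heads += heads

print(blue_heads / blue)  # ~0.33: frequency of Heads among Blue awakenings
```

Counted per awakening the number is 1/3; the dispute above is over whether that is the number a bet offered after seeing Blue should track.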

Somehow every time people talk about joints, it turns out to be more about naive intuitions of personal identity than about reality^^.

I don’t see how it is possible in principle. If the Beauty is in the middle of an experiment, how can she start participating in another experiment without breaking the setting of the current one?

If you insist on Monday and Tuesday being in the same week, then by backing up her memory: after each awakening we save her memory and schedule the memory loading and the new experiment for a later free week. Or we can start a new experiment after each awakening and schedule the Tuesdays for later. Does either of these allow you to change your model?

In what sense is she the same person anyway if you treat any waking moment as a different person?

You can treat every memory sequence as a different person.

No, they are not. Events that happen to Beauty on Monday and Tuesday are not mutually exclusive because they are sequential. On Tails, if an awakening happened to her on Monday, it necessarily means that an awakening will happen to her on Tuesday in the same experiment.

But the same argument isn’t applicable to fissure, where awakenings in different Rooms are not sequential and truly are mutually exclusive. If you awaken in Room 1 you definitely are not awakened in Room 2 in this experiment, and vice versa.

I'm not saying the arguments are literally identical.

Your argument is:

  1. The awakening on Tuesday happens always and only after the awakening on Monday.
  2. Therefore !(P(Monday) = 0 & P(Tuesday) = 1) & !(P(Monday) > 0 & P(Tuesday) < 1).
  3. Therefore they are not exclusive.

The argument about copies is:

  1. The awakening in Room 1 always happens and the awakening in Room 2 always happens.
  2. Therefore !(P(Room 1) < 1) & !(P(Room 2) < 1).
  3. Therefore they are not exclusive.

Why doesn't the second one work?
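For reference, the standard criterion both numbered arguments are implicitly appealing to: two events A and B are mutually exclusive iff P(A & B) = 0. On that reading, the first argument says that on Tails P(Monday & Tuesday) = P(Monday) > 0, since the Monday awakening guarantees the Tuesday one; the second says P(Room 1 & Room 2) = 1, since both awakenings always happen. Each concludes non-exclusivity the same way.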

But not all definitions are made equal.

I agree, some are more preferable. Therefore probabilities depend on preferences.

I’m afraid I won’t be able to address your concerns without the specifics. Currently I’m not even sure that they are true. According to Wei Dai in one of the previous comments, our current best theory claims that Everett branches are causally disconnected, and I’m more than happy to stick to that until our theories change.

They are approximately disconnected according to our current best theory. Like your clones in different rooms are approximately disconnected, but still gravitationally influence each other.

You can participate in a thousand fissure experiments in a row and accumulate a list of rooms and coin outcomes corresponding to your experience, and I expect them to fit Lewis's model: 75% of the time you find yourself in Room 1, 50% of the time the coin is Heads.
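A quick sketch of that count (my own, not from the thread), assuming the fissure setup where Heads leaves the single copy in Room 1 and Tails splits you, with your experiential thread equally likely to continue in either room:

```python
import random

N = 100_000
room1 = heads = 0

for _ in range(N):
    h = random.random() < 0.5
    heads += h
    # Heads: the single copy wakes in Room 1 (assumed setup).
    # Tails: you split; follow one thread, equally likely to be either copy.
    room1 += 1 if h else random.random() < 0.5

print(room1 / N, heads / N)  # ~0.75 and ~0.50, as described
```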

I still don't get how it's consistent with your argument about the statistical test. It's not about multiple experiments starting from each copy, right? You would still object to simulating multiple Beauties starting from each awakening as random? And would be OK with simulating multiple Fissures from one original as random?

Because coexistence in space happens separately to different people who are not causally connected, while coexistence in one timeline happens to the same person, whose past and future are causally connected. I really don’t understand why everyone seems to have so much trouble with such an obvious point.

I understand that there is a difference. The trouble is with the justification for why this difference is relevant. Like, you based your modelling of Monday and Tuesday as both happening on how we usually treat events when we use probability theory. But the same justification is even more obvious when both the awakening in Room 1 and the awakening in Room 2 happen simultaneously. Or you say that the Beauty knows that she will be awake both times, so she can't ignore this information. But both copies also know that they both will be awake, so why can they ignore it?

If you participate in a Fissure experiment you do not experience being in two rooms on Tails. You are in only one of the rooms in any case, and another version of you is in the other room when it’s Tails.

Is this what it is all about? It depends on the definition of "you". Under some definitions the Beauty also doesn't experience both days. Are you just saying that the distinction is that no sane human would treat different moments as distinct identities?

Can’t we model interference as separate branches? My QM is a bit rusty; what kind of causal behaviour is implied? It’s not that we can actually jump from one branch to the other.

I don't know the specifics, as usual, but as far as I know, the amplitudes of a branch would be slightly different from what you get by evolving that branch in isolation, because the other branch would also spread everywhere. The point is just that they all exist, so, as you say, why use an imperfect approximation?

Simultaneity of existence has nothing to do with it. Elga’s model is wrong here because, unlike in Sleeping Beauty, learning that you are in Room 1 is evidence for Heads, as you could not be sure to find yourself in Room 1 no matter what. Here Lewis's model seems a better fit.

I meant the experiment where you don't know which room it is, but anyway - wouldn't Lewis's model fail the statistical test, because it doesn't generate both rooms on Tails? I don't get why modeling coexistence in one timeline is necessary, but coexistence in space is not.

What do you mean by "can be correctly approximated as random sampling"? If all souls are instantiated, then Elga’s model still wouldn't pass the statistical test.

Oh, right, I missed that your simulation has 1/3 Heads. Thank you for your patient cooperation in finding mistakes in your arguments, by the way. So why is it OK for a simulation of an outcome with 1/2 probability to have 1/3 frequency? That sounds like a more serious failure of the statistical test.
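For concreteness, the two frequencies at issue, in a minimal sketch (standard Sleeping Beauty arithmetic, not anyone's contested model):

```python
import random

N = 100_000
heads_runs = awakenings = heads_awakenings = 0

for _ in range(N):
    h = random.random() < 0.5
    heads_runs += h
    n = 1 if h else 2            # Heads: one awakening; Tails: two
    awakenings += n
    heads_awakenings += n * h

print(heads_runs / N)                 # ~0.50: Heads frequency per experiment
print(heads_awakenings / awakenings)  # ~0.33: Heads frequency per awakening
```

The question in this subthread is which of these two counts a simulation "of the outcome" is obliged to reproduce.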

Nothing out of the ordinary. The Beauty will generate the list with the same statistical properties. Two lists if the coin is Tails.

I imagined that the Beauty would sample just once. And then if we combine all samples into a list, we will see that if the Beauty uses your model, the list will fail the "have the correct number of days" test.

Which is “Beauty is awakened today, which is Monday”, or simply “Beauty is awakened on Monday”, just as I was saying.

They are not the same thing? The first one is false on Tuesday.

(I'm also interested in your thoughts about copies in another thread).

I agree that this should be said, but there is also actual disagreement about which theory is better.

Getting reliable 20% returns every year is really quite amazingly hard.

Foundations for analogous arguments about future AI systems are not sufficiently understood - I mean, maybe we can get a very capable system that optimises softly, like current systems.

And then the AI companies, if they’re allowed to keep selling those—we have now observed—just brute-RLHF their models into not talking about that. Which means we can’t get any trustworthy observations of what later models would otherwise be thinking, past that point of AI company shenanigans.

Seems to me like the weakest point of all this theory - models not only "don't talk" about wiping out humanity, they don't always kill you, even if you give them (or make them think they have) a real chance. Yes, it's not reliable. But the question is how much we should update from Sydney (which was mostly fixed) versus RLHF mostly working. And whether RLHF actually changes thoughts, or the model is secretly acting benevolent, is an empirical question with different predictions - can't we just look at the weights?

What else does the event “Monday” that has 2/3 probability mean, then?

It means "today is Monday".

I do not understand what you mean here. Beauty is part of the simulation. Nothing prevents any person from running the same code and getting the same results.

I mean, what will happen if Beauty runs the same code? Like you said, "any person" - what if this person is Beauty during the experiment? If we then compare combined statistics, which model will be closer to reality?

Why would it?

My thinking is that then Beauty would experience more Tails and the simulation would have to reproduce that.

How is the definition of knowledge relevant to probability theory? I suppose if someone redefines “knowledge” as “being wrong”, then yes, under such a definition the Beauty should not accept the correct model, but why would we do that?

The point of using probability theory is to be right. That's why your simulations have persuasive power. But a different definition of knowledge may value the average knowledge across Beauty's awake moments instead of the knowledge of an outside observer.
