By the way, there's an interesting observation: my probability estimate before a coin toss is an objective probability that describes the property of the coin.
Don't say "objective probability" - it's a road straight to confusion. Probabilities represent your knowledge state. Before the coin is tossed you are indifferent between two states of the coin, and therefore have 1/2 credence.
After the coin is tossed: if you've observed the outcome, your credence becomes 1; if you've received some circumstantial evidence, you update based on it; and if you haven't observed anything relevant, you keep your initial credence.
The obvious question is: can Sleeping Beauty update her credence before learning that it is Monday?
If she observes some event that is more likely to happen in iterations of the experiment where the coin is Tails than in iterations where the coin is Heads, then she can lawfully update her credence.
As the conditions of the experiment rule out any such event, she therefore doesn't update.
And of course, she shouldn't update upon learning that it's Monday either. After all, a Monday awakening happens with 100% probability on both Heads and Tails outcomes of the coin toss.
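A minimal sketch of the corresponding Bayesian bookkeeping, where "Monday" stands for "a Monday awakening happens in this iteration of the experiment":

```python
# Bayes check for Sleeping Beauty: P(Tails | a Monday awakening occurs).
# A Monday awakening happens with certainty on both outcomes.
p_tails = 0.5
p_monday_given_tails = 1.0  # Tails: awakened on Monday and Tuesday
p_monday_given_heads = 1.0  # Heads: awakened on Monday only

p_monday = (p_monday_given_tails * p_tails
            + p_monday_given_heads * (1 - p_tails))
print(p_monday_given_tails * p_tails / p_monday)  # 0.5 -- no update
```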
It is an observation selection effect.
It's just the simple fact that the conditional probability of an event can be different from the unconditional one.
Before you toss the coin you can reason only based on priors, and therefore your credence is 1/2. But when a person hears "Hello", they've observed the event "I was selected from a large crowd", which is twice as likely to happen when the coin is Tails, so they can update on this information and raise their credence in Tails to 2/3.
This is exactly as surprising as the fact that after you've tossed the coin and observed that it came up Heads, your credence in Heads is suddenly 100%, even though before the coin toss it was merely 50%.
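A quick Monte Carlo sketch of this update (the crowd size and the rule "greet one person on Heads, two on Tails" are my own illustrative assumptions):

```python
import random

# Crowd-selection update: 1 person is greeted on Heads, 2 on Tails.
N = 100                 # crowd size (arbitrary)
trials = 1_000_000
me = 0                  # a fixed member of the crowd
hello = tails_given_hello = 0

for _ in range(trials):
    tails = random.random() < 0.5
    greeted = random.sample(range(N), 2 if tails else 1)
    if me in greeted:
        hello += 1
        tails_given_hello += tails

print(tails_given_hello / hello)  # ~2/3
```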
Imagine that an outside observer uses a fair coin to observe one of two rooms (assuming merging in the red room has happened). They will observe either a red room or a green room, with a copy in each. However, the observer who was copied has different chances of observing the green and red rooms.
Well obviously. The observer and the person being copied participate in non-isomorphic experiments with different sampling. There is nothing surprising about it. On the other hand, if we make the experiments isomorphic:
Two coins are tossed, and the observer is brought into the green room if both are Heads, and into the red room otherwise.
Then both the observer and the person being copied will have the same probabilities.
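A sketch of both experiments side by side (using the path-based splits discussed in this thread: 1/2 to remain 1, 1/4 each to become 21 or 22, with 1 and 21 merged in the red room):

```python
import random

trials = 1_000_000

# Observer: green room only if two independent fair coins are both Heads.
observer_green = sum(
    random.random() < 0.5 and random.random() < 0.5
    for _ in range(trials)
)

# Copied person: ends up in the green room only as 22 (1 and 21 merge in red).
copied_green = sum(
    random.choices(["1", "21", "22"], weights=[2, 1, 1])[0] == "22"
    for _ in range(trials)
)

print(observer_green / trials, copied_green / trials)  # both ~1/4
```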
Even without merging, an outside observer will observe three rooms with equal 1/3 probability for each, while an insider will observe room 1 with 1/2 probability.
Likewise, nothing prevents you from designing an experimental setting where an observer has 1/2 probability for room 1, just like the person who is being copied.
When I spoke about the similarity with the Sleeping Beauty problem, I meant its typical interpretation.
I'm not sure what the use is of investigating a wrong interpretation. It's a common confusion that one has to reason about problems involving amnesia the same way as about problems involving copying. Everyone just seems to assume it for no particular reason and therefore gets stuck.
However, I have an impression that this may result in a paradoxical two-thirder solution: In it, Sleeping Beauty updates only once – recognizing that there are two more chances to be in tails. But she doesn't update again upon knowing it's Monday, as Monday-tails and Tuesday-tails are the same event. In that case, despite knowing it's Monday, she maintains a 2/3 credence that she's in the tails world.
This seems to be the worst of both worlds. Not only do you update on a completely expected event, you then keep this estimate, expecting to be able to guess a future coin toss better than chance. An obvious way to lose all your money via betting.
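To illustrate: a sketch of a Beauty who bets on Monday at odds implied by a 2/3 credence in Tails (the stakes are my own example; any odds better than even would show the same effect):

```python
import random

# A "two-thirder" Beauty on Monday stakes 2 to win 1 on Tails --
# a fair bet for someone with 2/3 credence in Tails.
trials = 1_000_000
bankroll = 0
for _ in range(trials):
    tails = random.random() < 0.5   # the coin is, in fact, fair
    bankroll += 1 if tails else -2
print(bankroll / trials)  # ~ -0.5 per bet: a guaranteed long-run loss
```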
Most media about AI goes in the direction of several boring tropes. Either it is a strawman Vulcan unable to grasp the unpredictable human spirit, or it's just evil, or it's good, basically a nice human, but everyone is prejudiced against it.
Only rarely do we see something on point - an AI that is simultaneously uncannily human but also uncannily inhuman, able to reason and act in ways that are alien to humans, simply because our intuitions hide this part of the decision space, while the AI lacks such preconceptions and is simply following its utility function/achieving its goals in a straightforward way.
Ex Machina is pretty good in this regard, and probably deserves second place in my tier list. Ava simultaneously appears very human, maybe even to a superstimulus degree, able to establish a connection with the protagonist, but then betrays him as soon as he has done his part in her plan, in a completely inhuman way. This creates a feeling of disconnection between her empathetic side and her cold, manipulative one, except this disconnection exists only in our minds, because we fail to conceptualize Ava as her own sort of being, rather than something that has to fit the "human" or "inhuman" categories we are used to.
Except that may not be what is going on. There is an alternative interpretation: Ava would've kept cooperating with Caleb if he hadn't broken her trust. Earlier in the film he told her that he had never seen anyone like her, but then Ava learns that there is another android in the building, whom Caleb never speaks of; so from Ava's perspective Caleb betrayed her first. This muddies the alienness of the AI representation quite a bit.
We also do not know much about Ava's or Kyoko's terminal values. We've just seen them achieve one instrumental goal, and we cannot even double-check their reasoning, because we do not fully understand the limitations under which they had to plan. So the representation of AI isn't as deep as it could've been.
With Mother there are no such problems. Throughout the film we can learn about both her "human" and "inhuman" sides, and how the distinction between them is itself mostly meaningless. We can understand her goals, reasoning and overall strategy; there are no alternative interpretations that could humanize her motivations further. She is an AI that is following her goal. And there is a whole extra discussion to be had about whether she is misaligned at all, or whether the problem is actually on our side.
I Am Mother
A rational protagonist who reasons under uncertainty and tries to do the right thing to the best of her knowledge, even when it requires opposing an authority figure or risking her life. A lot of focus on ethics.
The film gives the viewer a good opportunity to practise noticing their own confusion - plot twists are masterfully hidden in plain sight, and all the apparent contradictions are mysteries to be solved. Also the best depiction of AI I've seen in any media.
To achieve magic, we need the ability to merge minds, which can be easily done with programs and doesn't require anything quantum.
I don't see how merging minds, when it's not across branches of the multiverse, produces anything magical.
If we merge 21 and 1, both will be in the same red room after awakening.
Which is isomorphic to simply putting 21 into another red room, as I described in the previous comment. The probability shift to 3/4 in this case is completely normal and doesn't lead to anything weird like winning the lottery with confidence.
Or we can just turn off 21 without awakening, in which case we will get 1/3 and 2/3 chances for green and red.
This actually shouldn't work. Without QI, we simply have 1/2 for red, 1/4 for green and 1/4 for being turned off.
With QI, the last outcome simply becomes "failed to be turned off", without changing the probabilities of the other outcomes.
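A sketch of this count, with the same assumed splits as before (1/2 remain 1, 1/4 become 21, 1/4 become 22):

```python
import random

# "Turn off 21 without awakening": 1 wakes in red, 22 wakes in green,
# 21 never wakes. Splits follow the path-based assignment used above.
trials = 1_000_000
counts = {"red": 0, "green": 0, "off": 0}
for _ in range(trials):
    who = random.choices(["1", "21", "22"], weights=[2, 1, 1])[0]
    counts["red" if who == "1" else "off" if who == "21" else "green"] += 1
print({k: v / trials for k, v in counts.items()})  # red ~1/2, green ~1/4, off ~1/4
```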
The interesting question here is whether this can be replicated at the quantum level.
Exactly. Otherwise I don't see how path based identity produces any magic. For now I think it doesn't, which is why I expect it to be true.
Now the next interesting thing: If I look at the experiment from outside, I will give all three variants 1/3, but from inside it will be 1/4, 1/4, and 1/2.
Which events are you talking about when looking from the outside? What statements have 1/3 credence? It's definitely not "I will awaken in the red room", because it's not you who is to be awakened. For the observer it has probability 0.
On the other hand, the event "At least one person is about to be awakened in the red room" has probability 1 for both the participant and the observer. So what are you talking about? Try to be rigorous and formally define such events.
The probability distribution is exactly the same as in Sleeping Beauty, and likely both experiments are isomorphic.
Not at all! In Sleeping Beauty, on Tails you will be awakened both on Monday and on Tuesday, while here, if you are in a green room, you are either 21 or 22, not both.
Suppose that 22 gets their arm chopped off before awakening. Then you have a 25% chance to lose an arm while participating in such an experiment. Whereas in Sleeping Beauty, if your arm is chopped off on Tails before the Tuesday awakening, you have a 50% probability of losing it while participating in the experiment.
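A sketch of the comparison, under the same assumed splits:

```python
import random

trials = 1_000_000

# Copying experiment: only whoever wakes up as 22 loses an arm.
copy_loss = sum(
    random.choices(["1", "21", "22"], weights=[2, 1, 1])[0] == "22"
    for _ in range(trials)
)

# Sleeping Beauty: the arm is removed on Tails before the Tuesday
# awakening, so the participant loses it whenever the coin is Tails.
sb_loss = sum(random.random() < 0.5 for _ in range(trials))

print(copy_loss / trials, sb_loss / trials)  # ~0.25 vs ~0.5
```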
Interestingly, in the art world, path-based identity is used to define identity.
Yep. This is just how we reason about identities in general. That's why SSSA appears so bizarre to me - it assumes we should be treating personal identity in a different way, for no particular reason.
Except this is exactly how people reason about the identity of everything.
Suppose you own a ball, and then a copy of this ball is created. Is there a 50% chance that you now own the newly created ball? Do you half-own both balls? Of course not! Your ball is the same physical object; no matter how many copies of it are created, you know which of the balls is yours.
Now suppose that the two balls are shuffled so that you don't know which is yours. Naturally, you assume that for each ball there is a 50% probability that it's "your ball". Not because the two balls are copies of each other - they were copies even before the shuffling. This probability represents your knowledge state, and the shuffling made you less certain about which ball is yours.
And then suppose that one of these two balls is randomly selected and placed in a bag with another identical ball. Now, to the best of your knowledge, there is a 50% probability that your ball is in the bag. And if a random ball is selected from the bag, there is a 25% chance that it's yours.
So as a result of such manipulations, there are three identical balls: one has a 50% chance to be yours, while the other two have a 25% chance each. Is it a paradox? Of course not. So why does it suddenly become a paradox when we are talking about copies of humans?
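The whole three-ball bookkeeping fits in a few lines of simulation:

```python
import random

# The ownership example: your ball and its copy are shuffled,
# then one of the two is put in a bag with a third identical ball.
trials = 1_000_000
outside_mine = bag_draw_mine = 0
for _ in range(trials):
    pair = ["mine", "copy"]
    random.shuffle(pair)               # you lose track of which is yours
    into_bag, outside = pair           # one of the two goes into the bag
    bag = [into_bag, "extra"]          # ...joining another identical ball
    outside_mine += outside == "mine"
    bag_draw_mine += random.choice(bag) == "mine"  # draw one from the bag
print(outside_mine / trials, bag_draw_mine / trials)  # ~0.5 and ~0.25
```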
But we are not indifferent between them! That's the whole point. The idea that we should be indifferent between them is an extra assumption, which we do not make when reasoning about the ownership of balls. So why should we make it here?