„Whether or not your probability model leads to optimal decision making is the test allowing to falsify it.“

Sure, I don’t deny that. What I am saying is that your probability model doesn’t tell you which probability you have to base a certain decision on. If you can derive a probability from your model and provide a good reason to consider this probability relevant to your decision, your model is not falsified as long as you arrive at the right decision. Suppose a simple experiment in which the experimenter flips a fair coin and you have to guess Heads or Tails, but you are only rewarded for a correct guess if the coin comes up Tails. Then, of course, you should still entertain the unconditional probabilities P(Heads)=P(Tails)=1/2. But this uncertainty is completely irrelevant to your decision. What is relevant is P(Tails/Tails)=1 and P(Heads/Tails)=0, from which you conclude that you should follow the strategy of always guessing Tails. Another way to arrive at this strategy is to calculate expected utilities, setting U(Heads)=0 as you would propose. But this is not the only reasonable solution. It is just a different route of reasoning to take into account the experimental condition that your decision counts only if the coin lands Tails.
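The expected-reward arithmetic of this example can be checked with a small Monte Carlo sketch (the function name and setup are my own illustration, not anything from the discussion):

```python
import random

def reward_rate(always_guess_tails: bool, trials: int = 100_000) -> float:
    """Guess a fair coin; a correct guess is rewarded only if it lands Tails."""
    rewards = 0
    for _ in range(trials):
        tails = random.random() < 0.5      # fair coin
        if tails and always_guess_tails:   # reward requires Tails AND a Tails guess
            rewards += 1
    return rewards / trials

print(reward_rate(True))   # ≈ 0.5: always guessing Tails collects every reward
print(reward_rate(False))  # 0.0: guessing Heads is never rewarded
```

Always guessing Tails captures the full 1/2 reward probability, matching the conclusion drawn from P(Tails/Tails)=1 and P(Heads/Tails)=0.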

„The model says that P(Heads|Red) = 1/3, P(Heads|Blue) = 1/3, but P(Heads|Red or Blue) = 1/2. Which obviously translates into a betting scheme: someone who bets on Tails only when the room is Red wins 2/3 of the time and someone who bets on Tails only when the room is Blue wins 2/3 of the time, while someone who always bets on Tails wins only 1/2 of the time.“

A quick translation of the probabilities is:

P(Heads/Red)=1/3: If your total evidence is Red, then you should entertain probability 1/3 for Heads.

P(Heads/Blue)=1/3: If your total evidence is Blue, then you should entertain probability 1/3 for Heads.

P(Heads/Red or Blue)=1/2: If your total evidence is Red or Blue, which is the case if you know that the room is either red or blue but not which one exactly, you should entertain probability 1/2 for Heads.

If the optimal betting scheme requires you to rely on P(Heads/Red or Blue)=1/2 when receiving the evidence Blue, then the betting scheme demands that you ignore your total evidence. Ignoring total evidence does not necessarily invalidate the probability model, but it certainly needs justification. Otherwise, by strictly following total evidence, your model will also let you run afoul of the Reflection Principle, since you will arrive at probability 1/3 in every single experimental run.
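The per-awakening and per-experiment frequencies quoted above can be checked by simulation. Here is a sketch under my reading of the Technicolor setup (one awakening if Heads, two if Tails, the two days painted different colors chosen at random each run):

```python
import random

def technicolor_frequencies(n: int = 100_000):
    """Estimate P(Heads|Red) per awakening and the win rates of two strategies."""
    heads_red = red_awakenings = 0        # per-awakening counts
    red_bets_won = red_bets_placed = 0    # "bet Tails only in a Red room"
    always_tails_won = 0                  # "always bet Tails" (per experiment)
    for _ in range(n):
        heads = random.random() < 0.5
        monday = random.choice(["Red", "Blue"])
        colors_seen = [monday]
        if not heads:                     # Tails: Tuesday has the other color
            colors_seen.append("Blue" if monday == "Red" else "Red")
        for color in colors_seen:
            if color == "Red":
                red_awakenings += 1
                heads_red += heads
        if "Red" in colors_seen:          # the Red-only bettor bets this run
            red_bets_placed += 1
            red_bets_won += not heads
        always_tails_won += not heads
    return (heads_red / red_awakenings,       # ≈ 1/3
            red_bets_won / red_bets_placed,   # ≈ 2/3
            always_tails_won / n)             # ≈ 1/2
```

The three returned frequencies come out near 1/3, 2/3, and 1/2 respectively, reproducing the numbers from the quoted betting scheme.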

Going one step back: with my translation of the conditional probabilities above, I have made the implicit assumption that the way the agent learns evidence is not biased towards a certain hypothesis. But this is obviously not true for Beauty: due to the memory loss, she is unable to learn the evidence „Red and Blue“ regardless of the coin toss. Combined with her sleeping through Tuesday if Heads, she is going to learn „Red“ and „Blue“ (but not „Red and Blue“) if Tails, while she is only going to learn either „Red“ or „Blue“ if Heads, resulting in a bias towards the Tails hypothesis.

I admit that P(Heads/Red)=P(Heads/Blue)=1/3 but P(Heads/Red or Blue)=1/2 hints at the existence of that information selection bias. However, this is just as little a feature of your model as a flat tire is a feature of your car because it prompts you to fix it. It is not your probability model that guides you to adopt the proper betting strategy by ignoring total evidence. In fact, it is the other way around: your knowledge about the bias guides you to partially dismiss your model. As mentioned above, this does not necessarily invalidate your model, but it shows that directly applying it in certain decision scenarios does not guarantee optimal decisions and can even lead to bad decisions and to violations of the Reflection Principle.

Therefore, as a halfer, I would prefer an updating rule that takes the bias into account and tells me P(Heads/Red)=P(Heads/Blue)=P(Heads/Red or Blue)=1/2, while offering me the possibility of a workaround to arrive at your betting scheme. One possible workaround is that Beauty runs a simulation of another experiment within her original Technicolor experiment, in which she is only awoken in a Red room. She can easily simulate that, and the same updating rule that tells her P(Heads/Red)=1/2 for the original experiment tells her P(Heads/Red)=1/3 for the simulated experiment.
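The arithmetic of that simulated sub-experiment can be written out exactly (the color mechanics below follow my reading of the setup: if Heads the single awakening is Red with probability 1/2, if Tails exactly one of the two awakenings is Red):

```python
from fractions import Fraction

half = Fraction(1, 2)
# Beauty counts only awakenings that happen in a Red room.
p_red_and_heads = half * half   # Heads, and the single day happens to be Red
p_red_and_tails = half * 1      # Tails: exactly one of the two days is Red
p_heads_given_red = p_red_and_heads / (p_red_and_heads + p_red_and_tails)
print(p_heads_given_red)  # 1/3
```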

„This leads to a conclusion that observing event "Red" instead of "Red or Blue" is possible only for someone who has been expecting to observe event "Red" in particular. Likewise, observing HTHHTTHT is possible for a person who was expecting this particular sequence of coin tosses, instead of any combination with length 8.  See Another Non-Anthropic Paradox: The Unsurprising Rareness of Rare Events“

I have already refuted this way of reasoning in the comments of your post.

Honestly, I do not see any unlawful reasoning going on here. First of all, it is certainly important to distinguish between a probability model and a strategy. The job of a probability model is simply to suggest the probability of certain events and to describe how probabilities are affected by the realization of other events. The job of a strategy, on the other hand, is to guide decision making towards certain predefined goals.

My point is that the probabilities a model suggests on the basis of the currently available evidence do NOT necessarily match the probabilities that are relevant to your strategy and decisions. If Beauty is awake and doesn’t know whether today is the day her bet counts, it is in fact a rational strategy to behave and decide as if her bet counts today. If she knows that her bet only counts on Monday and her probability model suggests that „Today is Monday“ is relevant for H, then ideal rationality requires her to base her decision on P(H/Monday), because she knows that Monday is realized when her decision counts. This guarantees that on her Monday awakening, when her decision counts, she is calculating the probability of Heads based on all relevant evidence realized on that day.

It is true that the thirder model does not suggest such a strategy, but suggesting strategies, and therefore suggesting which probabilities are relevant for decisions, is not the job of a probability model anyway. The case of Technicolor Beauty is similar: the strategy „only update if Red“ is neither suggested nor hinted at by your model. All your model suggests are probabilities conditional on the realization of certain events. It can’t tell you to treat the observation „Red room“ as a realization of the event „There is an awakening in a red room“ while treating the observation „Blue room“ merely as a realization of the event „There is an awakening in a red or a blue room“ instead of „There is an awakening in a blue room“. The observation of a blue room is always a realization of both of these events, and it is your strategy of „tracking Red“, not your probability model, that suggests preferring one over the other as the relevant evidence for calculating your probabilities. I had been thinking this over for a while after I recently discovered this „update only if Red“ strategy for myself, wondering how it could be directly derived from the halfer model. But I honestly see no better justification for applying it than the plain fact that it proves to be more successful in the long run.

Sure, if the bet is offered only once per experiment, Beauty receives new evidence (from a thirder‘s perspective) and she could update.

In case the bet is offered on every awakening: do you mean that if she gives conflicting answers on Monday and Tuesday, the bet is nevertheless regarded as accepted?

My initial idea was that if, for example, only her Monday answer counts and Beauty knows that, she could reason that when her answer counts it is Monday, arriving at the conclusion that it is reasonable to act as if it were Monday on every awakening, thus grounding her answer on P(H/Monday)=1/2. The same logic holds for the rules „last awakening counts“ and „random awakening counts“.

Rules for per-experiment betting seem to be imprecise. What exactly does it mean that Beauty can bet only once per experiment? Does it mean that she is offered the bet only once in case of Tails? If so, is she offered the bet on Monday or Tuesday, or is the day randomly selected? Or does it mean that she is offered the bet on both Monday and Tuesday and only one bet counts if she accepts both? If so, which one: the Monday bet, the Tuesday bet, or a randomly selected one?

Depending on the answer, a Thirder could base his decision on:

P(H/Today is Monday)=1/2, P(H/Today is my last awakening)=1/2, or P(H/Today is the randomly selected day my bet counts/is offered to me)=1/2

and would therefore escape utility instability?
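For the first of these conditionals, even the thirder model returns 1/2 once „Today is Monday“ is conditioned on. A small exact check (the credence table is the usual thirder assignment, formalized here as my own illustration):

```python
from fractions import Fraction

# The thirder assigns equal credence to the three possible awakenings.
credence = {("Heads", "Monday"): Fraction(1, 3),
            ("Tails", "Monday"): Fraction(1, 3),
            ("Tails", "Tuesday"): Fraction(1, 3)}
p_monday = credence[("Heads", "Monday")] + credence[("Tails", "Monday")]
p_heads_given_monday = credence[("Heads", "Monday")] / p_monday
print(p_heads_given_monday)  # 1/2
```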

Maybe I expressed myself somewhat misleadingly. I am not saying that she is surprised because the coincidence is more unlikely than the sequence. You are absolutely right in correcting me that the latter isn’t even the case (also since P(HHTHTHHT/„HHTHTHHT“)=P(HHTHTHHT)=1/2^8). What I was trying to say is that her surprise about the coincidence arises from the circumstance that the coincidence is both unlikely and looks like a pattern. The fact that an event is unlikely is a necessary condition for being surprised about its occurrence, but not a sufficient one.
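The equality in parentheses is just the uniform measure on toss sequences; as a one-line check:

```python
from fractions import Fraction

# Eight independent fair tosses: any fixed sequence, e.g. HHTHTHHT,
# has probability (1/2)^8, whether or not someone announced it beforehand.
p_sequence = Fraction(1, 2) ** 8
print(p_sequence)  # 1/256
```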

I agree with you when you say that how we structure our perception of the world is biased in some way towards what we are „tracking“ in our minds. And I also agree that this bias could be mathematically modelled by the event spaces you are proposing. But I would not go so far as to say that we only observe events we are currently tracking (please let me know if I misread you or you feel strawmanned here, since it is absolutely not my intention to annoy you!). If this were true, then we could not observe the event „any other coin sequence“ either, since that event is by definition not being tracked. In fact, in order to detect a correspondence between a coin sequence that we have in mind and the actual sequence, our brain has to compare them to decide whether there is a match. I can hardly imagine how this comparison could work without observing the specific actual sequence in the first place. That we classify and perceive a specific sequence as „any other sequence“ can be the result of the comparison, but it is not its starting point.

In conclusion, I do not see a contradiction in not being surprised to observe an extremely unlikely event.

Yes. Our human mind is obviously biased towards detecting patterns, and people tend to react surprised if they observe patterns where they did not expect to find them. If someone has a specific sequence of coin toss results in mind (e.g. „HHTHTHHT“) and she is able to reproduce it with an actual coin on her first try, then she will likely be surprised. What she is really surprised about, however, is not that she has observed an unlikely event ({HHTHTHHT}), but that she has observed an unexpected pattern. In this case, the coincidence of the sequence she had in mind and the sequence produced by the coin tosses constitutes a symmetry which our mind readily detects and classifies as such a pattern. One could also say that she has not just observed the event {HHTHTHHT} alone, but also the coincidence, which can be regarded as an event, too. Both events, the actual coin toss sequence and the coincidence, are unlikely, and both become extremely unlikely with longer sequences. My reasoning is that the coincidence is not more surprising than the actual sequence because it is an even more unlikely event; though both events are unlikely and therefore unexpected, the coincidence is more surprising to us simply because it looks like a pattern.