Radford Neal


Well, given that the text of the US constitution seems to clearly state that all powers not explicitly granted to the federal government belong to the states (or the people), I don't see how "power is devolved to the states from the federal government". It seems that the states don't need to wait for the federal government to "devolve" power to them in order to do something. As indeed we saw recently with respect to covid policy.

You could argue that the federal government "lets" the states do this, in the sense that the federal government has more guns than the states, and hence could stop them if it wanted to. But this would be naive. These guns are operated by people, whose loyalty to the federal government if there were a conflict would not be automatic.

the constitution is quite clear that power is devolved to the states from the federal government, and not that the federal government is granted power at the behest of the states

The 10th amendment to the US constitution says:

The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.

which sounds like the opposite of what you say.  Of course, practice may be different.

Can you tell me why?

I think if we encountered aliens who were apparently not hostile, but presumably strange, and likely disgusting or disturbing in some ways, there would be three groups (likely overlapping) of people opposed to wiping them out:

  • Those who see wiping them out as morally wrong.
  • Those who see wiping them out as imprudent - we might fail, and then they wipe us out, or other aliens now see us as dangerous, and wipe us out.
  • Those who see wiping them out as not profitable - better to trade with them.

There would also be three groups in favour of wiping them out:

  • Those who see wiping them out as morally good - better if the universe doesn't have such disgusting beings.
  • Those who see wiping them out as the prudent thing to do - wipe them out before they change their mind and do that to us.
  • Those who see wiping them out as profitable - then we can grab their resources.

I think it's clear that people with all these views will exist, in non-negligible numbers. I think there's at least a 5% chance that the "don't wipe them out" people prevail.

Subgroups of our species are also actively wiping out other subgroups of our species they don't like.

Yes, but that's not how interactions between groups of humans always turn out. 

We didn't really wipe out the Neanderthals (assuming we even were a factor, rather than climate, disease, etc.), seeing as they are among our ancestors.

We are a species that has evolved in competition with other species.  Yet, I think there is at least a 5% chance that, if we encountered an intelligent alien species, we wouldn't try to wipe them out (unless they were trying to wipe us out).

Biological evolution of us and aliens would in itself be a commonality, that might produce some common values, whereas there need be no common values with an AI created by a much different process and not successfully aligned.

One problem I have with Diamond's theory is that I doubt that there is anything for it to explain.  The Americas and Eurasia/Africa were essentially isolated from each other for about 15,000 years.  In 1500 AD, the Americas were roughly 3500 years less advanced than Eurasia/Africa.  That seems well within the random variation one would expect between two isolated instances of human cultural development over a 15,000 year time span.  If you think there is still some remaining indication that the Americas were disadvantaged, the fact that the Americas are about half the size of Eurasia/Africa seems like a sufficient explanation.

Perhaps you could give the definition you would use for the word "probability".

I define it as one's personal degree of belief in a proposition, at the time the judgement of probability is being made. It has meaning only insofar as it is (or may be) used to make a decision, or is part of a general world model that is itself meaningful.  (For example, we might assign a probability to Jupiter having a solid core, even though that makes no difference to anything we plan to do, because that proposition is part of an overall theory of physics that is meaningful.)

Frequentist ideas about probability being related to the proportion of times that an event occurs in repetitions of a scenario are not part of this definition, so the question of what denominator to use does not arise. (Looking at frequentist concepts can sometimes be a useful sanity check on whether probability judgements make sense, but if there's some conflict between frequentist and Bayesian results, the solution is to re-examine the Bayesian results, to see if you made a mistake, or to understand why the frequentist results don't actually contradict the Bayesian result.)

If you make the right probability judgements, you are supposed to make the right decision, if you correctly apply decision theory. And Beauty does make the right decision in all the Sleeping Beauty scenarios if she judges that P(Heads)=1/3 when woken before Wednesday. She doesn't make the right decision if she judges that P(Heads)=1/2. I emphasize that this is so for all the scenarios. Beauty doesn't have to ask herself, "what denominator should I be using?". P(Heads)=1/3 gives the right answer every time.
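
As the kind of frequentist sanity check mentioned above, here is a minimal simulation (a sketch of my own, assuming the standard protocol of one awakening on Heads and two on Tails) that counts how often an awakening follows a Heads flip in repeated runs of the experiment:

```python
import random

def heads_fraction_per_awakening(n_trials=1_000_000, seed=0):
    """Fraction of awakenings that follow a Heads flip, under the standard
    Sleeping Beauty protocol: one awakening on Heads, two on Tails."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n_trials):
        heads = rng.random() < 0.5
        wakings = 1 if heads else 2
        total_awakenings += wakings
        if heads:
            heads_awakenings += wakings
    return heads_awakenings / total_awakenings

print(heads_fraction_per_awakening())   # close to 1/3
```

The per-awakening relative frequency of Heads comes out near 1/3. This is only the sanity check mentioned above, not an argument by itself, but it is at least consistent with the Thirder judgement.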

Another very useful property of probability judgements is that they can be used for multiple decisions, without change. Suppose, for example, that in the GWYD or GRYL scenarios, in addition to trying not to die, Beauty is also interested in muffins.

Specifically, she knows from the start that whenever she wakes up there will be a plate of freshly-baked muffins on her side table, purchased from the cafe down the road. She knows this cafe well, and in particular knows that (a) their muffins are always very delicious, and (b) on Tuesdays, but not Mondays, the person who bakes the muffins adds an ingredient that gives her a stomach ache 10 minutes after eating a muffin. Balancing these utilities, she decides to eat the muffins if the probability of it being Tuesday is less than 30%. If Beauty is a Thirder, she will judge the probability of Tuesday to be 1/3, and refrain from eating the muffins, but if Beauty is a Halfer, she will (I think, trying to pretend I'm a halfer) think the probability of Tuesday is 1/4, and eat the muffins.

The point here is not so much which decision is correct (though of course I think the Thirder decision is right), but that whatever the right decision is, it shouldn't depend on whether Beauty is in the GWYD or GRYL scenario. She shouldn't be considering "denominators".
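
As a concrete check on the muffin example, here is a sketch of the decision calculation. The comment doesn't give Beauty's exact utilities, so the numbers below (+3 for a muffin, -10 for a stomach ache) are hypothetical, chosen only because they reproduce the 30% threshold; the simulation at the end just looks at the long-run outcome of an "always eat" policy.

```python
import random

# Hypothetical utilities chosen to reproduce the 30% threshold:
# eating pays off iff 3 - 10 * P(Tuesday) > 0, i.e. P(Tuesday) < 0.3.
MUFFIN, ACHE = 3.0, -10.0

for label, p_tuesday in [("Thirder", 1/3), ("Halfer", 1/4)]:
    eu_eat = MUFFIN + ACHE * p_tuesday
    print(f"{label}: P(Tuesday)={p_tuesday:.3f}, EU(eat)={eu_eat:+.2f}, "
          f"eats: {'yes' if eu_eat > 0 else 'no'}")

# Long-run check: if Beauty eats at every awakening, what fraction of the
# muffins she eats are the ache-inducing Tuesday ones?
rng = random.Random(0)
tuesday_eaten = eaten = 0
for _ in range(1_000_000):
    if rng.random() < 0.5:   # Heads: woken on Monday only
        eaten += 1
    else:                    # Tails: woken on Monday and Tuesday
        eaten += 2
        tuesday_eaten += 1
print("ache-inducing fraction among eaten muffins:", tuesday_eaten / eaten)
```

On these assumed utilities the Thirder declines and the Halfer eats, and about a third of the muffins an always-eating Beauty consumes turn out to be Tuesday ones, which is above the 30% threshold.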

I think we actually have two quantities:

"Quobability" - The frequency of correct guesses made divided by the total number of guesses made.

"Srobability" - The frequency of trials in which the correct guess was made, divided by the number of trials.

Quobability is 1/3, Srobability is 1/2. "Probability" is (I think) an under-precise term that could mean either of the two.

I suspect that the real problem isn't with the word "probability", but rather the word "guess". In everyday usage, we use "guess" when the aim is to guess correctly. But the aim here is to not die. 

Suppose we rephrase the GRYL scenario to say that Beauty at each awakening takes one of two actions - "action H" or "action T". If the coin lands Heads, and Beauty takes action H the one time she is woken, then she lives (if she instead takes action T, she dies). If the coin lands Tails, and Beauty takes action T at least one of the two times she is woken, then she lives (if she takes action H both times, she dies).

Having eliminated the word "guess", why would one think that Beauty's use of the strategy of randomly taking action H or action T with equal probabilities implies that she must have P(Heads)=1/2? As I've shown above, that strategy is actually only compatible with her belief being that P(Heads)=1/3.

Note that in general, the "action space" for a decision theory problem need not be the same as the "state space". One might, for example, have some uncertain information about what day of the week it is (7 possibilities) and on that basis decide whether to order pepperoni, anchovy, or ham pizza (3 possibilities).  (You know that different people, with different skills, usually make the pizza on different days.)  So if for some reason you randomized your choice of action, it would certainly not say anything directly about your probabilities for the different days of the week.
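
A tiny sketch of that distinction (all numbers hypothetical): the probabilities in a randomized policy over actions live in a different space from the probabilities over states, and observing the former tells you essentially nothing about the latter.

```python
import random

# Hypothetical (non-uniform) belief over which day of the week it is: 7 states.
day_belief = {"Mon": 0.30, "Tue": 0.05, "Wed": 0.05, "Thu": 0.10,
              "Fri": 0.20, "Sat": 0.20, "Sun": 0.10}

# A randomized policy over 3 actions (which pizza to order).  These action
# probabilities are a free choice of the decision-maker; nothing forces them
# to mirror the state probabilities above, or even to have the same number
# of entries.
pizza_policy = {"pepperoni": 0.5, "anchovy": 0.2, "ham": 0.3}

action = random.choices(list(pizza_policy), weights=list(pizza_policy.values()))[0]
print("pizza ordered:", action)
```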

By "GWYL" do you actually mean "GRYL" (ie, Guess Right You Live)?

One could argue that if the coin is flicked and comes up tails then we have both "Tails&Monday" and "Tails&Tuesday" as both being correct, sequentially.

Yes, it is a commonplace occurrence that "Today is Monday" and "Today is Tuesday" can both be true, on different days. This doesn't ordinarily prevent people from assigning probabilities to statements like "Today is Monday", when they happen to not remember for sure whether it is Monday or not now. And the situation is the same for Beauty - it is either Monday or Tuesday, she doesn't know which, but she could find out if she just left the room and asked some passerby. Whether it is Monday or Tuesday is an aspect of the external world, which one normally regards as objectively existing regardless of one's knowledge of it. 

All this is painfully obvious. I think it's not obvious to you because you don't accept that Beauty is a human being, not a python program. Note also that Beauty's experience on Monday is not the same as on Tuesday (if she is woken). Actual human beings don't have exactly the same experiences on two different days, even if they have memory issues. The problem setup specifies only that these differences aren't informative about whether it's Monday or Tuesday.

What do you mean by "probability"?

I'm using "probability" in the subjective Bayesian sense of "degree of belief". Since the question in the Sleeping Beauty problem is what probability of Heads should Beauty have when awoken, I can't see how any other interpretation would address the question asked. Note that these subjective degree-of-belief probabilities are intended to be a useful guide to decision-making. If they lead one to make clearly bad decisions, they must be wrong.

Consider two very extreme cases of the sleeping beauty game:

  • Guess wrong and you die! (GWYD)
  • Guess right and you live! (GRYL)

If we look at the GWYD and GRYL scenarios you describe, we can, using an "outside" view, see what the optimal strategies are, based on how frequently Beauty survives in repeated instances of the problem. To see whether Beauty's subjective probability of Heads should be 1/2 or 1/3, we can ask whether, after working out an optimal strategy beforehand, Beauty will change her mind after waking up: she judges that P(Heads) is either 1/2 or 1/3, and then uses that judgement to decide whether or not to follow the original strategy. If Beauty's judgement of P(Heads) leads to her abandoning the optimal strategy, there must be something wrong with that P(Heads).

For GWYD, both the strategy of deterministically guessing Heads on all awakenings and the strategy of deterministically guessing Tails on all awakenings will give a survival probability of 1/2, which is optimal (I'll omit the proof of this).

Suppose Beauty decides ahead of time to deterministically guess Tails. Will she change her mind and guess Heads instead when she wakes up?

Suppose that Beauty thinks that P(Heads)=1/3 upon wakening. She will then think that if she guesses Heads, her probability of surviving is P(Heads)=1/3. If instead, she guesses Tails, she thinks her probability of surviving is P(Tails & on other wakening she also guesses Tails), which is 2/3 if she is sure to follow the original plan in her other wakening, and is greater than 1/3 as long as she's more likely than not to follow the original plan on her other wakening. So unless Beauty thinks she will do something perverse on her other wakening, she should think that following the original plan and guessing Tails is her best action.

Now suppose that Beauty thinks that P(Heads)=1/2 upon wakening. She will then think that if she guesses Heads, her probability of surviving is P(Heads)=1/2. If instead, she guesses Tails, she thinks her probability of surviving is P(Tails & on other wakening she also guesses Tails), which is 1/2 if she is sure to follow the original plan in her other wakening, and less than 1/2 if there is any chance that on her other wakening she doesn't follow the original plan. Since Beauty is a human being, who at least once in a while does something strange or mistaken, the probability that she won't follow the plan on her other wakening is surely not zero, so Beauty will judge her survival probability to be greater if she guesses Heads than if she guesses Tails, and abandon the original plan of guessing Tails.

Now, if Beauty thinks P(Heads)=1/2 and then always reasons in this way on wakening, then things turn out OK - despite having originally planned to always guess Tails, she actually always guesses Heads. But if there is a non-negligible chance that she follows the original plan without thinking much, she will end up dead more than half the time.

So if Beauty thinks P(Heads)=1/3, she does the right thing, but if Beauty thinks P(Heads)=1/2, she maybe does the right thing, but not really reliably.
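
A Monte Carlo sketch of the GWYD comparison (my own code; the 20% "lapse" rate is an arbitrary stand-in for the chance that Beauty follows the original plan without thinking):

```python
import random

def gwyd_survival(guess_fn, n_trials=500_000, seed=0):
    """Guess-Wrong-You-Die: Beauty survives only if every guess she makes
    during the experiment matches the coin."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_trials):
        coin = "H" if rng.random() < 0.5 else "T"
        wakings = 1 if coin == "H" else 2
        if all(guess_fn(rng) == coin for _ in range(wakings)):
            survived += 1
    return survived / n_trials

always_tails = lambda rng: "T"
always_heads = lambda rng: "H"

def halfer_with_lapses(rng, lapse=0.2):
    """A Halfer who re-derives 'guess Heads' on each awakening, but with
    probability `lapse` follows the original 'guess Tails' plan unthinkingly."""
    return "T" if rng.random() < lapse else "H"

print("always Tails      :", gwyd_survival(always_tails))        # about 0.50
print("always Heads      :", gwyd_survival(always_heads))        # about 0.50
print("Halfer with lapses:", gwyd_survival(halfer_with_lapses))  # about 0.42
```

With a 20% lapse rate the Halfer-style reasoner survives only about 42% of the time - the "dead more than half the time" outcome described above.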

Turning now to the GRYL scenario, we need to consider randomized strategies. Taking an outside view, suppose that Beauty follows the strategy of guessing Heads with probability h, independently each time she wakes. Then the probability that she survives is S=(1/2)h+(1/2)(1-h*h) - that is, the probability that the coin lands Heads (1/2) times the probability she guesses Heads, plus the probability that the coin lands Tails times the probability that she doesn't guess Heads on both awakenings. The derivative of S with respect to h is (1/2)-h, which is zero when h=1/2, and one can verify this gives a maximum for S of 5/8 (better than the survival probability when deterministically guessing either Heads or Tails).
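
A quick numerical check of that formula (just a sketch):

```python
# GRYL survival probability when Beauty independently guesses Heads with
# probability h at each awakening: S(h) = (1/2)*h + (1/2)*(1 - h**2).
def S(h):
    return 0.5 * h + 0.5 * (1 - h * h)

best_h = max((i / 1000 for i in range(1001)), key=S)
print(best_h, S(best_h))   # h = 0.5 gives S = 0.625, i.e. 5/8
print(S(0.0), S(1.0))      # deterministic Tails or Heads: 0.5 either way
```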

This 5/8 optimum matches what you concluded. However, Beauty guessing Heads with probability 1/2 does not imply that Beauty thinks P(Heads)=1/2. The "guess" here is not made in an attempt to guess the coin flip correctly, but rather in an attempt to not die. The mechanism for how the guess influences whether Beauty dies is crucial.

We can see this by seeing what Beauty will do after waking if she thinks that P(Heads)=1/2. She knows that her original strategy is to randomly guess, with Heads and Tails both having probability 1/2. But she can change her mind if this seems advisable. She will think that if she guesses Tails, her probability of survival will be P(Tails)=1/2 (she won't survive if the coin landed Heads, because this will be her only (wrong) guess, and she will definitely survive if the coin landed Tails, regardless of what she does on her other awakening). She will also compute that if she guesses Heads, she will survive with probability P(Heads)+P(Tails & she guesses Tails on her other wakening). Since P(Heads)=1/2, this will be greater than 1/2, because the second term surely is not exactly zero (it will be 1/4 if she follows the original strategy on her other awakening). So she will think that guessing Heads gives her a strictly greater chance of surviving than guessing Tails, and so will just guess Heads rather than following the original plan of guessing randomly.

The end result, if she reasons this way each awakening, is that she always guesses Heads, and hence survives with probability 1/2 rather than 5/8.

Now let's see what Beauty does if after wakening she thinks that P(Heads)=1/3. She will think that if she guesses Tails, her probability of survival will be P(Tails)=2/3. She will think that if she guesses Heads, her probability of survival will be P(Heads)+P(Tails & she guesses Tails on her other wakening). If she thinks she will follow the original strategy on her other wakening, then this is (1/3)+(2/3)*(1/2)=2/3. Since she computes her probability of survival to be the same whether she guesses Heads or Tails, she has no reason to depart from the original strategy of guessing Heads or Tails randomly. (And this reinforces her assumption that on her other wakening she would follow the original strategy.)

So if Beauty is a Thirder, she lives with probability 5/8, but if she is a Halfer, she lives with the lower probability of 1/2.
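
Finally, a Monte Carlo sketch (my own code) of the two end results in GRYL: the Halfer's reasoning above collapses into always guessing Heads, while the Thirder sticks with the planned 50/50 randomization.

```python
import random

def gryl_survival(guess_fn, n_trials=500_000, seed=1):
    """Guess-Right-You-Live: Beauty survives if at least one of her guesses
    during the experiment matches the coin."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_trials):
        coin = "H" if rng.random() < 0.5 else "T"
        wakings = 1 if coin == "H" else 2
        if any(guess_fn(rng) == coin for _ in range(wakings)):
            survived += 1
    return survived / n_trials

always_heads = lambda rng: "H"                                 # the Halfer's end result
random_guess = lambda rng: "H" if rng.random() < 0.5 else "T"  # the Thirder's plan

print("always Heads:", gryl_survival(always_heads))   # about 0.500
print("random 50/50:", gryl_survival(random_guess))   # about 0.625 = 5/8
```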
