Comments

The main problem with this is that it says that human beings are extremely unlike all nearby alien races. But if you are willing to admit that humanity is that unique, you might as well say that intelligence only evolved on Earth, which is a much simpler and more likely hypothesis.

If "being rational" means choosing the best option, you never have to choose between "being reasonable" and "being rational," because you should always choose the best option. And sometimes the best option is influenced by what other people think of what you are doing; sometimes it's not.

It actually is not very odd for there to be a difference like this. Given that there are only two sexes, there only needs to be one hormone which is sex determining in that way. In fact, having two could have strange effects of its own.

I think what you need to realize is that it is not a question of proving that all of those things are false, but rather that it makes no difference whether they are or not. For example when you go to sleep and wake up it feels just the same whether it is still you or a different person, so it doesn't matter at all.

Excellent post. Basically simpler hypotheses are on average more probable than more complex ones, no matter how complexity is defined, as long as there is a minimum complexity and no maximum complexity. But some measures of simplicity are more useful than others, and this is determined by the world we live in; thus we learn by experience that mathematical simplicity is a better measure than "number of words it takes to describe the hypothesis," even though both would work to some extent.
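To spell out the underlying argument, here is a rough sketch in my own notation, under the assumptions that complexity takes positive integer values and that hypotheses of arbitrarily high complexity exist:

```latex
% a_k = average probability of the hypotheses of complexity k,
% n_k >= 1 = number of hypotheses at that level, for k >= k_min with no upper bound.
\sum_{k \ge k_{\min}} n_k \, a_k = 1
\;\Longrightarrow\;
n_k a_k \to 0
\;\Longrightarrow\;
a_k \to 0
% so beyond some finite complexity, every level is on average less probable
% than any fixed simple hypothesis that has positive probability.
```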

I agree that in reality it is often impossible to predict someone's actions if you are going to tell them your prediction. That is why it is perfectly possible that the situation where you know which gene you have is simply impossible. But in any case this is all hypothetical, because the situation posed assumes you cannot know which gene you have until you choose one or both boxes, at which point you immediately know.

EDIT: You're really not getting the point, which is that the genetic Newcomb is identical to the original Newcomb in decision theoretic terms. Here you're arguing not about the decision theory issue, but whether or not the situations involved are possible in reality. If Omega can't predict with certainty when he tells his prediction, then I can equivalently say that the gene only predicts with certainty when you don't know about it. Knowing about the gene may allow you to two-box, but that is no different from saying that knowing Omega's decision before you make your choice would allow you to two-box, which it would.

Basically anything said about one case can be transformed into the other case by fairly simple transpositions. This should be obvious.
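To make the transposition concrete, here is a minimal sketch (the function and variable names are mine, and the dollar amounts are just the standard illustration): in both versions a fixed fact about the agent settles the box contents and the eventual choice, so the payoffs come out the same.

```python
# In both versions, some fixed fact about the agent determines the contents of
# the opaque box: Omega's reading of your disposition in the original Newcomb,
# or the gene in the genetic version. The same fixed fact is what you end up
# acting on, so the payoff structure is identical.

def payoff(disposition: str) -> int:
    """Payoff for an agent whose fixed disposition is 'one-box' or 'two-box'."""
    million_present = (disposition == "one-box")  # predictor / gene fills the box
    choice = disposition                          # the choice you actually end up making
    if choice == "one-box":
        return 1_000_000 if million_present else 0
    return 1_001_000 if million_present else 1_000  # two-boxing

print(payoff("one-box"))  # 1000000
print(payoff("two-box"))  # 1000
```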

What if we take the original Newcomb, then Omega puts the million in the box, and then tells you, "I have predicted with 100% certainty that you are only going to take one box, so I put the million there"?

Could you two-box in that situation, or would that take away your freedom?

If you say you could two-box in that situation, then once again the original Newcomb and the genetic Newcomb are the same.

If you say you could not, why would that be true in this case when it would not be true in the genetic case?

"I don't believe in a gene that controls my decision" refers to reality, and of course I don't believe in the gene either. The disagreement is whether or not such a gene is possible in principle, not whether or not there is one in reality. We both agree there is no gene like this in real life.

As you note, if an AI could read its source code and saw that it says "one-box", it would still one-box, because it simply does what it is programmed to do. This first of all violates the conditions as proposed (I said the AIs cannot look at their source code, and Caspar42 stated that you do not know whether or not you have the gene).

But for the sake of argument we can allow looking at the source code, or at the gene. You believe that if you saw you had the gene that says "one-box", then you could still two-box, so it couldn't work the same way. You are wrong. Just as the AI would predictably end up one-boxing if it had that code, so you would predictably end up one-boxing if you had the gene. It is just a question of how this would happen. Perhaps you would go through your decision process, decide to two-box, and then become overwhelmed with a sudden desire to one-box. Perhaps it would be because you would think again and change your mind. But one way or another you would end up one-boxing.

And this "doesn't constrain my decision so much as predict it": obviously both in the case of the AI and in the case of the gene, in reality causality does indeed go from the source code to one-boxing, or from the gene to one-boxing. And it is entirely the same in both cases. Causality runs only from past to future, but for you, it feels just like a normal choice that you make in the normal way.
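Here is the same point about predictability in hypothetical code (the class and names are mine, not from the discussion): an agent whose code fixes its disposition will do exactly what a reader of that code predicts, even though the output is produced by its ordinary decision procedure.

```python
class Agent:
    def __init__(self, disposition: str):
        # Fixed when the agent is written, before the game begins; this plays
        # the role of the gene, or of the "one-box" line in the source code.
        self.disposition = disposition

    def choose(self) -> str:
        # From the inside this is just the agent's normal decision procedure,
        # but its output is determined by the disposition, so anyone who reads
        # the code can predict the choice before it is made.
        return self.disposition

predicted = Agent("one-box").disposition  # reading the code / the gene in advance
chosen = Agent("one-box").choose()        # the choice as it actually happens
print(predicted == chosen)                # True
```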

In this case you are simply interpreting the original Newcomb to mean something absurd, because causality cannot "genuinely flow in reverse" in any circumstances whatsoever. Rather, in the original Newcomb, Omega looks at your disposition, which exists from the very beginning. If he sees that you are disposed to one-box, he puts the million in the box. This is just the same as someone looking at the source code of an AI to see whether it will one-box, or someone looking for the one-boxing gene.

Then, when you make the choice, in the original Newcomb you choose to one-box. Causality flows in only one direction, from your original disposition, which you cannot change since it is in the past, to your choice. This causality is entirely the same as in the genetic Newcomb. Causality never goes in any direction except from past to future.

Even in the original Newcomb you cannot change whether or not there is a million in the box. Your decision simply reveals whether or not it is already there.
