Unknown2

My guess is that Eliezer will be horrified at the results of CEV, despite the fact that most people will be happy with it.
This is obvious given the degree to which Eliezer's personal morality diverges from the morality of the human race.
Being deterministic does NOT mean that you are predictable. Consider this deterministic algorithm, for something that has only two possible actions, X and Y: if it is told that it will do X, it does Y; if it is told that it will do Y, it does X; and if it is told nothing, it does X.
This algorithm is deterministic, but it is not predictable by anyone who must announce his prediction to it. And by the way, human beings can implement this algorithm: try to tell someone everything he will do the next day, and I assure you that he will not do it (unless you pay him, etc.).
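A minimal sketch of such an algorithm in Python (the function name and the two-action framing are my own illustration, not from the original comment):

```python
# Hypothetical sketch: a deterministic policy over two actions, "X" and "Y",
# that falsifies any prediction announced to it before it acts.

def contrarian_act(announced_prediction=None):
    """Deterministic, yet unpredictable by any predictor who must
    announce the prediction: whatever is announced, do the opposite."""
    if announced_prediction == "X":
        return "Y"
    if announced_prediction == "Y":
        return "X"
    return "X"  # deterministic default when nothing is announced

# Any announced prediction is self-defeating:
assert contrarian_act("X") != "X"
assert contrarian_act("Y") != "Y"
```

Note that a predictor who keeps the prediction secret can still predict this function perfectly; the unpredictability is only relative to predictions the agent gets to see.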
Also, Eliezer may be right that in theory, you can prove that the AI will not do X, and then it will think, "Now I know that I will..."
James, of course it would know that only one of the two was objectively possible. However, it would not know which one was objectively possible and which one was not.
The AI would not be persuaded by the "proof", because it would still believe that if later events gave it reason to do X, it would do X, and if later events gave it reason to do Y, it would do Y. This does not mean that it thinks that both are objectively possible. It means that as far as it can tell, each of the two is subjectively open to it.
Your example does not prove what you want it to. Yes, if...
James Andrix: an AI would be perfectly capable of understanding a proof that it was deterministic, assuming that it in fact was deterministic.
Despite this, it would not be capable of understanding a proof that, at some future time, it will take some given action X and will not take some other given action Y.
This is clear for the reason stated. It sees both X and Y as possibilities which it has not yet decided between, and as long as it has not yet decided, it cannot already believe that it is impossible for it to take one of the choices. So if you present a "proof" of this fact, it...
Nick, the reason there are no such systems (at least as intelligent as we are) is that we are not complicated enough to understand the proof.
This is obvious: the AI itself cannot understand a proof that it cannot do action A. For if we told it that it could not do A, it would still say, "I could do A, if I wanted to. And I have not made my decision yet. So I don't yet know whether I will do A or not. So your proof does not convince me." And if the AI cannot understand the proof, obviously we cannot understand it ourselves, since we are inferior to the AI.
So in other words, I am not saying that there are no rigid restrictions. I am saying that there are no rigid restrictions that can be formally proved by a proof that can be understood by the human mind.
This is all perfectly consistent with physics and math.
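To make the self-undermining step concrete, here is a hypothetical sketch (my own illustration; the comments contain no code) of why a proof the agent can read cannot bind it: the proof's conclusion becomes an input to the very decision it is supposed to settle.

```python
# Hypothetical sketch: an agent handed a purported proof of the form
# "this agent will never do A" can consult that conclusion when it
# decides, and so can always act to falsify it.

def decide(purported_conclusion=None):
    """Return the action taken, "A" or "B"."""
    if purported_conclusion == "will never do A":
        return "A"  # defy the proof, rendering it unsound
    return "B"      # otherwise take the ordinary action

# Any such proof the agent can read and understand is self-undermining:
assert decide("will never do A") == "A"
```

This is the sense in which a restriction might hold in fact while no proof of it that the agent (or we) can understand could be sound.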
Emile, you can't prove that the moves output by a human chess player will be legal chess moves. You may be able to prove that for a regular chess-playing program, but you will not be able to prove it for an AI that plays chess; an AI could try to cheat at chess when you're not looking, just as a human being could.
Basically, a rigid restriction on the outputs, as in the chess-playing program, proves you're not dealing with something intelligent, since something intelligent can consider the possibility of breaking the rules. So if you can prove that the AI won't turn the universe into paperclips, that shows it is not even intelligent, let alone superintelligent.
This doesn't mean that there are no restrictions at all on the output of an intelligent being, of course. It just means that the restrictions are too complicated for you to prove.
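The contrast with a narrow program can be made concrete with a toy sketch (hypothetical code, mine rather than the comment's): an engine whose every output passes through a legality filter satisfies the restriction by construction, while an unconstrained agent's output channel admits anything.

```python
# Hypothetical toy: for a narrow engine, "output is always a legal move"
# is provable by construction, because the output is sampled from a
# legality filter. Nothing analogous constrains a general agent.

import random

def legal_moves(position):
    # Toy game standing in for chess: from n you may move to n+1 or n+2.
    return [position + 1, position + 2]

def narrow_engine(position):
    """Every output is drawn from legal_moves(position), so the
    restriction 'plays only legal moves' holds by construction."""
    return random.choice(legal_moves(position))

def general_agent(position, incentives):
    """Unconstrained policy: the output is an arbitrary string, and
    nothing in the construction guarantees it names a legal move."""
    if incentives == "cheating pays and nobody is watching":
        return "quietly take an extra move"
    return str(narrow_engine(position))

assert narrow_engine(0) in legal_moves(0)  # the provable restriction
```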
Eliezer, this is the source of the objection. I have free will, i.e. I can consider two possible courses of action. I could kill myself, or I could go on with life. Until I make up my mind, I don't know which one I will choose. Of course, I have already decided to go on with life, so I know. But if I hadn't decided yet, I wouldn't know.
In the same way, an AI, before making its decision, does not know whether it will turn the universe into paperclips, or into a nice place for human beings. But the AI is superintelligent: so if it does not know which one it will...
Ben Jones, the means of identifying myself will only show that I am the same person who sent the $10, not who sent it.
Eliezer seemed to think that one week would be sufficient for the AI to take over the world, so that seems like enough time.
As for what constitutes the AI, since we don't have any measure of superhuman intelligence, it seems to me sufficient that it be clearly more intelligent than any human being.
Eliezer: did you receive the $10? I don't want you making up the story, 20 or 30 years from now, when you lose the bet, that you never received the money.
Eliezer: "And you might not notice if your goals shifted only a bit at a time, as your emotional balance altered with the strange new harmonies of your brain."
This is yet another example of Eliezer's disagreement with the human race about morality. Our goals actually shift like this all the time, without any modification at all, and we don't care; in fact we tend to be happy about it, because according to the new goal system, our goals have improved. So this suggests that we still won't care if it happens as a result of upgrading.