What if I say "maximize x for just a little while, then talk to me for further instructions"? A human can understand that without difficulty, so for a superintelligent AI it should be easy, right?

I think it depends on what you mean by "a little while", but it's quite possible the world would by then contain safeguards against further changes, or would simply no longer contain you (or a version of "you" that shares your goals).

(Also, millennia of subjective torture (or whatever) might be a high price for the experiment, even if it got reset.)

More "Stupid" Questions

by NancyLebovitz · 1 min read · 31st Jul 2013 · 498 comments

This is a thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. The previous "stupid" questions thread went to over 800 comments in two and a half weeks, so I think it's time for a new one.