What if Peely had a secondary goal to not harm humans? What is stopping it from accomplishing goal number 1 in accordance with goal number 2? Why should we assume that a superintelligent entity would be incapable of holding multiple values?