Different goals may bring AI into conflict with us
Nolan · 3mo · 30

What if Peely had a secondary goal of not harming humans? What is stopping it from accomplishing goal number 1 in accordance with goal number 2? Why should we assume that a superintelligent entity would be incapable of holding multiple values?
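
Put as a toy sketch (the action set and scoring functions below are hypothetical illustrations of mine, not anything from the post), the two goals compose as constrained optimization: the no-harm goal filters the action set, and the primary goal is maximized over whatever remains.

```python
# Minimal sketch of an agent holding two values at once: maximize a
# primary objective subject to a "do not harm humans" constraint.
# All action names and scores here are made up for illustration.

def primary_value(action: str) -> float:
    """Hypothetical score for how well an action advances goal 1."""
    return {"persuade": 3.0, "coerce": 9.0, "trade": 5.0}[action]

def harms_humans(action: str) -> bool:
    """Hypothetical predicate for goal 2: does this action harm anyone?"""
    return action == "coerce"

def choose(actions: list[str]) -> str:
    # Goal 2 acts as a hard filter; goal 1 is optimized only within it.
    safe = [a for a in actions if not harms_humans(a)]
    return max(safe, key=primary_value)

print(choose(["persuade", "coerce", "trade"]))  # -> "trade"
```

Under that ordering the agent forgoes the highest-scoring action because it violates the second value, which is exactly the behavior the question is pointing at.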
