driplikesake
Comments (sorted by newest)
Message to any future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” [AI alignment prize entry]
driplikesake · 8y
Counter to point 4.5.1: Couldn't an RAI simulate an FAI to create indexical uncertainty as well?