Given one AI, why not more?
Frank Adk · 3y · 50

Thanks for the replies.

With respect to zero not being a probability: obviously. The probability is extremely low, not zero; low enough that the chance of a benevolent AI existing is greater than the chance of humanity surviving a single malevolent AI. If that's not the case, then 1 and 2 are useless.

Peter, thanks. After reading that Drexler piece, the linked Christiano piece, the linked Eliezer post, and a few others, especially the conversation between Eliezer and Drexler in the comments of Drexler's post, I agree with you. TBH I am surprised that there's no better standard argument in support of inevitable anti-human collusion than this from Eliezer: "They [AIs] cooperate with each other but not you because they can do a spread of possibilities on each other modeling probable internal thought processes of each other; and you can’t adequately well-model a spread of possibilities on them, which is a requirement on being able to join an LDT coalition." As Christiano says, that makes a lot of assumptions.
