Insc Seeker

Comments

All AGI Safety questions welcome (especially basic ones) [~monthly thread]
Insc Seeker · 10mo · 10

Premise: A person who wants to destroy the world and wields an extreme amount of power would be just as big a problem as an AI wielding that same amount of power.

Question: Do we just think that an AI will be able to obtain that amount of power more effectively than a human could, and that the ratio of world-destroying AIs to safe AIs is larger than the ratio of world-destroying humans to normal humans?
