LESSWRONG

Petal Pepperfly
Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe
Answer by Petal Pepperfly, May 06, 2022

I see no problems with your list. I would add that creating a corrigible, superhumanly intelligent AGI doesn't necessarily solve the AI control problem forever, because its corrigibility may be incompatible with applying it to the programmer/human control problem: the threat that someone will one day build a dangerous AGI, perhaps intentionally.