
Christopher “Chris” Upshaw
Karma: 13040

Posts

No posts to display.

Wikitag Contributions

No wikitag contributions to display.

Comments
D0TheMath's Shortform
Christopher “Chris” Upshaw · 1y · 3 · -5

All of these seem like pretty cold tea, as in true but not contrarian.

Contra EY: Can AGI destroy us without trial & error?
Christopher “Chris” Upshaw · 3y · 20

"Either they’re perfectly doable by humans in the present, with no AGI help necessary."

So your argument for why this is a relevant statement is that AI isn't adding danger? That seems like a really odd standard for "perfectly doable": the actual number of humans who could do those things is not huge, and humans don't usually want to.

Either ending the world is easy for humans, in which case AI is dangerous because it will want to, or it's hard for humans, in which case AI is dangerous because it will do it better.

I don't think that works to dismiss that category of risk.

AGI Ruin: A List of Lethalities
Christopher “Chris” Upshaw · 3y · 107

So what should I do with this information? What option, other than "nod along and go on living their lives", is there for me?

Underappreciated points about utility functions (of both sorts)
Christopher “Chris” Upshaw · 6y · 20

I don't believe that infinite gambles are a thing. In fact, they feel almost self-evidently an approximation at best.
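
The canonical example, assuming "infinite gambles" refers to St. Petersburg-style lotteries, is a bet paying 2^n with probability 2^{-n}, whose expected value diverges:

E[X] = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} = \sum_{n=1}^{\infty} 1 = \infty

Any physical counterparty can only cover finitely many rounds, which is why the infinite version reads as an approximation.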
