I didn't realize then that the disutility of a human-built AI can be much larger than the utility of FAI, because pain is easier to achieve than human utility (which doesn't reduce to pleasure). That makes the argument much stronger.

I didn't realize then that the disutility of a human-built AI can be much larger than the utility of FAI, because pain is easier to achieve than human utility (which doesn't reduce to pleasure).

This argument doesn't actually seem to be in the article that Kaj linked to. Did you see it somewhere else, or come up with it yourself? I'm not sure it makes sense, but I'd like to read more if it's written up somewhere. (My objection is that "easier to achieve" doesn't necessarily mean the maximum value achievable is higher. It could be that it would take long...

S-risks: Why they are the worst existential risks, and how to prevent them

by Kaj_Sotala · 1 min read · 20th Jun 2017 · 107 comments