I didn't realize then that the disutility of human-built AI can be much larger than the utility of FAI, because pain is easier to achieve than human utility (which doesn't reduce to pleasure).

This argument doesn't actually seem to be in the article that Kaj linked to. Did you see it somewhere else, or come up with it yourself? I'm not sure it makes sense, but I'd like to read more if it's written up somewhere. (My objection is that "easier to achieve" doesn't necessarily mean the maximum value achievable is higher. It could be that it would take long...

The argument somehow came to my mind yesterday, and I'm not sure it's true either. But do you really think human value might be as easy to maximize as pleasure or pain? Pain is only about internal states, and human value seems to be partly about external states, so it should be way more expensive.

Our values might say, for example, that a universe filled with suffering insects is very undesirable, but a universe filled with happy insects isn't very desirable. More generally, if our values are a conjunction of many different values, then it's probably easier to create a universe where one is strongly negative and the rest are zero, than a universe where all are strongly positive. I haven't seen the argument written up; I'm trying to figure it out now.
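One rough way to formalize it (just a toy sketch, assuming the conjunction is modeled as a minimum over bounded components): let total value be $U = \min_i v_i$ with each component $v_i \in [-1, 1]$, and suppose pushing any single component to an extreme takes roughly the same amount of optimization effort $c$. Then the worst outcome needs only one component driven to $-1$, while the best outcome needs all $n$ driven to $+1$:

$$\text{cost}(U = -1) \approx c, \qquad \text{cost}(U = +1) \approx n\,c.$$

So under this toy model the minimum is about $n$ times cheaper to reach than the maximum, which is one way the asymmetry could come out.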

S-risks: Why they are the worst existential risks, and how to prevent them

by Kaj_Sotala, 1 min read, 20th Jun 2017, 107 comments
