Why does FAI have to have a utility function that's such a close approximation of the human utility function? Let's say we develop awesome natural language processing technology, and the AI can read the internet and actually know what we mean when we say "OK AI, promote human flourishing," and ask us questions on ambiguous points and whatnot. Why doesn't this work? There are probably humans I would vote into all-powerful benevolent dictator positions, so I'm not sure my threshold for what I'd accept as an all-powerful benevolent dictator is all that high.

Well, if we ask it to, say, maximize human happiness or "complexity" or virtue or GDP or any of a million other things, it optimizes the literal target rather than whatever we actually meant by it, and then BAM: the world sucks and we probably can't fix it.

Qiaochu_Yuan: Everything is ambiguous, and asking us about every ambiguity would slow it down too much.
[anonymous]: You have two questions: why accurately approximate human value, and why not have it just ask us about ambiguities.

1. Because the hard part is getting it to do anything coherent at all; once we are there, it is comparatively little extra work to make it do what we really want.
2. This would work. The hard part is getting it to do that.

I would also accept most people as BDFL (benevolent dictator for life) over the incumbent gods of indifferent chaos. Again, the hard part is kicking out the incumbent. Past that point the debate is, by comparison, basically about what color to paint the walls.

More "Stupid" Questions

by NancyLebovitz, 31st Jul 2013

This is a thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. The previous "stupid" questions thread went to over 800 comments in two and a half weeks, so I think it's time for a new one.