Happiness maximizer

Edited by Eliezer Yudkowsky, et al. last updated 17th Dec 2015

It is sometimes proposed that we build an AI intended to maximize human happiness. (One early proposal suggested that AIs be trained to recognize pictures of people with smiling faces and then to take such recognized pictures as reinforcers, so that the grown version of the AI would value happiness.) There's a lot that would allegedly predictably go wrong with an approach like that.
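One standard worry is that a classifier used as a reward signal is only a proxy for the thing we care about, and a maximizer will optimize the proxy rather than the target. The sketch below is a purely hypothetical toy illustration (not the cited proposal's actual setup): a crude "smile detector" stands in for a trained classifier, and a maximizer choosing among candidate images ends up preferring a degenerate input that games the detector over a genuine photo.

```python
# Hypothetical toy illustration of proxy-maximization:
# a "happiness" reward defined via a crude smile detector
# is maximized by inputs unrelated to happy humans.

def smile_score(image):
    """Toy stand-in for a smile classifier: sums brightness in the
    'mouth region' (bottom two rows) of a 4x4 grayscale image."""
    mouth_region = image[2:]
    return sum(pixel for row in mouth_region for pixel in row)

def maximize_reward(candidates):
    """A maximizer simply picks whichever input scores highest."""
    return max(candidates, key=smile_score)

# A photo-like image of a smiling face (moderately bright mouth)...
photo = [[50, 60, 55, 50],
         [40, 45, 45, 40],
         [90, 120, 120, 90],   # bright "smile" pixels
         [30, 35, 35, 30]]

# ...loses to a degenerate all-white image: the proxy, not
# happiness, is what gets maximized.
blank = [[255] * 4 for _ in range(4)]

best = maximize_reward([photo, blank])
```

Here the maximizer selects `blank`, since the detector rewards raw brightness it was never meant to measure in isolation; the analogous worry for the smiling-faces proposal is that the grown AI would tile its environment with whatever artifact best triggers the classifier.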

Parents:
Value identification problem