In local parlance, "terminal" values are a decision maker's ultimate values, the things they consider ends in themselves.

A decision maker should never want to change their terminal values.

For example, if a being has "wanting to be a music star" as a terminal value, then it should adopt "wanting to make music" as an instrumental value.

For humans, how these values feel psychologically is a different question from whether they are terminal or not.
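To make the distinction concrete, here is a minimal toy sketch (my own illustration, not anything from the linked material): the terminal value is a utility function over outcomes, and an action's instrumental value is derived from how much it advances that utility. All the names here (terminal_utility, make_music, music_stardom) are hypothetical.

```python
def terminal_utility(state: dict) -> float:
    """Terminal value: the agent cares about music stardom for its own sake."""
    return state.get("music_stardom", 0.0)

def predict(state: dict, action: str) -> dict:
    """Crude world model: making music raises stardom; idling does not."""
    new_state = dict(state)
    if action == "make_music":
        new_state["music_stardom"] = new_state.get("music_stardom", 0.0) + 1.0
    return new_state

def instrumental_value(state: dict, action: str) -> float:
    """Instrumental value is derived: the expected gain in terminal utility."""
    return terminal_utility(predict(state, action)) - terminal_utility(state)

if __name__ == "__main__":
    state = {"music_stardom": 0.0}
    for action in ("make_music", "idle"):
        print(action, instrumental_value(state, action))
    # "make_music" scores higher only because it serves the terminal value;
    # the agent has no reason to modify terminal_utility itself.
```

The point of the sketch is that "wanting to make music" never needs to be written into the utility function: it falls out of the terminal value plus the world model.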

See here for more information.

Thanks. Looks like I was using the word as I intended to.

My point is that humans (who are imperfect decision makers and not in full control of their motivational systems) may actually benefit from changing their terminal goals, even though a perfectly rational agent with a consistent utility function would never want to.

Humans are not always consistent, and making yourself consistent can involve dropping or acquiring terminal goals. (Consider a converted slaveowner acquiring a terminal goal of improving quality of life for all humans.)

My original point stems...

Open thread, July 16-22, 2013

by David_Gerard, 15th Jul 2013, 305 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Given the discussion thread about these, let's try calling this a one-week thread, and see if anyone bothers starting one next Monday.