How can we evaluate humans' terminal values (as defined here)? Values are subjective, yet the question asks for some objective perspective. The question matters because "Humans' terminal values are often mutually contradictory, inconsistent, and changeable".
The ubiquity of natural selection (NS) can impose some constraints, albeit weak ones, since all known systems containing sentient agents are subject to NS. But weak constraints are still better than no constraints at all.
Natural selection splits terminal goals into those that fail to reproduce or maintain themselves and those that survive (together with their bearers, of course). Sometimes we can even predict whether certain terminal goals will go extinct, or at least estimate their probability of survival (we have already set aside instrumental goals that "die" when they...
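The selection dynamic above can be illustrated with a toy simulation (entirely my own sketch, not from the linked posts). Assume, hypothetically, that each agent carries one terminal goal and that goals differ in how well they help their bearers reproduce or maintain themselves; resampling each generation in proportion to fitness then shifts goal frequencies, and low-fitness goals tend toward extinction. The goal names and fitness numbers are invented for illustration only.

```python
import random

random.seed(0)

# Hypothetical goals with assumed relative fitness values
# (purely illustrative -- not claims about real human values).
fitness = {"self-preservation": 1.2, "asceticism": 0.8, "curiosity": 1.0}

# Start with equal numbers of bearers for each goal.
population = [g for g in fitness for _ in range(100)]

for generation in range(50):
    # Each bearer leaves offspring in proportion to its goal's fitness
    # (Wright-Fisher-style resampling at fixed population size), so
    # goal frequencies drift toward the fitter goals over generations.
    weights = [fitness[g] for g in population]
    population = random.choices(population, weights=weights, k=len(population))

counts = {g: population.count(g) for g in fitness}
print(counts)
```

Running repeatedly with different seeds gives a rough empirical survival probability for each goal, which is the sense in which we might "range" a goal's probability of survival.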
I was told that this sequence covers this topic somehow: https://www.lesswrong.com/posts/C8nEXTcjZb9oauTCW/where-recursive-justification-hits-bottom
I guess it's worth reading and taking notes on the relevant parts. I'll do this when I have free time.
UPD: The first post in the sequence didn't contain anything of importance to this question. To be continued...
P.S. Basically, that post was a recap of part of a more poetic, "old school" article that I wrote: Applying Universal Darwinism to evaluation of Terminal values. The article doesn't add anyth...