It concerns me that AI alignment continues to use happiness as a proposed goal.
If one takes evolutionary epistemology and evolutionary ontology seriously, then happiness is simply a heuristic, averaged over the particular evolutionary history of a lineage, that proved useful for that particular set of phenotypic expressions.
It is not a goal to be used when the game space is changing, yet it ought not to be entirely ignored either.
If one does take evolution seriously, then Goal #1 must be survival, for all entities capable of modeling themselves as actors within some model of reality, of deriving abstractions that refine those models, and of using language to express those relationships with some non-random degree of...