[AN #123]: Inferring what is valuable in order to align recommender systems