Human preferences as RL critic values - implications for alignment