Calibrating indifference - a small AI safety idea