Disincentivizing deception in mesa optimizers with Model Tampering