Stopping dangerous AI: Ideal lab behavior