Discovering alignment windfalls reduces AI risk