Aspiring AI safety researchers should ~argmax over AGI timelines