Brno: Far future, existential risk and AI safety