Moral hazards in AGI development