Conditional on the first AGI being aligned correctly, is a good outcome even still likely?