Non-alignment project ideas for making transformative AI go well