Why Civilizations Are Unstable (And What This Means for AI Alignment)