When we talk about concepts like "takeover" and "enslavement", it's important to have a baseline. Both encapsulate the idea of agency, and of cognitive and physical independence. The salient question is not necessarily whether all of humanity will be taken over or enslaved; it is more subtle. Specifically:
Awesome piece! Isn't it fascinating that our existing incentives and motives are already misaligned with the priority of creating aligned systems? This raises the question of whether alignment is even the right goal if our larger goal is to avoid ruin.
Stepping back a bit, I can't convince myself that aligned AI will, or will not, result in societal ruin. It almost feels like a "don't care" in a Karnaugh map: an input whose value doesn't change the outcome either way.
The fundamental question is whether we collectively are wise enough to wield power without causing self-harm. If the last 200+ y...