Introduction
The conjecture is that an AI can fully excel in any two of the three dimensions of generality, agency, and alignment only by compromising the third.
In other words, a system that is extremely general and highly agentic will be hard to align; one that is general and aligned must limit its agency; and an agentic aligned system must remain narrow. Below, I discuss how today’s AI designs implicitly “pick two.”
This is a useful mental model for analyzing AI systems because it clarifies fundamental tensions in contemporary AI design and highlights how and where compromises typically arise.
Generality + Agency ⇒ Alignment sacrificed.
An AI that is both very general and truly agentic – selecting and pursuing open-ended goals –...