We modeled AGI on our own intelligence.
We took the most powerful yet dangerous part of ourselves, relentless optimization, distilled it to purity, and supercharged it while stripping away the other parts that make us human.
To distill, you must remove.
You remove everything that slows a human mind down, but with it you remove the things that stabilize it.
If AGI is modeled on human intelligence, shouldn't the means of controlling it come from the rest of the model, the parts that make us human?
Are we ourselves evidence that intelligence alone is lethal, and that constraints are not flaws but the very features that keep an intelligent system functional? Are we listening to these clues?