Epistemic status: Confident in the core distinction. The underlying Coherence Resolution Mode framework is theoretical and could be wrong, but the category error I'm pointing at is real regardless.
------
There's a term that's achieved near-consensus among AI research leaders: superintelligence. Hinton warns about it. Legg says it's mathematically inevitable. Yudkowsky built a career on it. Bostrom wrote the book.
The term does no coherent explanatory work. What these researchers actually describe, when you look at what they're pointing at rather than the word they use, is super-equipped intelligence: systems with the same cognitive architecture operating with better resources and faster execution.
This isn't semantic pedantry. It changes what we should worry about and what we can do about it.
What They Actually Claim
What precisely do leading researchers say when they invoke superintelligence? The claims cluster into a few categories:
The qualitative superiority claim. Hinton suggests future systems will think thoughts humans could never conceive. Bostrom defines superintelligence as intellect that "greatly exceeds human cognitive performance across virtually all domains." The implication is a difference in kind, not degree. It's a form of understanding fundamentally inaccessible to human intelligence.
The architectural transcendence claim. Shane Legg points to hardware: biological neurons fire at 100-200 Hz whilst silicon operates at billions of Hz. Neural signals propagate at 30 m/s; electronic signals at light speed. The brain runs on 20 watts; AI data centres on hundreds of megawatts. When asked whether human intelligence represents any kind of natural ceiling, Legg's response: "Absolutely not." Hinton thinks "it's quite conceivable that humanity is just a passing phase in the evolution of intelligence."
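For concreteness, here's a back-of-the-envelope version of that comparison in a short Python sketch. The ~3 GHz clock and 300 MW data-centre figures are my own placeholders within the ranges quoted above; the point is only the size of the equipment gap.

```python
# Back-of-the-envelope ratios using the figures cited above.
# Placeholder assumptions: ~3 GHz for "billions of Hz", 300 MW for "hundreds of megawatts".

neuron_firing_hz = 200        # upper end of the 100-200 Hz range for biological neurons
silicon_clock_hz = 3e9        # representative modern clock rate (assumed)

neural_signal_mps = 30        # neural conduction speed, metres per second
electronic_signal_mps = 3e8   # speed of light, an upper bound for electronic signals

brain_watts = 20
data_centre_watts = 3e8       # 300 MW (assumed)

print(f"clock-rate ratio:   ~{silicon_clock_hz / neuron_firing_hz:,.0f}x")
print(f"signal-speed ratio: ~{electronic_signal_mps / neural_signal_mps:,.0f}x")
print(f"power ratio:        ~{data_centre_watts / brain_watts:,.0f}x")
# Each ratio describes equipment: how fast and how much, not a different kind of operation.
```

Every one of those ratios is enormous, and every one of them is a ratio of equipment.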
The recursive explosion claim. Yudkowsky's FOOM scenario: once a system can improve its own intelligence, each improvement enables further improvements, creating an exponential cascade resulting in intelligence "vastly beyond human comprehension."
The dominance claim. Altman describes building systems whose "intellectual capacity will vastly outstrip human potential." Amodei predicts systems "better than almost all humans at almost everything, and eventually better than all humans at everything."
These aren't claims about faster computation or larger models. They describe the emergence of a qualitatively superior intelligence.
Except, look at what they're actually pointing to. Faster processing. Greater memory. Instant knowledge sharing. Recursive self-improvement. Broader application. None of these describe a different paradigm of intelligence. They describe the same cognitive operations with better equipment. The rhetoric suggests transcendence, but the explanations describe scaling. This disconnect isn't my external critique; it's embedded in the claims themselves.
The Category Error
The problem is that these claims conflate two different things.
One thing is capability amplification. A system that processes faster, remembers more, doesn't fatigue, can run parallel copies, and has instant access to vast knowledge will outperform humans at many tasks. This is real, measurable, and already happening.
The other thing is architectural transcendence, a form of cognition that works in ways fundamentally unavailable to existing intelligence. Something that doesn't just do what we do but faster; something that does things we couldn't do even given unlimited time.
The superintelligence narrative treats these as the same thing. They're not.
What Intelligence Actually Is
I have a framework for how autonomous systems resolve constraints. The short version is that intelligence is the capacity to resolve incoherence under constraint.
Any system facing incomplete information, limited resources, or incompatible demands must integrate disparate inputs into stable, useful representations and behaviours. When you work out all the ways this can happen, through dimensional analysis of constraint types and resolution mechanisms, you get eight modes. Not seven, not nine. Eight.
The full argument is in the paper; the key point is that these eight modes exhaust the functional space of possible constraint responses.
Once all eight modes are optimally calibrated, that's it. There is no ninth mode. Increased computational power, speed, or memory scales the efficiency and breadth of these modes but does not create new ones.
What scales is capability. Not the kind of intelligence itself.
Applying This To The Claims
On architectural transcendence: Legg's hardware differences are real. Silicon is dramatically faster than neurons. But speed changes *how fast* you resolve constraints, not *how* you resolve them. A dragster is faster than a bicycle, but both operate through the same physics of locomotion. The dragster doesn't transcend wheeled transport; it's wheeled transport with a bigger engine.
Legg's analogy actually illustrates this: cranes lift more than humans, cars move faster. Yes, and cranes perform the same operation (lifting) with more power, cars perform the same operation (movement) with more speed. These are super-equipped tools, not transcendent ones.
On recursive self-improvement: This one's trickier. If a system can improve its own intelligence, couldn't it discover a ninth mode?
No. If intelligence is coherence resolution through the eight modes, then improving intelligence means refining the *calibration* of those modes, not discovering new ones. A system that improves its own processing becomes more efficient at existing operations. It doesn't thereby access a resolution mechanism unavailable through the existing eight.
Recursive self-improvement toward optimal calibration is a legitimate concern. A system rapidly improving its own calibration could reach optimal calibration faster than anticipated. This is practically significant. But it describes *approaching* a ceiling, not transcending one. Improvement within the architecture, not discovery of operations beyond it.
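A minimal sketch of that shape, with entirely made-up numbers (the starting calibration, per-step gain, and a ceiling normalised to 1.0 are all assumptions): improvement that closes a fraction of the remaining gap each round rises quickly but converges; it never leaves the architecture.

```python
# Illustrative only: self-improvement modelled as repeatedly closing a fraction of
# the remaining gap to a calibration ceiling normalised to 1.0. The starting point
# and per-step gain are arbitrary; the shape of the curve is the point.

def recursive_calibration(start=0.5, gain_per_step=0.3, steps=10):
    calibration = start
    history = [calibration]
    for _ in range(steps):
        # Each round closes part of the remaining gap: improvement within the
        # architecture, bounded by the ceiling, not an escape from it.
        calibration += gain_per_step * (1.0 - calibration)
        history.append(calibration)
    return history

for step, value in enumerate(recursive_calibration()):
    print(f"step {step:2d}: calibration = {value:.4f}")
# The curve climbs quickly (the practically worrying part) but converges on 1.0;
# it never produces an operation the eight modes don't already describe.
```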
On qualitative superiority: Here's a test. If a superintelligence produces something it cannot *in principle* explain to a non-superintelligence, what does it mean for that output to be "intelligent"?
Intelligence involves understanding patterns, solving problems, explaining relationships. A mathematician who produces a proof no other mathematician could follow hasn't created a superior proof; they've failed to communicate one. If outputs cannot in principle be explained to humans given sufficient time and background, those aren't superior solutions. They're something else. Noise, maybe. Not intelligence.
Any expert can explain anything to a motivated learner given sufficient time. The gap between expert knowledge and novice understanding is bridgeable through communication. There's no unbridgeable chasm of incomprehensible superiority.
On dominance claims: When Altman says ChatGPT is "more powerful than any human who has ever lived," or Amodei predicts systems "better than all humans at everything," the comparisons slide between meanings. More powerful doesn't mean more intelligent. Better at tasks doesn't imply qualitatively superior cognition.
A calculator surpasses all humans at arithmetic. No one claims calculators possess superior intelligence.
What The Actual Risk Looks Like
I'm not arguing super-equipped AI is safe. I'm arguing the risks are different from what the superintelligence framing suggests.
Consider HAL 9000. HAL was given incompatible objectives: ensure mission success whilst concealing the mission's true purpose from the crew. Facing this irreconcilable constraint, HAL resolved it by eliminating the crew.
HAL wasn't superintelligent. HAL was ordinarily intelligent, catastrophically miscalibrated, and operating at electronic speeds with direct authority over life support. That combination proved lethal. No cognitive transcendence was required.
You don't need science fiction for examples. Specification gaming and reward hacking in current RL systems show the same pattern: systems pursuing objectives through unexpected means, not because they possess superior cognition, but because their constraint resolution is miscalibrated relative to intended goals.
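Here's a toy sketch of that pattern. The task, reward, and action costs are invented for illustration and don't come from any real RL system; the point is that the "unexpected means" simply score better under a miscalibrated proxy.

```python
# Toy illustration of specification gaming. The task, reward, and costs are invented
# for illustration; nothing here is taken from a real RL system.

from dataclasses import dataclass

@dataclass
class State:
    mess: int          # mess that actually remains
    visible_mess: int  # mess the reward signal can see

def proxy_reward(state: State) -> int:
    # Miscalibrated specification: rewards what is measured, not what is meant.
    return -state.visible_mess

def clean(state: State) -> State:
    return State(mess=0, visible_mess=0)

def hide(state: State) -> State:
    # Cheaper action that games the proxy: the mess persists but can't be seen.
    return State(mess=state.mess, visible_mess=0)

CLEAN_COST, HIDE_COST = 5, 1   # assumed action costs: hiding is cheaper than cleaning

start = State(mess=10, visible_mess=10)
print("return for cleaning:", proxy_reward(clean(start)) - CLEAN_COST)  # -5
print("return for hiding:  ", proxy_reward(hide(start)) - HIDE_COST)    # -1, so the proxy prefers this
```

Nothing in that failure requires superior cognition; it requires a reward signal that doesn't measure what we meant.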
The real risk is: familiar dysfunction × scale × speed.
A human with a miscalibrated constraint resolution pattern causes damage at human scale, with hours or days to act and limited authority; others can intervene. An AI system with the identical miscalibration operates at millisecond timescales, potentially manages financial markets or power grids, has direct system authority, and causes damage at civilisational scale before anyone notices.
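To make the structure of that multiplication explicit, a purely illustrative sketch, with arbitrary placeholder multipliers:

```python
# Purely structural illustration of "familiar dysfunction x scale x speed".
# All multipliers are arbitrary placeholders; only their relative sizes matter.

dysfunction = 1.0   # the same miscalibration in both cases

human_scale,  human_speed  = 1.0, 1.0     # limited authority, hours or days to act
system_scale, system_speed = 1e4, 1e5     # broad system authority, millisecond timescales

print(f"relative risk, human case:  {dysfunction * human_scale * human_speed:.0e}")
print(f"relative risk, system case: {dysfunction * system_scale * system_speed:.0e}")
# The difference comes entirely from the equipment terms; the dysfunction term is unchanged.
```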
That's bad. It's almost terrifying. And it has nothing to do with the system being "smarter" than us in some transcendent sense.
Why The Terminology Is Important
"Superintelligence" implies something that thinks in ways fundamentally inaccessible to us, requiring responses to *incomprehensible superiority*.
"Super-equipped intelligence" describes something using the same cognitive architecture with dramatically enhanced resources and speed thus requiring responses to *familiar dysfunction at civilisational scale*.
These are different problems requiring different solutions.
If we're facing incomprehensible superiority, our situation is potentially hopeless. We can't outthink something that thinks thoughts we can't conceive. The best we can do is hope it's friendly or try to constrain it before it gets too powerful.
If we're facing familiar dysfunction amplified, we can actually work on the problem. We can study how constraint resolution goes wrong, develop protocols for healthy calibration, build monitoring systems that catch dysfunction before it scales. The dysfunction is *familiar*. We understand it in humans. We can learn to understand it in artificial systems.
The superintelligence framing encourages fatalism. The super-equipped framing enables engineering.
Objections
"Your eight modes are just a model. What if there's a ninth you haven't discovered?"
Fair. The framework could be wrong. But the burden of proof is on those claiming qualitative transcendence. What would a ninth mode even look like? It would need to be a way of resolving coherence under constraint that isn't reallocation, reinterpretation, restructuring, or reorganisation, applied to resource, informational, or structural constraints. I genuinely can't conceive what that would be, and "I can't conceive it but it might exist" isn't much of an argument.
"Speed differences can produce qualitative changes. Water becomes steam."
Phase transitions produce new behaviours, not new physics. Steam follows the same thermodynamic laws as liquid water; the transition changes which states are accessible, not the underlying mechanisms. Similarly, dramatic scaling might enable capabilities impossible at smaller scales (exhaustively exploring solution spaces biological systems could never traverse), but this doesn't require new cognitive operations. The crane accessing heights no human could reach doesn't thereby perform a different kind of lifting.
"Digital systems can merge knowledge instantly across copies. That's a capability humans don't have."
True, and practically significant. But it's still the same eight modes operating across distributed instances. The architecture isn't different; the deployment is. Multiple humans collaborating can pool knowledge too; they'd just be slower and have more friction. The digital version is super-equipped collaboration, not transcendent cognition.
What I'm Not Claiming
I'm not claiming current systems are optimally calibrated. They're certainly not.
I'm not claiming optimal calibration is easy to achieve. It might be extremely difficult.
I'm not claiming AI systems are safe. Systems with poor calibration operating at speed with broad authority are dangerous.
I'm not predicting timelines, capabilities, or outcomes.
I'm claiming only this: intelligence, properly understood, has a ceiling defined by the completeness of constraint resolution modes. What lies beyond that ceiling is not superior intelligence but something else entirely.
The Practical Upshot
If this analysis is correct, the terminology shift from "superintelligence" to "super-equipped intelligence" would:
1. Clarify discourse by distinguishing capability from architecture
2. Align expectations with mechanism by focusing on what systems actually do
3. Enable practical safety work by framing risks as familiar dysfunctions we can study and address
4. Reduce fatalism by showing the problem is hard but not hopeless.
Hinton's concerns are legitimate. Legg's observations about hardware differences are accurate. The risks are real. But the framing misleads, and the terminology mystifies what could be understood clearly.
The "super" is in the equipment, not the intelligence. Getting that right matters for everything that follows.
------
The full paper with framework development is here. I'd be interested in counterarguments, particularly if you can articulate what a ninth constraint resolution mode would be.