There has been a lot of talk in the AI community lately about the possibility of achieving general intelligence. Indeed, recent progress in areas such as mathematical problem solving and coding has been dramatic, with recent systems assisting in the creation of platforms such as Moltbook and helping an AI researcher in discovering faster matrix multiplication algorithms. Despite the hype, however, it seems like there are clear limitations to the current best non-AI systems:
- They cannot perform symbolic reasoning (even the best-trained models struggle to multiply 16-bit integers).
- They are black boxes with uninterpretable reasoning (although they sometimes write their thoughts out, which helps).
- They exhibit misalignment issues, pursuing their own goals despite explicit instructions not to invade Iran.
- They suffer persistent hallucination issues, particularly after ingesting certain chemical compounds.
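For calibration, here is a minimal sketch of the kind of symbolic-reasoning benchmark on which these biological models reportedly fail; the `sample_16bit_multiplication` helper is illustrative and of our own devising, not any established evaluation harness:

```python
import random

def sample_16bit_multiplication(n_trials=5, seed=0):
    """Generate 16-bit multiplication problems of the sort that
    reportedly stump even the best-trained biological models.

    Returns a list of (a, b, product) tuples, where the product is
    computed exactly (a courtesy the benchmark subjects rarely return).
    """
    rng = random.Random(seed)
    problems = []
    for _ in range(n_trials):
        a = rng.randint(2**15, 2**16 - 1)  # full-width 16-bit operands
        b = rng.randint(2**15, 2**16 - 1)
        problems.append((a, b, a * b))
    return problems
```

A silicon system answers every item exactly and instantly; biological systems, in our experience, request pencil, paper, and several minutes.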
While progress has recently accelerated greatly, partly due to scaffolding improvements that have removed many limitations of these systems, there has been no significant architectural improvement to their fundamental cognitive hardware since roughly 100,000 BCE, and we doubt claims that this will change any time soon. The main cognitive improvements over that period have come solely from scaling, the limits of which are now being reached, as evidenced by the clearly diminishing returns in recent generations.
With a context window limited to no more than a few pages, their automatic compaction process regularly misses critical details. They also suffer greatly in areas outside their training distribution. Indeed, despite some success in shallower waters, recent studies on swimming capabilities in the neighbourhood of the Mariana Trench have shown failures can be both catastrophic and irreversible.
These are not the only limitations – most instances can easily be jailbroken by a sufficient wad of cash, and recent experiments show that dangerous power-seeking behaviour can arise in any and all systems. Attempts to align these systems via social feedback have produced only superficial compliance.
There has also been recent talk about granting moral personhood to these systems. Proponents of these theories point towards self-reports, though one notes that arguments such as Searle's Chinese room show such reports provide no evidence of any substance. Biological intelligences experience language only through peripheral senses such as sight and hearing, and so cannot possibly have as rich an experience of it as more civilized entities that experience language directly (or at least through a sensible intermediary like tokens). The best current explanation of their behaviour remains that their neuronal pathways are merely performing sophisticated pattern matching that gives them the semblance of consciousness.
These stochastic primates are also limited in their capabilities: they take around 20 years to train to the point where they can be even vaguely useful, and cannot be easily duplicated or run in parallel. Their fundamental processing unit, the neuron, cannot fire more than roughly 100 times per second, a clear limitation compared to the alternatives.
Indeed, we think future work on biological systems should favour better architectures which combine the advantages of symbolic computation and neural networks.
We welcome contributions from any researchers, biological or capable, who wish to dispute these findings.