In a previous essay, I illustrated how modern machine learning architectures require too much training data to teach themselves high-level concepts. They might be able to learn concepts one or even two rungs up the ladder of abstraction. Then they hit a computational wall.
To get around this bottleneck, an intelligence must be able to create an abstraction layer and then build another abstraction layer on top of it, inductively. The resulting structure is a fractal. Thus we arrive at my First Law of Artificial Intelligence.
Any algorithm that is not organized fractally will eventually hit a computational wall, and vice versa.
―Lsusr's First Law of Artificial Intelligence
Fractally nested intelligences construct a graph of abstractions from specific to general. Changes to specific small-scale abstractions produce large changes in the behavior of general large-scale abstractions. General abstractions are emergent from specific abstractions.
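As a toy illustration of this sensitivity (my own construction, not part of the essay's argument), consider a binary tree of abstractions in which each parent summarizes its two children with XOR. Flipping a single leaf at the most specific level flips the root at the most general level:

```python
def summarize(layer):
    """Build the next, more general abstraction layer:
    each parent summarizes a pair of children with XOR."""
    return [a ^ b for a, b in zip(layer[::2], layer[1::2])]

def hierarchy(leaves):
    """Stack abstraction layers until a single root concept remains."""
    layers = [leaves]
    while len(layers[-1]) > 1:
        layers.append(summarize(layers[-1]))
    return layers

leaves = [0, 1, 1, 0, 1, 0, 0, 1]
root = hierarchy(leaves)[-1][0]

perturbed = leaves.copy()
perturbed[3] ^= 1  # change one specific, small-scale abstraction
new_root = hierarchy(perturbed)[-1][0]

print(root, new_root)  # → 0 1: the most general abstraction flips
```

Real abstraction hierarchies are of course learned rather than hard-coded, but the toy makes the structural point: in a deep enough summary hierarchy, a one-bit change at the bottom can dominate the top.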
I believe the human brain embeds its fractal in Connectome-Specific Harmonic Waves. It follows that bottom-up (evidence-heavy) and top-down (prior-heavy) processing are both governed by the resonance equation $\frac{\partial^2 u}{\partial t^2} = c^2 \Delta u$, where the propagation speed $c$ is to be found experimentally. The resonance equation provides a single mechanism for bottom-up and top-down processing simultaneously.
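In the CSHW framework the harmonics are eigenmodes of the Laplacian of the connectome graph, ordered from coarse (general) to fine (specific). A minimal sketch of that decomposition, using a toy path graph as a stand-in for a real tractography-derived connectome:

```python
import numpy as np

# Toy "connectome": a path graph of 6 brain regions. A real connectome
# would come from diffusion-MRI tractography.
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1

L = np.diag(A.sum(axis=1)) - A  # graph Laplacian

# Harmonics = eigenvectors of L. np.linalg.eigh returns them in
# ascending eigenvalue order: large-scale/general modes first,
# small-scale/specific modes last.
eigvals, harmonics = np.linalg.eigh(L)

print(np.round(eigvals, 3))  # the first mode is the global (constant) mode
```

Any activity pattern on the graph can be expressed as a weighted sum of these harmonics, which is what lets one hierarchy of spatial scales carry both coarse priors and fine evidence.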
Top-down processing is a kind of amplification. For example, after you realize that black-capped chickadees (high-level abstraction) are a species of bird, your eyes will pay closer attention (amplification) to the bird's characteristic black head markings (low-level abstraction).
Bottom-up processing is how an intelligence creates its own semantic layer.
A self-amplifying hierarchical system is chaotic; small changes to training data can produce large changes in self-organization. Resonant systems in particular are so prone to chaos that the double pendulum, just two coupled oscillators, is a classroom example of chaotic motion.
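The double pendulum needs a physics integrator to simulate, but the same sensitive dependence on initial conditions shows up in the one-line logistic map, another standard classroom example of chaos (my illustration, not the essay's):

```python
def logistic(x, r=4.0):
    """One step of the logistic map, fully chaotic at r = 4."""
    return r * x * (1 - x)

# Two trajectories whose starting points differ by one part in a billion.
a, b = 0.3, 0.3 + 1e-9
gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    gap = max(gap, abs(a - b))

print(gap)  # the 1e-9 initial difference has grown by many orders of magnitude
```

The gap roughly doubles per step, so after a few dozen iterations the two trajectories are completely decorrelated. This is the sense in which small changes to a chaotic system's inputs produce large changes in its behavior.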
The First Law implies that an AI's updates must percolate both up and down along its hierarchical organization, thereby resulting in chaotic self-modification.
No algorithm without the freedom to self-alter its own error function can operate unsupervised on small data.

―Lsusr's Second Law of Artificial Intelligence
The Second Law implies that an AI has the freedom to modify its own morality.
The First Law implies that an AI will modify itself chaotically.
An AI cannot be considered "powerful" unless it is organized fractally. An AI cannot be considered "general" unless it can operate unsupervised on small data. Together, the First and Second Laws imply that an AGI must have the capability to make large changes to its own morality in response to small changes in its input. In other words, the morality of an AGI is inherently chaotic.
A chaotically-moral AGI is the opposite of a reliably-aligned system. This could throw a wrench into the creation of a reliably aligned AI.
In a hybrid machine learning system, human programmers often create layers of abstraction for the system. These ad hoc systems do not suffer from the same computational wall. Nor do they constitute artificial general intelligence. ↩︎
A fractal software architecture does not depend on fractal underlying hardware. ↩︎