The goal of this post is to cover some thoughts about whether and how the symbolic and emergence-based approaches to replicating human intelligence should be combined.
I am of the opinion that humans define intelligence mostly as similarity to human intelligence, so the path toward widely recognized intelligence, or AGI, is to replicate human intelligence.
In Section 1, I cover some thoughts on approaches to replicating intelligent behavior and their synthesis.
In Section 2, I briefly discuss the view that replicating human intelligence is simply an instrumental goal but nevertheless retains importance.
Note: I make liberal generalizations in this short piece to get the ideas across without dwelling on details.
1. How Can We Replicate Human Intelligence Exactly?
Modern practitioners of deep learning, including myself, have a bias toward systems composed of simple components (e.g., neurons) trained via simple processes (e.g., gradients), from which intelligence emerges without direct human intervention to inject structure into the system (e.g., modules for specific skills, or expert systems). However, starting a system with just an environment as data and expecting it to develop human-level intelligence essentially entails simulating the entire process of evolving the human brain, starting from a single-cell organism or a large mess of components that initially do nothing. The disadvantage is that we might be forced to learn too much: replicating from scratch most of what evolution has already learned may be implausible.
Some people who entered the field much earlier prefer systems constructed from human observations of how the mind works, attempting to specify its capabilities, including those acquired through life experience, directly in source code. This approach essentially tries to enumerate everything learned through both evolution and life experience.
Perhaps the best path forward is a balance between these extremes: specify the parts of evolution and lived experience that can be captured by simple rules, and attempt to learn the rest. For example, we might specify the visual, spatial, and reasoning regions of the brain, which we know to exist, but learn the specific concepts of different animals or objects, as these are too numerous to specify. Or perhaps we should specify only what was learned through evolution, such as the existence of cortical structures, while leaving higher-level functions like logical reasoning to be learned.
2. How Can We Replicate the Aspects of Human Intelligence That Are Useful to Us?
Understanding how to replicate human intelligence as precisely as possible could inform this question, even though the end goals differ. It's possible that human intelligence is inherently valuable, and that replicating it aligns closely with creating highly useful capabilities.
To optimize for usefulness while still leveraging evolutionary insights, perhaps we should specify only the physical aspects learned through evolution, while ignoring biases such as linguistic predispositions. We might also consider relaxing constraints on what we choose to specify.
It seems that many hybrid approaches attempt to specify not the physical aspects of the human brain but rather aspects learned from experience during a human's lifetime: baking in capabilities such as reflection, or even specific behaviors such as refinement. The trend suggests that advances gained through this strategy will sooner or later be outstripped by end-to-end learned systems.