(This is my fourth daily post on LessWrong, a summary of what was said yesterday at a regular Meetup on AI Safety I host in Paris)
A Logician, an Entrepreneur, and a Hacker walk into my place.
I naturally ask them: "How would you generate General Intelligence?"
Whole Brain Emulations
Entrepreneur - "Well, I am currently reading Self Comes to Mind by Antonio Damasio, and it seems that we have the ability to understand the function of certain groups of neurons. If we come to understand better how brains work, we might, after scanning brains, run brain emulations informed by that knowledge. However, our brains are intimately linked to our perception of the world, so we might need to incorporate vision or other senses, and thus add a virtual environment to the emulation."
Logician - "Yes. But in practice, human brains appear to come with some prior knowledge about how the world works. For instance, Chomsky argues, through his theory of language development, that children already have some prior about language, because they are able to generalize from a very limited set of sentences."
Hacker - "Agreed. But how the heck should we implement it? There is a difference between priors being necessary for humans and priors being necessary for intelligence in general."
Entrepreneur - "Indeed, but how would we define the prior here? Shouldn't the algorithm itself count as a prior? And what about the senses? Shouldn't they count as priors too?"
Logician - "And what about Brain Machine Interfaces and what Neuralink is doing to solve the Input/Output problem? Research in this field is advancing at a great pace. They are now capable of identifying the song you hear in your head from Magnetic Resonance Imaging."
Entrepreneur - "Yes, but the real bottleneck here seems to be the ability to process data efficiently, not merely to receive and transmit it."
Hacker - "Then isn't the solution to build a digital layer able to process information much more quickly? It would be part of you, with a relationship to your cortex analogous to the one your cortex has with your limbic system."
Hacker - "More generally, we should ask ourselves whether all optimization processes could in principle lead to General Intelligence. We humans are able to think as a result of evolution's optimization process. Thus, evolution is the only algorithm we know for sure can lead to General Intelligence."
Logician - "Indeed. And mammals are proof that evolution can systematically produce intelligent organisms, although their intelligence would be very different from a human's."
Hacker - "Yes. Another very promising argument for the effectiveness of evolutionary methods is the following: our genome is far smaller, in information content, than our connectome. More precisely, human intelligence is the result of an optimization algorithm processing a string of only a few thousand megabases. Seen from this angle, General Intelligence looks much more accessible."
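(The Hacker's size comparison can be made concrete with a back-of-the-envelope calculation. The figures below are rough, commonly cited estimates that I am assuming for illustration, not numbers from the conversation.)

```python
# Rough comparison of genome vs. connectome information content.
# All figures are approximate, commonly cited estimates (assumptions).

GENOME_BASE_PAIRS = 3.2e9   # ~3.2 billion base pairs in the human genome
BITS_PER_BASE = 2           # 4 possible bases -> 2 bits each
genome_bytes = GENOME_BASE_PAIRS * BITS_PER_BASE / 8

SYNAPSES = 1.5e14           # ~150 trillion synapses (rough estimate)
BYTES_PER_SYNAPSE = 4       # assume one 32-bit value per connection
connectome_bytes = SYNAPSES * BYTES_PER_SYNAPSE

print(f"genome:     ~{genome_bytes / 1e9:.1f} GB")
print(f"connectome: ~{connectome_bytes / 1e12:.0f} TB")
print(f"ratio:      ~{connectome_bytes / genome_bytes:,.0f}x")
```

Under these assumptions the genome fits in under a gigabyte while even a crude connectome encoding runs to hundreds of terabytes: the "program" that evolution searched over is orders of magnitude smaller than the structure it gives rise to.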
Logician - "Agreed. We have seen with AlphaGo that policies can be learned by searching a weight space, which is finite if we consider real values as encoded in a limited number of bits and the network as having a fixed number of neurons. So if we make the analogy with a genetic algorithm, and take the reward from the body/environment as the rules of the game, we have our AlphaGo for General Intelligence."
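(The Logician's analogy — learning as evolutionary search over a fixed-size weight vector, with environmental reward as the fitness function — can be sketched as a toy program. This is a minimal illustration of the idea, not AlphaGo's actual training method, and the target weights and hyperparameters are arbitrary choices for the example.)

```python
# Toy evolutionary search over a fixed-size weight vector.
# Illustrates learning as search in a (finite) weight space; the fitness
# function plays the role of the reward handed back by the environment.
import random

random.seed(0)

TARGET = [0.1, -0.4, 0.7, 0.2]  # stand-in "optimal" weights (arbitrary)

def fitness(weights):
    # Negative squared distance to the target: higher is better, 0 is perfect.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights, sigma=0.1):
    # Small Gaussian perturbation of every weight.
    return [w + random.gauss(0, sigma) for w in weights]

# Random initial population of candidate weight vectors.
population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                       # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]    # reproduction + mutation

best = max(population, key=fitness)
print(fitness(best))  # best fitness found (0 would be perfect)
```

Because the top survivors are carried over unchanged each generation, the best fitness never decreases, and repeated mutation around the survivors gradually closes in on the target — the same selection-plus-variation loop the speakers credit evolution with, compressed into a few lines.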