Reinforcement learning isn't trivial, but it might be kind of modular. Evolution doesn't have to create reinforcement learning from scratch every time; it can re-use the existing structures and just hook in new inputs.
Learning takes a lot of overhead - much less if you can simply be born knowing what you need to know, instinctively. Konrad Lorenz believed that instincts somehow develop out of learned behavior; he looked at pairs of related species where one had an instinctual behavior and the other had to learn the same behavior. It takes so long to acquire instincts by mutation and selection that it seems intuitively obvious learning should come first and then be replaced by instinct over time. But Lorenz didn't propose a mechanism for that to happen, and I don't know of a mechanism either.
We rely on instinct so much less than other animals that I'm tempted to think we haven't spent much time in consistent, predictable environments. Maybe we've always lived by disrupting ecosystems and surviving in the chaos -- a role we would manage better than animals that do less flexible thinking.
"It takes so long to acquire instincts by mutation and selection that it seems intuitively obvious learning should come first and then be replaced by instinct over time. But Lorenz didn't propose a mechanism for that to happen, and I don't know of a mechanism either."
The mechanism seems straightforward: evolution by mutation and natural selection. If a species is in a stable (enough) environment, and the learning process is costly in energy, time, or both (and there will necessarily be some cost), then those who don't have to learn as much will be selected for.
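To make that concrete, here is a minimal sketch in Python with entirely made-up fitness numbers: both variants end up behaving the same way, but the learners pay a cost for the learning process, so in a stable environment the innate variant takes over.

```python
import random

# Toy model with invented numbers: 'innate' and 'learner' individuals
# end up with the same behavior, but learners pay a fitness cost for
# the time and energy spent learning.
FITNESS = {"innate": 1.0, "learner": 0.9}

def next_generation(pop, size=1000):
    # Offspring are sampled in proportion to parental fitness.
    weights = [FITNESS[kind] for kind in pop]
    return random.choices(pop, weights=weights, k=size)

pop = ["learner"] * 990 + ["innate"] * 10  # a rare 'innate' mutant appears
for generation in range(200):
    pop = next_generation(pop)

print(pop.count("innate") / len(pop))  # typically close to 1.0
```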
To put my point a bit differently: I think your argument would disappear if you tried to define "imagine" and "reason" more precisely.
Humans certainly aren't perfect at imagining. In fact, if you ask most people to imagine a heavy object and a much heavier object falling, they will predict that the much heavier object hits first, and I could give a host of other examples of the same thing. So certainly we can't require perfection to count as imagining or reasoning.
Nor do we want to define reasoning in terms of speaking English sentences or anything similar. After all, we can imagine intelligent alien blobs of gas that have no language at all (or at least none that we can understand). Besides, using this definition would just make the claims trivial.
The best approach I can think of for making these terms more precise is to define reasoning ability as the ability to respond in a survival-enhancing way to a diverse set of conditions. But when we try to define it this way, it is far from clear that complexes of genes aren't just as good at reasoning as people are.
Sure, it's weird that something like this might be the case, but it's damn weird that induction works at all, so why not believe it can be captured in a large number of relatively simple heuristics?
"How the heck does a double-stranded molecule that fits inside a cell nucleus, come to embody truths that baffle a whole damn squirrel brain?"
This kind of thing happens all the time, even in distantly related fields like physics. Observe: a dropped brick perfectly obeys the laws of motion in a curved Lorentzian manifold, without a brain or even a computer, while it would take a mere human half an hour to work out the calculations! Surely we should appoint bricks as pre-eminent physics professors.
logicnazi, he is making progress ;-)
"Humans certainly aren't perfect at imagining. In fact, if you ask most people to imagine a heavy object and a much heavier object falling, they will predict that the much heavier object hits first, and I could give a host of other examples of the same thing."
When you ask someone to imagine something, he is controlling his imagination, which is equivalent to conscious thought. What one can think of, however, is controlled by one's beliefs - what is skewed in humans is their beliefs, not their imagination. Once beliefs are controlled by science, one will imagine the falling in a way consistent with the scientific theory forming one's beliefs. I would reword your sentence to: humans do not usually form their beliefs on the basis of science.
A conscious mind on a protein-based computer? You expect me to believe in sentient MEAT!?
As an interesting aside, I believe calorie-counting isn't done in the stomach. Protein content, however, is measured directly at the taste buds - which explains why MSG tastes so good, and makes a light meal seem more filling. At least until digestion has started and the expected barrage of amino acids is somewhat lacking.
"Surely we should appoint bricks as pre-eminent physics professors."
Not exactly. If we want to know how objects fall, though, we shouldn't ask people with the socially-awarded status of authority; we should look at falling things - like bricks.
This was the insight that started the scientific method. If you want to know how things fall, don't read Aristotle; drop things and watch them.
Empiricism, trial and error - these are the most powerful forces for knowledge there are. Reason is excellent, but overrated.
"A conscious mind on a protein-based computer? You expect me to believe in sentient MEAT!?"
"Yes, thinking meat! Conscious meat! Loving meat. Dreaming meat. The meat is the whole deal! Are you beginning to get the picture or do I have to start all over?"
"If we want to know how objects fall, though, we shouldn't ask people with the socially-awarded status of authority, we should look at falling things - like bricks."
But now we don't do much of that; we read physics books and work out the homework. There isn't time to repeat all the experiments. When it comes to supercolliders and the like, it would take years to work out the math and deal with the confounding variables and such.
And no one could do it. It's not just the time needed: the original work was done by many people over decades. NO ONE could repeat it on their own.
Billswift, after thinking it over some, I'm surprised how much one person could do if they knew not to follow any blind alleys. You could do all the experiments needed to support Maxwell's equations in a reasonably short time if someone helped you set up the equipment.
It took a lot of people a long time to do it, but that's because they didn't know what they were doing ahead of time.
I don't know where the limits would come but they might be a lot broader than I first thought.
Sure there's time to do all the experiments - if you already know what to do. There isn't time to learn all of physics again from scratch, which is what the thread was basically talking about.
IMO a fun project (for those like me who like this stuff but are clearly not smart enough to be part of a Singularity-developing team): create an object-based environment with rule-based reproducing agents and customizable, explicit world rules (as in a computer game, not as in physics), and let them evolve. Maybe users across the world could add new magical artifacts and watch the creatures fail hilariously...
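For what it's worth, a bare-bones skeleton of such a simulation might look like this - everything in it (the states, the actions, the payoffs, the mutation scheme) is invented for illustration:

```python
import random

# Invented toy world: each agent carries a rule table mapping an
# observed state to an action; the explicit world rules score each
# action, and can be edited at runtime - like a game, not physics.
WORLD_RULES = {"nut": {"eat": 2, "flee": 0}, "hawk": {"eat": -5, "flee": 1}}
ACTIONS = ["eat", "flee"]

def random_agent():
    return {state: random.choice(ACTIONS) for state in WORLD_RULES}

def act(agent, state):
    # Unknown states fall back on fleeing - so new 'magical
    # artifacts' can be dropped into the world mid-run.
    return agent.get(state, "flee")

def fitness(agent, trials=50):
    states = random.choices(list(WORLD_RULES), k=trials)
    return sum(WORLD_RULES[s][act(agent, s)] for s in states)

def mutate(agent):
    child = dict(agent)
    state = random.choice(list(WORLD_RULES))  # may acquire rules for new states
    child[state] = random.choice(ACTIONS)
    return child

population = [random_agent() for _ in range(100)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(50)]

# Add a state to WORLD_RULES mid-run and watch the evolved agents
# flee it blindly until mutation catches up.
```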
On a more related note, the post sounds ominous for any hope of a general AI. There may be no clear distinction between a protein computer and just protein, between learning and blindly acting. If we and our desires are in such a position, isn't any AI we make indirectly blind as well? Or, as I understand it, Eliezer seems to think that we can (or had better) bootstrap, both for intelligence/computation and for morality. For him, this bootstrapping - this understanding/theory of the generality of our own intelligence (as step one of the bootstrapping) - seems to be the Central Truth of Life (tm). Maybe he's right, but for me, with less insight into intelligence, that's not self-evident. And he hasn't clearly explained this crucial point anywhere, only advocated it. Come on, it's not as if enough people could be seed AI programmers even armed with that Central Truth. (But who knows.)
Evolutionary biologists basically do this, without the interactivity. They create and run computer simulations of rule-based reproducing agents.
Your general point is valid. But you say "no non-primate species," and I know of some non-primate species that can do this task: namely cetaceans, corvids, and octopuses. A slight nitpick, as these are all very smart animals.
Followup to: Evolutionary Psychology
It takes hundreds of generations for a simple beneficial mutation to promote itself to universality in a gene pool. Thousands of generations, or even millions, to create complex interdependent machinery.
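To put a rough number on "hundreds of generations": under simple deterministic selection, a variant with a fitness advantage s grows roughly logistically in frequency. A sketch with invented parameters:

```python
# Deterministic single-locus selection, parameters invented: a new
# variant with fitness 1+s competes against a resident with fitness 1.
s = 0.05    # 5% fitness advantage - a fairly strong beneficial mutation
p = 1e-4    # starting frequency of the new variant

generations = 0
while p < 0.99:
    # Standard replicator update: new frequency = share of total fitness.
    p = p * (1 + s) / (p * (1 + s) + (1 - p))
    generations += 1

print(generations)  # around 280 generations for these numbers
```

Even with a strong 5% advantage, it takes a few hundred generations to go from rare to nearly universal; weaker mutations take proportionally longer, and complex machinery needs many such steps in sequence.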
That's some slow learning there. Let's say you're building a squirrel, and you want the squirrel to know locations for finding nuts. Individual nut trees don't last for the thousands of years required for natural selection. You're going to have to learn using proteins. You're going to have to build a brain.
Protein computers and sensors can learn by looking, much faster than DNA can learn by mutation and selection. And yet (until very recently) the protein learning machines only learned in narrow, specific domains. Squirrel brains learn to find nut trees, but not to build gliders - as flying squirrel DNA is slowly learning to do. The protein computers learned faster than DNA, but much less generally.
How the heck does a double-stranded molecule that fits inside a cell nucleus, come to embody truths that baffle a whole damn squirrel brain?
Consider the high-falutin' abstract thinking that modern evolutionary theorists do in order to understand how adaptations increase inclusive genetic fitness. Reciprocal altruism, evolutionarily stable strategies, deterrence, costly signaling, sexual selection - how many humans explicitly represent this knowledge? Yet DNA can learn it without a protein computer.
There's a long chain of causality whereby a male squirrel, eating a nut today, produces more offspring months later: Chewing and swallowing food, to digesting food, to burning some calories today and turning others into fat, to burning the fat through the winter, to surviving the winter, to mating with a female, to the sperm fertilizing an egg inside the female, to the female giving birth to an offspring that shares 50% of the squirrel's genes.
With the sole exception of humans, no protein brain can imagine chains of causality that long, that abstract, and crossing that many domains. With one exception, no protein brain is even capable of drawing the consequential link from chewing and swallowing to inclusive reproductive fitness.
Yet natural selection exploits links between local actions and distant reproductive benefits. In wide generality, across domains, and through levels of abstraction that confuse some humans. Because - of course - the basic evolutionary idiom works through the actual real-world consequences, avoiding the difficulty of having a brain imagine them.
Naturally, this also misses the efficiency of having a brain imagine consequences. It takes millions of years and billions of dead bodies to build complex machines this way. And if you want to memorize the location of a nut tree, you're out of luck.
Gradually DNA acquired the ability to build protein computers, brains, that could learn small modular facets of reality like the location of nut trees. To call these brains "limited" implies that a speed limit was tacked onto a general learning device, which isn't what happened. It's just that the incremental successes of particular mutations tended to build out into domain-specific nut-tree-mapping programs. (If you know how to program, you can verify for yourself that it's easier to build a nut-tree-mapper than an Artificial General Intelligence.)
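In that spirit, here is roughly how little code a nut-tree-mapper takes (a hypothetical sketch; the coordinates and function names are mine):

```python
# A whole 'nut-tree-mapping program' in a dozen lines. The squirrel
# remembers where food was found and heads for the closest remembered
# spot. Names and coordinates are illustrative.
nut_trees = set()

def remember(location):
    nut_trees.add(location)

def nearest_tree(position):
    if not nut_trees:
        return None
    x, y = position
    # Manhattan distance is good enough for a squirrel.
    return min(nut_trees, key=lambda t: abs(t[0] - x) + abs(t[1] - y))

remember((3, 4))
remember((10, 2))
print(nearest_tree((0, 0)))  # (3, 4)
```

The corresponding exercise for an Artificial General Intelligence is left to the reader.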
One idiom that brain-building DNA seems to have hit on, over and over, is reinforcement learning - repeating policies similar to policies previously rewarded. If a food contains lots of calories and doesn't make you sick, then eat more foods that have similar tastes. This doesn't require a brain that visualizes the whole chain of digestive causality.
Reinforcement learning isn't trivial: You've got to chop up taste space into neighborhoods of similarity, and stick a sensor in the stomach to detect calories or indigestion, and do some kind of long-term-potentiation that strengthens the eating impulse. But it seems much easier for evolution to hit on reinforcement learning, than a brain that accurately visualizes the digestive system, let alone a brain that accurately visualizes the reproductive consequences N months later.
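A cartoon of that idiom, with every number and name invented: carve taste space into neighborhoods of similarity, read a reward off a stomach sensor, and strengthen the eating impulse for similar tastes. Note that nothing in it models digestion itself:

```python
import math

# Cartoon reinforcement learner, every detail invented. Tastes are
# points in a one-dimensional 'taste space'; reward generalizes to
# similar tastes through a Gaussian similarity kernel.
impulse = {}   # taste -> learned strength of the eating impulse

def similarity(a, b):
    return math.exp(-(a - b) ** 2)

def stomach_reward(calories, sick):
    # The 'sensor in the stomach': calories good, indigestion bad.
    return -1.0 if sick else calories

def eat(taste, calories, sick=False):
    reward = stomach_reward(calories, sick)
    # Long-term-potentiation cartoon: strengthen (or weaken) the
    # impulse for this taste and its neighborhood of similar tastes.
    for t in set(impulse) | {taste}:
        impulse[t] = impulse.get(t, 0.0) + 0.5 * similarity(t, taste) * reward

eat(taste=0.9, calories=2.0)              # ripe fruit: rewarding
eat(taste=1.0, calories=0.0, sick=True)   # similar taste, made us sick
print(impulse)  # the sickness partly undoes the learned attraction
```

Nothing here models the digestive system, let alone reproduction N months later; the only "belief" about calories lives in the choice of reward signal.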
(This efficiency does come at a price: If the environment changes, making food no longer scarce and famines improbable, the organisms may go on eating food until they explode.)
Similarly, a bird doesn't have to cognitively model the airflow over its wings. It just has to track which wing-flapping policies cause it to lurch.
Why not learn to like food based on reproductive success, so that you'll stop liking the taste of candy if it stops leading to reproductive success? Why don't birds wait and see which wing-flapping policies result in more eggs, not just more stability?
Because it takes too long. Reinforcement learning still requires you to wait for the detected consequences before you learn.
Now, if a protein brain could imagine the consequences, accurately, it wouldn't need a reinforcement sensor that waited for them to actually happen.
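The contrast can be made concrete with a toy sketch (the "digestion model" here is invented): a model-free learner has no opinion about a food it has never eaten, while a model-based learner can evaluate it by imagining the consequence.

```python
# Toy contrast, all details invented: evaluating a never-eaten food.
digestion_model = {"sweet": 2.0, "bitter": -1.0}   # an *imagined* stomach

def model_free_value(taste, history):
    # Reinforcement learning: must actually eat and wait for the signal.
    return history.get(taste)

def model_based_value(taste):
    # Imagination: predict the consequence without waiting for it.
    return digestion_model.get(taste)

history = {}   # we have never eaten anything yet
print(model_free_value("sweet", history))  # None - no consequence observed
print(model_based_value("sweet"))          # 2.0 - imagined, not awaited
```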
Put a food reward in a transparent box. Put the corresponding key, which looks unique and uniquely corresponds to that box, in another transparent box. Put the key to that box in another box. Do this with five boxes. Mix in another sequence of five boxes that doesn't lead to a food reward. Then offer a choice of two keys, one which starts the sequence of five boxes leading to food, one which starts the sequence leading nowhere.
Chimpanzees can learn to do this. (Dohl 1970.) So consequentialist reasoning, backward chaining from goal to action, is not strictly limited to Homo sapiens.
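The box task has a crisp computational skeleton - chain backward from the goal through "which key opens this box" links until you reach a key you can actually pick up. A sketch, with the representation invented for illustration:

```python
# The five-box task, representation invented: each key is either on
# offer or visibly locked inside some box; each box is opened by
# exactly one key.
opened_by = {"foodbox": "key5", "box5": "key4", "box4": "key3",
             "box3": "key2", "box2": "key1"}
found_in = {"key5": "box5", "key4": "box4", "key3": "box3",
            "key2": "box2"}   # key1 is one of the keys on offer

def first_key(goal="foodbox"):
    # Backward chaining: start from the goal and follow 'this box
    # needs that key, that key is in this other box' links until
    # reaching a key that isn't locked up.
    key = opened_by[goal]
    while key in found_in:
        key = opened_by[found_in[key]]
    return key

print(first_key())  # 'key1' - the choice that starts the food chain
```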
But as far as I know, no non-primate species can pull that trick. And working with a few transparent boxes is nothing compared to the kind of high-falutin' cross-domain reasoning you would need to causally link food to inclusive fitness. (Never mind linking reciprocal altruism to inclusive fitness). Reinforcement learning seems to evolve a lot more easily.
When natural selection builds a digestible-calorie-sensor linked by reinforcement learning to taste, then the DNA itself embodies the implicit belief that calories lead to reproduction. So the long-term, complicated, cross-domain, distant link from calories to reproduction, is learned by natural selection - it's implicit in the reinforcement learning mechanism that uses calories as a reward signal.
Only short-term consequences, which the protein brains can quickly observe and easily learn from, get hooked up to protein learning. The DNA builds a protein computer that seeks calories, rather than, say, chewiness. Then the protein computer learns which tastes are caloric. (Oversimplified, I know. Lots of inductive hints embedded in this machinery.)
But the DNA had better hope that its protein computer never ends up in an environment where calories are bad for it... or where sexual pleasure stops correlating to reproduction... or where there are marketers that intelligently reverse-engineer reward signals...