A classic paper by Drew McDermott, “Artificial Intelligence Meets Natural Stupidity,” criticized AI programs that would try to represent notions like “happiness is a state of mind” using a semantic network:

IS-A(HAPPINESS, STATE-OF-MIND)

And of course there’s nothing inside the HAPPINESS node; it’s just a naked LISP token with a suggestive English name.
So, McDermott says, “A good test for the disciplined programmer is to try using gensyms in key places and see if he still admires his system. For example, if STATE-OF-MIND is renamed G1073. . .” then we would have IS-A(HAPPINESS, G1073) “which looks much more dubious.”
Or as I would slightly rephrase the idea: If you substituted randomized symbols for all the suggestive English names, you would be completely unable to figure out what G1071(G1072, G1073) meant. Was the AI program meant to represent hamburgers? Apples? Happiness? Who knows? If you delete the suggestive English names, they don’t grow back.
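McDermott’s gensym test is easy to mechanize. Here is a minimal sketch (the toy network and the renaming helper are my own invention, not McDermott’s code): strip every suggestive name out of a semantic network and see whether the assertions still mean anything.

```python
import itertools

# A toy semantic network with suggestive English names.
network = [("IS-A", "HAPPINESS", "STATE-OF-MIND"),
           ("IS-A", "BEAVER", "ANIMAL")]

def gensym_rename(triples):
    """Replace every symbol with an opaque gensym, as McDermott suggests."""
    counter = itertools.count(1071)
    table = {}
    def rename(sym):
        if sym not in table:
            table[sym] = f"G{next(counter)}"
        return table[sym]
    return [tuple(rename(s) for s in t) for t in triples]

print(gensym_rename(network))
# [('G1071', 'G1072', 'G1073'), ('G1071', 'G1074', 'G1075')]
```

After renaming, both assertions are structurally identical nonsense: nothing in the program distinguishes happiness from beavers, because all the meaning lived in the English labels.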
Suppose a physicist tells you that “Light is waves,” and you believe the physicist. You now have a little network in your head that says:
IS-A(LIGHT, WAVES)
As McDermott says, “The whole problem is getting the hearer to notice what it has been told. Not ‘understand,’ but ‘notice.’ ” Suppose that instead the physicist told you, “Light is made of little curvy things.”1 Would you notice any difference of anticipated experience?
How can you realize that you shouldn’t trust your seeming knowledge that “light is waves”? One test you could apply is asking, “Could I regenerate his knowledge if it were somehow deleted from my mind?”
This is similar in spirit to scrambling the names of suggestively named LISP tokens in your AI program, and seeing if someone else can figure out what they allegedly “refer” to. It’s also similar in spirit to observing that an Artificial Arithmetician programmed to record and play back
Plus-Of(Seven, Six) = Thirteen
can’t regenerate the knowledge if you delete it from memory, until another human re-enters it in the database. Just as if you forgot that “light is waves,” you couldn’t get back the knowledge except the same way you got the knowledge to begin with—by asking a physicist. You couldn’t generate the knowledge for yourself, the way that physicists originally generated it.
The same experiences that lead us to formulate a belief, connect that belief to other knowledge and sensory input and motor output. If you see a beaver chewing a log, then you know what this thing-that-chews-through-logs looks like, and you will be able to recognize it on future occasions whether it is called a “beaver” or not. But if you acquire your beliefs about beavers by someone else telling you facts about “beavers,” you may not be able to recognize a beaver when you see one.
This is the terrible danger of trying to tell an artificial intelligence facts that it could not learn for itself. It is also the terrible danger of trying to tell someone about physics that they cannot verify for themselves. For what physicists mean by “wave” is not “little squiggly thing” but a purely mathematical concept.
As Donald Davidson observes, if you believe that “beavers” live in deserts, are pure white in color, and weigh 300 pounds when adult, then you do not have any beliefs about beavers, true or false. Your belief about “beavers” is not right enough to be wrong.2 If you don’t have enough experience to regenerate beliefs when they are deleted, then do you have enough experience to connect that belief to anything at all? Wittgenstein: “A wheel that can be turned though nothing else moves with it, is not part of the mechanism.”
Almost as soon as I started reading about AI—even before I read McDermott—I realized it would be a really good idea to always ask myself: “How would I regenerate this knowledge if it were deleted from my mind?”
The deeper the deletion, the stricter the test. If all proofs of the Pythagorean Theorem were deleted from my mind, could I re-prove it? I think so. If all knowledge of the Pythagorean Theorem were deleted from my mind, would I notice the Pythagorean Theorem to re-prove? That’s harder to boast, without putting it to the test; but if you handed me a right triangle with sides of length 3 and 4, and told me that the length of the hypotenuse was calculable, I think I would be able to calculate it, if I still knew all the rest of my math.
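The 3-4-5 calculation above is just the standard theorem, which any of the surviving math would let you reconstruct:

```python
import math

def hypotenuse(a, b):
    # Pythagorean Theorem: c**2 == a**2 + b**2
    return math.sqrt(a**2 + b**2)

print(hypotenuse(3, 4))  # 5.0
```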
What about the notion of mathematical proof? If no one had ever told it to me, would I be able to reinvent that on the basis of other beliefs I possess? There was a time when humanity did not have such a concept. Someone must have invented it. What was it that they noticed? Would I notice if I saw something equally novel and equally important? Would I be able to think that far outside the box?
How much of your knowledge could you regenerate? From how deep a deletion? It’s not just a test to cast out insufficiently connected beliefs. It’s a way of absorbing a fountain of knowledge, not just one fact.
A shepherd builds a counting system that works by throwing a pebble into a bucket whenever a sheep leaves the fold, and taking a pebble out whenever a sheep returns. If you, the apprentice, do not understand this system—if it is magic that works for no apparent reason—then you will not know what to do if you accidentally drop an extra pebble into the bucket. That which you cannot make yourself, you cannot remake when the situation calls for it. You cannot go back to the source, tweak one of the parameter settings, and regenerate the output, without the source. If “two plus four equals six” is a brute fact unto you, and then one of the elements changes to “five,” how are you to know that “two plus five equals seven” when you were simply told that “two plus four equals six”?
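The shepherd’s system can be sketched as code (the class and method names are my own framing of the story):

```python
class PebbleBucket:
    """Tracks sheep by keeping one pebble per sheep currently outside the fold."""

    def __init__(self):
        self.pebbles = 0

    def sheep_leaves(self):
        self.pebbles += 1   # toss a pebble into the bucket

    def sheep_returns(self):
        self.pebbles -= 1   # take a pebble out

    def all_sheep_home(self):
        # Understanding the correspondence tells you what an empty bucket
        # means -- and what to do if an extra pebble is dropped in by accident.
        return self.pebbles == 0

bucket = PebbleBucket()
bucket.sheep_leaves()
bucket.sheep_leaves()
bucket.sheep_returns()
print(bucket.pebbles)           # 1
print(bucket.all_sheep_home())  # False
```

The apprentice who only memorizes the ritual has the table of outputs; the apprentice who grasps the pebble-sheep correspondence has the source, and knows that an accidental extra pebble means simply removing one.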
If you see a small plant that drops a seed whenever a bird passes it, it will not occur to you that you can use this plant to partially automate the sheep-counter. Though you learned something that the original maker would use to improve on their invention, you can’t go back to the source and re-create it.
When you contain the source of a thought, that thought can change along with you as you acquire new knowledge and new skills. When you contain the source of a thought, it becomes truly a part of you and grows along with you.
Strive to make yourself the source of every thought worth thinking. If the thought originally came from outside, make sure it comes from inside as well. Continually ask yourself: “How would I regenerate the thought if it were deleted?” When you have an answer, imagine that knowledge being deleted as well. And when you find a fountain, see what else it can pour.
1 Not true, by the way.
2 Richard Rorty, “Out of the Matrix: How the Late Philosopher Donald Davidson Showed That Reality Can’t Be an Illusion,” The Boston Globe, 2003, http://archive.boston.com/news/globe/ideas/articles/2003/10/05/out_of_the_matrix/.
I feel really stupid after reading this, so thanks a lot for shedding light onto the vast canvas of my ignorance.
I have almost no idea which of the spinning gears in my head I could regrow on my own. I'm close to being mathematically illiterate, due to bad teaching and what appears to be a personal aversion or slight inability - so I may have come up with the bucket-plus-pebble method and perhaps with addition, subtraction, division and possibly multiplication - but other than that I'd be lost. I'd probably never conceive of the idea of a tidy decimal system, or that it may be helpful to keep track of the number zero.
Non-mathematical concepts, on the other hand, may be easier to regrow in some instances. Atheism, for example, seems easy to regrow if you merely have decent people-intuition, a certain willingness to go against the grain (or at least think against the grain), plus a deeply rooted aversion to hypocrisy. Once you notice how full of s*it people are (and notice that you yourself seem to share their tendencies), it's a fairly small leap of (non)faith, which would explain why so many people seem to arrive at atheism all due to their own observations and reasoning.
I think I could also regrow the concept of evolution if I spent enough time around different animals to notice their similarities and if I were familiar with animal breeding - but it may realistically take at least a decade of being genuinely puzzled about their origin and relation to one another (without giving in to the temptation of employing a curiosity stopper, needless to say). Also, having a rough concept of how incredibly old the earth is, and of the fact that even landscapes and mountains shift their shape over time, would have helped immensely.
It feels so hard to understand why it took almost 10,000 years for two human brains to make a spark and come up with the concept of evolution. How did smart and curious people who tended to animals for a living, and who knew about the intricacies of artificial breeding, not see the slightly unintuitive but nonetheless simple implications of what they were doing there?
Was it seriously just the fault of the all-purpose curiosity stopper of superstition, or was it some other deeply ingrained human bias? It's unbelievable how long no one realized what life actually is all about. And then all of a sudden two people caught the right spark at the same point in history, independently of each other. So apparently biologists needed to be impacted by many vital ideas (geological time, economics) to come up with something that a really sharp and observant person could have realistically figured out 10,000 years earlier.
And who knows, maybe some people thought of it much earlier and left no trace due to illiteracy, or fear of losing their social status or even their lives. Come to think of it, most people in most places during most of the past would have gotten their brilliant heads put on sticks if they had actually voiced the unthinkable truth and dared to deflate the ever-needy, morbidly obese ego of Homo sapiens sapiens.
Just because you aren't aware of it, doesn't mean it didn't happen : )