Artificial Addition



128 comments

Some comments are truncated due to high volume.

Well, shooting randomly at a distant target is more likely to produce a bulls-eye than not shooting at all, even though you're almost certainly going to miss (and probably shoot yourself in the foot while you're at it). It's probably better to try to find a way to take off that blindfold. As you suggest, we don't yet understand intelligence, so there's no way we're going to make an intelligent machine without either significantly improving our understanding or winning the proverbial lottery.

"Programming is the art of figuring out what you want so precisely that even a machine can do it." - Some guy who isn't famous

Well, shooting randomly is perhaps a bad idea, but I think the best we can do is shoot systematically, which is hardly better (it takes exponentially many bullets). So you either have to be lucky, or hope the target isn't very far, so you don't need a wide cone to take pot shots at, or hope P=NP.


quadratically many, actually.
EDIT: well, in the case of actual shooting at least.


This keeps coming up; is there somewhere this is explained in detail? Also, have possible solutions been looked at, such as constructing the AI in a controlled environment? If so, why wouldn't any of them work?
Thanks to whoever responds.


Try "The Two Faces of Tomorrow", by James P. Hogan. Fictional evidence, to be sure, but well thought out fiction that demonstrates the problem well.

Eliezer,

Did you include your own answer to the question of why AI hasn't arrived yet in the list? :-)

This is a nice post. Another way of stating the moral might be: "If you want to understand something, you have to stare your confusion right in the face; don't look away for a second."

So, what is confusing about intelligence? That question is problematic: a better one might be "what isn't confusing about intelligence?"

Here's one thing I've pondered at some length. The VC theory states that in order to generalize well a learning machine m...


This is what some philosophers have proposed; others have thought we start as a blank slate. Research into the subject has shown that babies do start with some sort of working model of things. That is, we begin life with a set of preset preferences, the ability to distinguish those preferences, and a basic understanding of geometric shapes.


It would be shocking if we didn't have preset functions. Calves, for example, can walk almost straight away, and swim not much later. We aren't going to have entirely eliminated the mammalian ability to start with a set of preset features; there just isn't enough pressure not to keep a few of them.


If you put a newborn whose mother had an unmedicated labor on the mother's stomach, the baby will move up to a breast and start to feed.


Good point. Drink (food), breathe, scream, and a couple of cute reactions to keep caretakers interested. All you need to bootstrap a human growth process. There seems to be something built in about eye-contact management too, because a lack of it is an early indicator that something is wrong.


Not terribly relevant to your point, but it's likely human sense of cuteness is based on what babies do rather than the other way around.


I'd replace "human" with "mammalian" -- most young mammals share a similar set of traits, even those that aren't constrained as we are by big brains and a pelvic girdle adapted to walking upright. That seems to suggest a more basal cuteness response; I believe the biology term is "baby schema".
Other than that, yeah.


Conversely, studies with newborn mammals have shown that if you deprive them of something as simple as horizontal lines, they will grow up unable to distinguish lines that approach 'horizontalness'. So even separating the most basic evolved behavior from the most basic learned behavior is not intuitive.


The deprivation you're talking about takes place over the course of days and weeks -- it reflects the effects of (lack of) reinforcement learning, so it's not really germane to a discussion of preset functions that manifest in the first few minutes after birth.


It's relevant insofar as we shouldn't make assumptions on what is and is not preset simply based on observations that take place in a "typical" environment.


Ah, a negative example. Fair point. Guess I wasn't paying enough attention and missed the signal you meant to send by using "conversely" as the first word of your comment.


That was lazy of me, in retrospect. I find that often I'm poorer at communicating my intent than I assume I am.


Illusion of transparency strikes again!


Artificial Neural Networks have been trained with millions of parameters. There are a lot of different methods of regularization like dropconnect or sparsity constraints. But the brain does online learning. Overfitting isn't as big of a concern because it doesn't see the data more than once.


On the other hand, architecture matters. The most successful neural network for a given task has connections designed for the structure of that task, so that it will learn much more quickly than a fully-connected or arbitrarily connected network.
The human brain appears to have a great deal of information and structure in its architecture right off the bat.


The human brain appears to engage in hierarchical learning, which is what allows it to leverage huge amounts of "general case" abstract knowledge in attacking novel specific problems put before it.


I'm not saying that you're wrong, but the state of the art in computer vision is weight sharing, which biological NNs probably can't do. Hyperparameters, like the number of layers and how local the connections should be, are important, but they don't give that much prior information about the task.
I may be completely wrong, but I do suspect that biological NNs are far more general-purpose and less "pre-programmed" than is usually thought. The learning rules for a neural network are far simpler than the functions they learn. Training neural networks with genetic algorithms is extremely slow.


The architecture of the V1 and V2 areas of the brain, which Convolutional Neural Networks and other ANNs for vision borrow heavily from, is highly geared towards vision, and includes basic filters that detect stripes, dots, corners, etc. that appear in all sorts of computer vision work. Yes, no backpropagation or weight-sharing is directly responsible for this, but the presence of local filters is still what I would call very specific architecture (I've studied computer vision, and the inspiration it draws from early vision specifically, so I can say more about this).
The way genetic algorithms tune weights in an ANN (and yes, this is an awful way to train an ANN) is very different from the way they work in actually evolving a brain, where they operate on the genetic code that develops the brain. I'd say they are so wildly different that no conclusions from the first can be applied to the second.
During a single individual's life, Hebbian and other learning mechanisms in the brain are distinct from gradient learning, but can achieve somewhat similar things.
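To make the "local filter" idea concrete, here is a minimal one-dimensional Python analogue of the kind of edge-detecting filter found in early vision and in convolutional networks (purely illustrative; the signal and function names are invented for this sketch):

```python
# Correlate a signal with a [-1, +1] kernel, a 1-D analogue of a
# V1-style edge filter: large responses mark intensity edges.
signal = [0, 0, 0, 1, 1, 1, 0, 0]

def edge_response(sig):
    # difference of adjacent samples = correlation with [-1, +1]
    return [sig[i + 1] - sig[i] for i in range(len(sig) - 1)]

print(edge_response(signal))  # [0, 0, 1, 0, 0, -1, 0]
```

The +1 and -1 in the output mark the rising and falling edges of the "stripe" in the input; 2-D versions of such filters are exactly the stripe/corner detectors mentioned above.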

That's not how William Tell managed it. He had to practice aiming at less-dangerous targets until he became an expert, and only then did he attempt to shoot the apple.

It is not clear to me that it is desirable to prejudge what an artificial intelligence should desire or conclude, or even possible to purposefully put real constraints on it in the first place. We should simply create the god, then acknowledge the truth: that we aren't capable of evaluating the thinking of gods.


But it shouldn't conclude that throwing large asteroids at Yellowstone is a good idea, nor desire to do it. If you follow this strategy, you'll doom us. Simple as that.


Me: AGI is a William Tell target. A near miss could be very unfortunate. We can't responsibly take a proper shot till we have an appropriate level of understanding and confidence of accuracy.

Caledonian: That's not how William Tell managed it. He had to practice aiming at less-dangerous targets until he became an expert, and only then did he attempt to shoot the apple.

Yes, by "take a proper shot" I meant shooting at the proper target with proper shots. And yes, practice on less-dangerous targets is necessary, but it's not sufficient.

...It is no

1.9999... = 2 is not an "issue" or a "paradox" in mathematics.

If you use a limited number of digits in your calculations, then your quantization errors can accumulate. (And suppose the quantity you are measuring is the difference of two much larger numbers.)
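That accumulation and cancellation is easy to demonstrate with ordinary floating-point numbers, which carry roughly 16 significant decimal digits; a quick Python check (the values are chosen only for illustration):

```python
# Near 1e16, consecutive floats are 2.0 apart, so any detail finer
# than that is silently lost before the subtraction even happens.
big = 1.0e16

print((big + 2.0) - big)  # 2.0 survives: it exceeds the spacing
print((big + 0.1) - big)  # 0.0: the 0.1 was quantized away entirely
```

Subtracting two nearly equal large numbers leaves only the digits that survived quantization, which here means none of the 0.1.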

Of course it's possible that there's nothing in the real world that corresponds exactly to our so-called "real numbers". But until we actually know what smaller-scale structure it is that we're approximating, it would be crazy to pick some arbitrary "lower-resolution...

"...mathematics that represent continuous scales which would be best represented by the real numbers system with the limited significant digits."

If you limit the number of significant digits, your mathematics are discrete, not continuous. I'm guessing the concept you're really after is the idea of computable numbers. The set of computable numbers is a dense countable subset of the reals.

*With the graphical-network insight in hand, you can give a mathematical explanation of exactly why first-order logic has the wrong properties for the job, and express the correct solution in a compact way that captures all the common-sense details in one elegant swoop.*

Consider the following example, from Menzies's "Causal Models, Token Causation, and Processes"[*]:

An assassin puts poison in the king's coffee. The bodyguard responds by pouring an antidote in the king's coffee. If the bodyguard had not put the antidote in the coffee, the king would...


Um, this sounds not correct. The assassin causes the bodyguard to add the antidote; if the bodyguard hadn't seen the assassin do it, he wouldn't have so added. So if you compute the counterfactual the Pearlian way, manipulating the assassin changes the bodyguard's action as well, since the bodyguard causally descends from the assassin.


Right -- and according to Pearl's causal beam method, you would first note that the guard sustains the coffee's (non)deadliness-state against the assassin's action, which ultimately makes you deem the guard the cause of the king's survival.


Furthermore, if you draw the graph the way Neel seems to suggest, then the bodyguard is adding the antidote without dependence on the actions of the assassin, and so there is no longer any reason to call one "assassin" and the other "bodyguard", or one "poison" and the other "antidote". The bodyguard in that model is trying to kill the king as much as the assassin is, and the assassin's timely intervention saved the king as much as the bodyguard's.

"But until we actually know what smaller-scale structure".

From http://en.wikipedia.org/wiki/Planck_Length: "Combined, these two theories imply that it is impossible to measure position to a precision greater than the Planck length, or duration to a precision greater than the time a photon traveling at c would take to travel a Planck length"

Therefore, one could in fact say that all time- and distance- derived measurements can in fact be truncated to a fixed number of decimal places without losing any real precision, by using precisions b...

"When the basic problem is your ignorance, clever strategies for bypassing your ignorance lead to shooting yourself in the foot."

I like this lesson. It rings true to me, but the problem of ego is not one to be overlooked. People like feeling smart and having the status of being a "learned" individual. It takes a lot of courage to profess ignorance in today's academic climate. We are taught that we have such sophisticated techniques to solve really hard problems. There are armies of scientists and engineers working to advance our so...

I read a book on the philosophy of set theory -- and I got lost right at the point where classical infinite thought was replaced by modern infinite thought. IIRC the problem was paradoxes based on infinite recursion (Zeno et al.) and finding mathematical foundations to satisfy calculus limits. Then something about Cantor, cardinality and some hand-wavy 'infinite sets are real!'.

1.999... is just an infinite set summation of finite numbers 1 + 0.9 + 0.09 + ...

Now, how an infinite process on an infinite set can equal an integer is a problem I still grapple...


1.999... does not equal 2 - it just tends towards 2
For all practical purposes, you could substitute one for the other.
But in theory, you know that 1.9999... is always just below 2, even though it creeps ever closer.
If we ever found a way to magickally "reach infinity" they would finally meet... and be "equal".
Edit: The numbers are always going to be slightly different in a finite space, but equate to the same thing when you allow infinities. I.e. mathematically, in the limit, they equate to the same value, but in any finite representation, they are different.
Further Edit: According to mathematical convention, the notation "1.999..." does refer to the limit; therefore, "1.999..." strictly refers to 2 (not to any finite case that is slightly less than two).


It was nonsense in classical theory. Infinite sum has its own separate definition.
There are times in modern mathematics that infinite numbers are used. This is not one of them.
I doubt I'm the best at explaining what limits are, so I won't bother. I may be able to tell you what they aren't. They give results similar to the intuitive idea of infinite numbers, but they don't do it in the most intuitively obvious way. They don't use infinite numbers. They use a certain property that at most one number will have in relation to a sequence. In the case of 1, 1.9, 1.99, ..., this number is two. In the case of 1, 0, 1, 0, ..., there is no such number, so the series is said not to converge.
No. The question is "Can we make a sensible theoretical way to interpret the numeral 1.999..., that approximately matches our intuitions?" It wasn't easy, but we managed it.
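The limit in question can be written out directly as a geometric series:

```latex
1.999\ldots \;=\; 1 + \sum_{k=1}^{\infty} \frac{9}{10^{k}}
\;=\; 1 + \frac{9/10}{1 - 1/10}
\;=\; 1 + 1 \;=\; 2
```

Each partial sum 1.9, 1.99, 1.999, ... falls short of 2 by exactly 10^-k, a shortfall that can be made smaller than any positive number; the limit, which is what the notation denotes, is therefore exactly 2.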

Anonymous (re Planck scales etc.), sure you can truncate your representations of lengths at the Planck length, and likewise for your representations of times, but this doesn't simplify your *number* system unless you have acceptable ways of truncating all the other numbers you need to use. And, at present, we don't. Sure, maybe really the universe is best considered as some sort of discrete network with some funky structure on it, but that doesn't give us any way of simplifying (or making more appropriate) our mathematics until we know just what sort of disc...

@James:

If I recall my Newton correctly, the only way to take this "sum of an infinite series" business consistently is to interpret it as shorthand for the *limit* of an infinite series. (Cf. Newton's *Principia Mathematica*, Lemma 2. The infinitesimally wide parallelograms are dubitably real, but the area under the curve between the sets of parallelograms is clearly a real, definite area.)

@Benoit:

Why shouldn't we take 1.9999... as just another, needlessly complicated (if there's no justifying context) way of writing "2"? Just as I could conceivably count "1, 2, 3, 4, d(5x)/dx, 6, 7" if I were a crazy person.

Benquo, I see two possible reasons:

1) '2' leads to confusion as to whether we are representing a real or a natural number. That is, whether we are counting discrete items or we are representing a value on a continuum. If we are counting items then '2' is correct.

2) If it is clear that we are representing numbers on a continuum, I could see the number of significant digits used as an indication of the amount of uncertainty in the value. For any real problem there is *always* uncertainty caused by A) the measuring instrument and B) the representation system it...

Benoit Essiambre,

Right now Wikipedia's article is claiming that calculus cannot be done with computable numbers, but a Google search turned up a paper from 1968 which claims that differentiation and integration can be performed on functions in the field of computable numbers. I'll go and fix Wikipedia, I suppose.

Benoit Essiambre,

You say:

"1) '2' leads to confusion as to whether we are representing a real or a natural number. That is, whether we are counting discrete items or we are representing a value on a continuum."

If I recall correctly, this "confusion" is what allowed modern, atomic chemistry. Chemical substances -- measured as continuous quantities -- seem to combine in simple natural-number ratios. This was the primary evidence for the existence of atoms.

What is the practical negative consequence of the confusion you're trying to avoid?...

Benoit, those are *two different ways* of *writing* the *same* real, just like 0.333... and 1/3 (or 1.0/3.0, if you insist) are the same number. That's *not* a paradox. 2 is a computable number, and thus so are 2.000... and 1.999..., even though you can't write down *those ways of expressing them* in a finite amount of time. See the definition of a computable number if you're confused.

1.999... = 2.000... = 2. Period.

Benoit,

In the decimal numeral system, every number with a terminating decimal representation also has a non-terminating one that ends with recurring nines. Hence, 1.999... = 2, 0.74999... = 0.75, 0.986232999... = 0.986233, etc. This isn't a paradox, and it has nothing to do with the precision with which we measure actual real things. This sort of recurring representation happens in any positional numeral system.

You seem very confused as to the distinction between what numbers are and how we can represent them. All I can say is, these matters have been well thought out, and you'd profit by reading as much as you can on the subject and by trying to avoid getting too caught up in your preconceptions.

This old post led me to an interesting question: will AI find itself in the position of our fictional philosophers of addition? The basic four functions of arithmetic are so fundamental to the operation of the digital computer that an intelligence built on digital circuitry might well have no idea of how it adds numbers together (unless told by a computer scientist, of course).
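For what it's worth, the circuit-level mechanism such an intelligence might be ignorant of is easy to exhibit. Here is a minimal Python sketch of addition the way a ripple-carry adder circuit does it, using only bitwise operations (illustrative only; handles non-negative integers):

```python
def add(x, y):
    """Add two non-negative ints using only AND, XOR, and shift,
    mirroring a ripple-carry adder circuit."""
    while y:
        carry = (x & y) << 1  # bit positions where both inputs carry out
        x = x ^ y             # sum of bits, ignoring carries
        y = carry             # feed the carries back in
    return x

print(add(7, 6))  # 13
```

The point stands: nothing in the loop "knows" it is adding; the meaning lives in our interpretation of the bit patterns.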

Bog: You are correct. That is, you do not understand this article at all. Pay attention to the first word, "Suppose..."

We are not talking about how calculators are designed in reality. We are discussing how they are designed in a hypothetical world where the mechanism of arithmetic is not well-understood.


Did anyone else get so profoundly confused that they googled "Artificial Addition"? Only when I was halfway through the bullet-point list did it click that the whole post is a metaphor for common beliefs about AI. And that was on my second time reading it; the first time, I gave up before that point.

"Like shooting blindfolded at a distant target"

So long as you know where the target is within five feet, it doesn't matter how small it is, how far away it is, whether or not you're blindfolded, or whether or not you even know how to use a bow. You'll hit it on a natural twenty. http://www.d20srd.org/srd/combat/combatStatistics.htm#attackRoll


Logical fallacy of generalization from fictional evidence.


Damn right. And the same goes for the oft-quoted "million-to-one chances crop up nine times out of ten".

Thread necromancy:

Suppose that human beings had absolutely no idea how they performed arithmetic. Imagine that human beings had evolved, rather than having learned, the ability to count sheep and add sheep. People using this built-in ability have no idea how it worked, the way Aristotle had no idea how his visual cortex supported his ability to see things. Peano Arithmetic as we know it has not been invented.

It occurred to me that a real-life example of this kind of thing is *grammar*. I don't know what the grammatical rules are for which of the words "I" or "me" should be used when I refer to myself, but I can still use those words with perfect grammar in everyday life*. This may be a better example to use, since it's one that everyone can relate to.

*I do use a rule for working out whether I should say "Sarah and I" or "Sarah and me", but that rule is just "use whichever one you would use if you were just talking about yourself". Thinking about it now I can guess at the "I/me" rule, but there's plenty of other grammar I have no idea about.


Can we get a link to the original thread?


It is this thread itself. He's commenting on the top paragraph of the original post. (It seems that thread necromancy at LW is actually very common. It may not be a good term, given the negative connotations of necromancy for many people. Maybe thread cryonic revival?)


I'd expect here we'd give necromancy positive connotations. Most of the people here seem to be against death.
I thought it's only thread necromancy if it moves it to the front page. This website doesn't seem to work like that.
I hope it doesn't work like that, because I posted most of my comments on old threads.


Just because we have a specific attitude about things doesn't mean we need to go and use terminology that has pre-existing connotations. I don't think, for example, that calling cryonics "technological necromancy" or "supercold lichdom" would be helpful in getting people to listen, although both would be awesome names. However, Eliezer seems to disagree, at least in regards to cryonics in certain narrow contexts. See his standard line when people ask about his cryonics medallion: that it is a mark of his membership in the "Cult of the Severed Head."
There's actually a general trend in modern fantasy literature to see necromancy as less intrinsically evil. The most prominent example would be Garth Nix's "Abhorsen" trilogy and the next most prominent would be Gail Martin's "Chronicles of the Necromancer" series. Both have necromancers as the main protagonists. However, in this context, most of the cached thoughts about death still seem to be present. In both series, the good necromancers use their powers primarily to stop evil undead and help usher people in to accepting death and the afterlife. Someone should at some point write a fantasy novel in which there's a good necromancer who brings people back as undead.
Posts only get put on the main page if Eliezer decides to do so (which he generally does for most high-ranking posts).


I dunno - I reckon you might get increased interest from the SF/F crowd. :)


...or would they...nahh.


Funny. I was working on something an awful lot like that back in 2000. I wasn't terribly good at writing back then, unfortunately.


There should be one on whatever page you're viewing my comment in (unless you're doing something unusual like reading this in an rss reader)
Still, here you go: link

McDermott's old article, "Artificial Intelligence and Natural Stupidity" is a good reference for suggestively-named tokens and algorithms.


even less esoteric: |, ||, |||, ||||, |||||, ....
Then "X" + "Y" = "XY". For example |||| + ||| = |||||||.
It turns out the difficulty in addition is the insight that ordinals are just an unfriendly representation. One needs a map between representations so that the addition problem becomes trivial.
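That point can be made runnable in a few lines of Python (function names are invented for this sketch):

```python
def to_unary(n):
    """Represent a non-negative integer as tally marks."""
    return "|" * n

def unary_add(a, b):
    # In the unary representation, addition really is just
    # concatenation; the work is in converting representations.
    return a + b

print(unary_add(to_unary(4), to_unary(3)))  # |||||||
```

In unary, the "algorithm" for addition is trivial; what makes ordinary arithmetic feel hard is the positional representation, not the operation itself.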

"I don't think we have to wait to scan a whole brain. Neural networks are just like the human brain, and you can train them to do things without knowing how they do them. We'll create programs that will do arithmetic without we, our creators, ever understanding how they do arithmetic."

This sort of anti-predicts the deep learning boom, but only sort of.

Fully connected networks didn't scale effectively; researchers had to find (mostly principled, but some ad-hoc) network structures that were capable of more efficiently learning complex patterns.

A...

I'd be interested to hear whether it is the case that people were saying things like this about AGI two, three, four, five decades ago:

You're all wrong. Past efforts to create machine arithmetic were futile from the start, because they just didn't have enough computing power. If you look at how many trillions of synapses there are in the human brain, it's clear that calculators don't have lookup tables anywhere near that large. We need calculators as powerful as a human brain. According to Moore's Law, this will occur in the year...

I've pointed out the cases of Moravec (1997) and Shane Legg pre-DM (~2009) as saying pretty much exactly that and in the case of Legg, influencing his DM founding timeline. I am pretty sure that if you were able to go back and do a thorough survey of the connectionist literature and influenced people, you'd find more instances.

For example, yesterday I was collating my links on AI Dungeon and I ran into a 1989 text adventure talk by Doug Sharp mostly about his *King of Chicago* & simple world/narrative simulation approach to IF, where before discussing *King*, to my shock, he casually drops in Moravec's 1988 *Mind Children*'s forecast for human-level compute in 2030 and compute as a prerequisite for "having this AI problem licked", and notes

When true AI does arrive, I think it will pretty much sweep away the groundwork we're laying today and set up its own rules for interaction, narration, and a few other details. In the meantime, there's a lot of fun to be had making the best games we can with our primitive machines and methods.

Well, I can't disagree with that! It's only 2021, and AI Dungeon and its imitators owe essentially nothing to the last 46 years of IF, and have to inven...

At first I was perplexed, thinking that Yudkowsky for some reason wants to use programs for AI, and not neural networks. This article showed me very clearly why you need to understand the general principle first, rather than trying to hack something together now. Even if you can randomly find answers to a specific quadratic equation, that won't solve other quadratic equations, let alone cubics or any other problem in mathematics.

Suppose that human beings had absolutely *no idea* how they performed arithmetic. Imagine that human beings had *evolved*, rather than having *learned*, the ability to count sheep and add sheep. People using this built-in ability have no idea how it worked, the way Aristotle had no idea how his visual cortex supported his ability to see things. Peano Arithmetic as we know it has not been invented. There are philosophers working to formalize numerical intuitions, but they employ notations such as Plus(Seven, Six) = Thirteen to formalize the intuitively obvious fact that when you add "seven" plus "six", of course you get "thirteen".

In this world, pocket calculators work by storing a giant lookup table of arithmetical facts, entered manually by a team of expert Artificial Arithmeticians, for starting values that range between zero and one hundred. While these calculators may be helpful in a pragmatic sense, many philosophers argue that they're only *simulating* addition, rather than really *adding*. No machine can really *count* - that's why humans have to count thirteen sheep before typing "thirteen" into the calculator. Calculators can recite back stored facts, but they can never know what the statements mean - if you type in "two hundred plus two hundred" the calculator says "Error: Outrange", when it's intuitively *obvious*, if you *know* what the words *mean*, that the answer is "four hundred".

Philosophers, of course, are not so naive as to be taken in by these intuitions. Numbers are really a purely formal system - the label "thirty-seven" is meaningful, not because of any inherent property of the words themselves, but because the label *refers to* thirty-seven sheep in the external world. A number is given this referential property by its *semantic network* of relations to other numbers. That's why, in computer programs, the LISP token for "thirty-seven" doesn't need any *internal* structure - it's only meaningful because of reference and relation, not some computational property of "thirty-seven" itself.

No one has ever developed an Artificial General Arithmetician, though of course there are plenty of domain-specific, narrow Artificial Arithmeticians that work on numbers between "twenty" and "thirty", and so on. And if you look at how slow progress has been on numbers in the range of "two hundred", then it becomes clear that we're not going to get Artificial General Arithmetic any time soon. The best experts in the field estimate it will be at least a hundred years before calculators can add as well as a human twelve-year-old.
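The parable's lookup-table calculator is straightforward to sketch in Python (illustrative only; the dict comprehension stands in for the experts' manual data entry):

```python
# The hypothetical AA device: a stored table of arithmetical "facts"
# with no procedure for generating new ones.
FACTS = {(a, b): a + b for a in range(101) for b in range(101)}

def aa_calculator(a, b):
    try:
        return FACTS[(a, b)]       # recite a stored fact
    except KeyError:
        return "Error: Outrange"   # no fact was ever entered

print(aa_calculator(7, 6))      # 13
print(aa_calculator(200, 200))  # Error: Outrange
```

The device "works" on the range its makers covered and fails everywhere else, exactly the tape-recorder failure mode the morals below describe.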

But not everyone agrees with this estimate, or with merely conventional beliefs about Artificial Arithmetic. It's common to hear statements such as the following:

- "...*learn* the vast network of relations between numbers that humans acquire during their childhood by observing sets of apples."
- "...*general* addition, as opposed to domain-specific addition. Probably we will never know the fundamental nature of arithmetic. The problem is just too hard for humans to solve."
- "...*emerge*. We have to acknowledge the basic unpredictability of complex systems."
- "...*numbers* anywhere in the system, just labels that humans use for numbers..."

There is more than one moral to this parable, and I have told it with different morals in different contexts. It illustrates the idea of levels of organization, for example - a CPU can add two large numbers because the numbers aren't black-box opaque objects, they're ordered structures of 32 bits.

But for purposes of overcoming bias, let us draw two morals:

- First, the danger of believing assertions you can't regenerate from your own knowledge.
- Second, the danger of trying to dance around basic confusions.

Lest anyone accuse me of generalizing from fictional evidence, both lessons may be drawn from the real history of Artificial Intelligence as well.

The first danger is the object-level problem that the AA devices ran into: they functioned as tape recorders playing back "knowledge" generated from outside the system, using a process they couldn't capture internally. A human could tell the AA device that "twenty-one plus sixteen equals thirty-seven", and the AA devices could record this sentence and play it back, or even pattern-match "twenty-one plus sixteen" to output "thirty-seven!", but the AA devices couldn't generate such knowledge for themselves.

Which is strongly reminiscent of believing a physicist who tells you "Light is waves", recording the fascinating words and playing them back when someone asks "What is light made of?", without being able to generate the knowledge for yourself. More on this theme tomorrow.
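The gap between playback and generation is easy to exhibit in code. The two toy "arithmeticians" below are hypothetical, invented purely for illustration:

```python
# A "tape recorder" arithmetician: it can only replay facts that a
# human generated outside the system and stored in it.
RECORDED_FACTS = {"twenty-one plus sixteen": "thirty-seven"}

def playback_aa(query: str) -> str:
    return RECORDED_FACTS.get(query, "no recording for that")

# A generating arithmetician: it captures the process itself, so it
# can produce correct sums that nobody ever told it.
def generating_aa(a: int, b: int) -> int:
    return a + b

print(playback_aa("twenty-one plus sixteen"))  # replays "thirty-seven"
print(playback_aa("four plus three"))          # fails: never recorded
print(generating_aa(4, 3))                     # 7, generated on demand
```

The playback device can be made arbitrarily large without ever closing the gap: every answer it will ever give already had to be produced by some process outside it.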

The second moral is the meta-level danger that consumed the Artificial Arithmetic researchers and opinionated bystanders - the danger of dancing around confusing gaps in your knowledge. The tendency to do just about anything *except* grit your teeth and buckle down and fill in the damn gap.

Whether you say, "It is emergent!", or whether you say, "It is unknowable!", in neither case are you acknowledging that there is a basic insight required which is possessable, but unpossessed by you.

How can you know when you'll have a new basic insight? And there's no way to get one except by banging your head against the problem, learning everything you can about it, studying it from as many angles as possible, perhaps for years. It's not a pursuit that academia is set up to permit, when you need to publish at least one paper per month. It's certainly not something that venture capitalists will fund. You want to either go ahead and build the system *now*, or give up and do something else instead.

Look at the comments above: none are aimed at setting out on a quest for the missing insight which would *make numbers no longer mysterious*, make "twenty-seven" more than a black box. None of the commenters realized that their difficulties arose from ignorance or confusion in their own minds, rather than an inherent property of arithmetic. They were not trying to achieve a state where the confusing thing ceased to be confusing.

If you read Judea Pearl's "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference" then you will see that the basic insight behind graphical models is *indispensable* to problems that require it. (It's not something that fits on a T-Shirt, I'm afraid, so you'll have to go and read the book yourself. I haven't seen any online popularizations of Bayesian networks that adequately convey the reasons behind the principles, or the importance of the math being exactly the way it is, but Pearl's book is wonderful.) There were once dozens of "non-monotonic logics" awkwardly trying to capture intuitions such as "If my burglar alarm goes off, there was probably a burglar, but if I then learn that there was a small earthquake near my home, there was probably not a burglar." With the graphical-model insight in hand, you can give a mathematical explanation of exactly why first-order logic has the wrong properties for the job, and express the correct solution in a compact way that captures all the common-sense details in one elegant swoop. Until you have that insight, you'll go on patching the logic here, patching it there, adding more and more hacks to force it into correspondence with everything that seems "obviously true".

You won't *know* the Artificial Arithmetic problem is unsolvable without its key. If you don't know the rules, you don't know the rule that says you need to know the rules to do anything. And so there will be all sorts of clever ideas that seem like they might work, like building an Artificial Arithmetician that can read natural language and download millions of arithmetical assertions from the Internet.

And yet *somehow* the clever ideas never work. Somehow it always turns out that you "couldn't see any reason it wouldn't work" because you were ignorant of the obstacles, not because no obstacles existed. Like shooting blindfolded at a distant target - you can fire blind shot after blind shot, crying, "You can't prove to me that I won't hit the center!" But until you take off the blindfold, you're not even in the aiming game. When "no one can prove to you" that your precious idea *isn't* right, it means you don't have enough information to strike a small target in a vast answer space. Until you know your idea will work, it won't.

From the history of previous key insights in Artificial Intelligence, and the grand messes which were proposed prior to those insights, I derive an important real-life lesson:

When the basic problem is your ignorance, clever strategies for bypassing your ignorance lead to shooting yourself in the foot.
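As a footnote to Pearl's burglar-alarm example above: the "explaining away" intuition that defeated the non-monotonic logics becomes a short computation once the problem is posed as a graphical model. The three-node network and its probabilities below are invented for illustration - only the qualitative effect matters:

```python
from itertools import product

# Illustrative network: Burglar -> Alarm <- Earthquake.
# All numbers are assumed for this sketch, not taken from Pearl.
P_BURGLAR, P_QUAKE = 0.001, 0.002
P_ALARM = {  # P(alarm rings | burglar, earthquake)
    (True, True): 0.95, (True, False): 0.94,
    (False, True): 0.29, (False, False): 0.001,
}

def joint(b, e, alarm):
    """P(b, e, alarm) under the network's factorization."""
    pb = P_BURGLAR if b else 1 - P_BURGLAR
    pe = P_QUAKE if e else 1 - P_QUAKE
    pa = P_ALARM[(b, e)] if alarm else 1 - P_ALARM[(b, e)]
    return pb * pe * pa

def p_burglar(alarm, earthquake=None):
    """P(burglar | alarm [, earthquake]) by brute-force enumeration."""
    num = den = 0.0
    for b, e in product([True, False], repeat=2):
        if earthquake is not None and e != earthquake:
            continue
        p = joint(b, e, alarm)
        den += p
        if b:
            num += p
    return num / den

# The alarm alone makes a burglar plausible; learning of the
# earthquake "explains away" the alarm, and the burglar probability
# collapses - exactly the non-monotonic behavior the old logics
# kept patching toward.
print(p_burglar(alarm=True))                   # roughly 0.37
print(p_burglar(alarm=True, earthquake=True))  # roughly 0.003
```

First-order logic has no clean way to retract "probably a burglar" upon hearing about the earthquake; in the probabilistic factorization the retraction falls out of the arithmetic.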