Whenever someone exhorts you to "think outside the box", they usually, for your convenience, point out exactly where "outside the box" is located.  Isn't it funny how nonconformists all dress the same...

In Artificial Intelligence, everyone outside the field has a cached result for brilliant new revolutionary AI idea—neural networks, which work just like the human brain!  New AI Idea: complete the pattern:  "Logical AIs, despite all the big promises, have failed to provide real intelligence for decades—what we need are neural networks!"

This cached thought has been around for three decades.  Still no general intelligence.  But, somehow, everyone outside the field knows that neural networks are the Dominant-Paradigm-Overthrowing New Idea, ever since backpropagation was invented in the 1970s.  Talk about your aging hippies.

Nonconformist images, by their nature, permit no departure from the norm.  If you don't wear black, how will people know you're a tortured artist?  How will people recognize uniqueness if you don't fit the standard pattern for what uniqueness is supposed to look like?  How will anyone recognize you've got a revolutionary AI concept, if it's not about neural networks?

Another example of the same trope is "subversive" literature, all of which sounds the same, backed up by a tiny defiant league of rebels who control the entire English Department.  As Anonymous asks on Scott Aaronson's blog:

"Has any of the subversive literature you've read caused you to modify any of your political views?"

Or as Lizard observes:

"Revolution has already been televised. Revolution has been *merchandised*. Revolution is a commodity, a packaged lifestyle, available at your local mall. $19.95 gets you the black mask, the spray can, the "Crush the Fascists" protest sign, and access to your blog where you can write about the police brutality you suffered when you chained yourself to a fire hydrant.  Capitalism has learned how to sell anti-capitalism."

Many in Silicon Valley have observed that the vast majority of venture capitalists at any given time are all chasing the same Revolutionary Innovation, and it's the Revolutionary Innovation that IPO'd six months ago.  This is an especially crushing observation in venture capital, because there's a direct economic motive to not follow the herd—either someone else is also developing the product, or someone else is bidding too much for the startup.  Steve Jurvetson once told me that at Draper Fisher Jurvetson, only two partners need to agree in order to fund any startup up to $1.5 million.  And if all the partners agree that something sounds like a good idea, they won't do it.  If only grant committees were this sane.

The problem with originality is that you actually have to think in order to attain it, instead of letting your brain complete the pattern.  There is no conveniently labeled "Outside the Box" to which you can immediately run off.  There's an almost Zen-like quality to it—like the way you can't teach satori in words because satori is the experience of words failing you.  The more you try to follow the Zen Master's instructions in words, the further you are from attaining an empty mind.

There is a reason, I think, why people do not attain novelty by striving for it.  Properties like truth or good design are independent of novelty:  2 + 2 = 4, yes, really, even though this is what everyone else thinks too.  People who strive to discover truth or to invent good designs, may in the course of time attain creativity.  Not every change is an improvement, but every improvement is a change.

Every improvement is a change, but not every change is an improvement.  The one who says, "I want to build an original mousetrap!", and not, "I want to build an optimal mousetrap!", nearly always wishes to be perceived as original.  "Originality" in this sense is inherently social, because it can only be determined by comparison to other people.  So their brain simply completes the standard pattern for what is perceived as "original", and their friends nod in agreement and say it is subversive.

Business books always tell you, for your convenience, where your cheese has been moved to.  Otherwise the readers would be left around saying, "Where is this 'Outside the Box' I'm supposed to go?"

Actually thinking, like satori, is a wordless act of mind.

The eminent philosophers of Monty Python said it best of all.



Eliezer is right, as usual. But it raises the question: when should you be flattered and when should you be insulted to be called "creative" or "revolutionary"?

One bad algorithm I can think of is to be flattered when you're called such by other people you think are currently "creative" or "revolutionary", as opposed to people who were previously revolutionary and now mainstream. The former is how cliques form.

This as a second thought to my first reaction, which was, "Well, if Robin Hanson calls you "revolutionary" you must practically be insane."

Beats being _im_practically insane, I suppose.

Trying to be original may be justifiable if people will buy a NEW!! product even if it's inferior.

I appreciated your choice of examples. Conformist-nonconformism is about the most annoying thing in the world to me, in addition to making a lot of smart people useless (or worse).

Eliezer is certainly correct that our real goal is to make optimal decisions and perform optimal actions, regardless of how different they are from those of the herd. But that doesn't mean we should ignore information about our conformity or non-conformity. It's often important.

Consider the hawk-dove game. If you're in a group of animals who randomly bump into each other and compete for territory, the minority strategy is the optimal strategy. If all your peers are cowards, you can completely dominate them by showing some fang. Or if your peers follow the "never back down, always fight to the death" strategy, you should be a coward until they've killed each other off. Non-conformity is a valid goal (or subgoal, at least).
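The hawk-dove logic described above can be sketched in a few lines. The specific payoff numbers below are illustrative assumptions of mine, not figures from the comment; the only structural requirement is that the injury cost exceeds the resource value.

```python
# Classic hawk-dove payoffs; V and C are illustrative values chosen so
# that C > V, which is what makes an all-hawk population unstable.
V = 2.0  # value of the contested resource
C = 6.0  # cost of injury in an escalated fight

def payoff(me: str, other: str) -> float:
    """Expected payoff for strategy `me` against strategy `other`."""
    if me == "hawk":
        # Hawks split wins and injuries against hawks; doves just retreat.
        return (V - C) / 2 if other == "hawk" else V
    # Doves leave empty-handed against hawks, share with other doves.
    return 0.0 if other == "hawk" else V / 2

# A lone hawk among doves does better than the doves around it...
assert payoff("hawk", "dove") > payoff("dove", "dove")
# ...and a lone dove among hawks does better than the hawks around it,
# which is the comment's "minority strategy is optimal" point.
assert payoff("dove", "hawk") > payoff("hawk", "hawk")
```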

On the other hand, in situations with network effects, you want to be a conformist. If you're selling your widget on Bob's Auction Site, which has 20 users, instead of eBay, your originality is simply stupid.

Eliezer, your first and second thoughts illustrate my question; they are not clearly positive or negative descriptors. :)

Confession: I watched the Monty Python clip before reading the whole post.

Much of what Eliezer talked about in the beginning is discussed in The Rebel Sell. I am actually not as disturbed by those of the "radical counterculture" as the authors, who discuss how to accomplish change as opposed to receiving recognition, because they know enough to be dangerous.

Boxes are always patterns completed by brains, along with ready-made outsides of them. Thinking is necessary because to find the outside of a box you have to notice the box is there, which you don't if your brain fills it in automatically. Things are less noticeable if you can't conceive of the possibility of an alternative to them.

I probably think this because my brain fills in this pattern. And I only think that (and this) because the idea of recursion is another pattern my brain enjoys filling in. An effective way to simulate originality, though: actively fill in the wrong patterns. Choose an automatic response from another set of ideas. Babies are being sold on the black market? Don't automatically intone 'the police should stop that'; say 'how inefficient - it should be a legal market'. If someone says we will all be dead one day, instead of reflecting on the meaning this gives to your life, politely point out that they have their statistics wrong; about 5% of people have never died, and it correlates well with those born recently. Depending on your comparative preferences for perceived originality and truth, this can be done to convince most people you are insane and possibly completely immoral: nice socially recognisable signals that you are being original, without having to conform to current originality.

Actually thinking, like satori, is a wordless act of mind.

Is such an act possible?

Wittgenstein said that 'Philosophy is a battle against the bewitchment of our intelligence by means of language.' I guess 'thinking' can take the place of 'philosophy' in what he said. Seen this way, the act involves a lot of struggle. Even if we do away with words, it seems like something else would take their place against which we would have to battle. Or maybe I'm thinking a lot inside the box :)

The video at the end makes such nice closure to such a great post. Great taste. I am reminded of Jiddu Krishnamurti.

Eliezer's post here, if I am correct, is meant to make her readers ask themselves whether they are truly being original or simply following the "other" masses. But a question here: how many people think, actually think, in their daily lives? And by "think", I mean produce truly original thought--impossible without some sort of muse. That's my current hypothesis. By that line of reasoning, perhaps truly original thought can only be realized/created through a true expression of the self through one's ideal (or at least some very synergetic) medium. Perhaps it is only people who reach their true potential--the last stage of Maslow's hierarchy--who achieve original thought.

We can see from history that this amounts to approximately 1% of the population: Da Vinci etc. As individuals, then, perhaps the only way we can truly see more original thought from the people around us is to become original thinkers ourselves, to bring ourselves up to that level of so-called "genius", which is simply produced by a persistent focus of purpose and passion. Then, by changing ourselves, we naturally inspire others--simply by being ourselves. A wonderful thing.

Of course, this is simply speculation as I'm not at a Da Vinci/Freud/Nietzsche/Krishnamurti (prolific original output) like level--though that is my major life purpose.

I would guess that it is not a state a person has to be in to come up with an original thought but a situation in which unoriginal thoughts seem obviously inapplicable to them. You can't assume because someone produced some great thought they are a separate class of person and will continue to do so. A lot of the things Lord Kelvin said about science near the end of his life seem downright silly today.

Also, Eliezer is not a "her". His wikipedia page has a picture of him, beard and all.

"Whenever someone exhorts you to "think outside the box", they usually, for your convenience, point out exactly where "outside the box" is located. Isn't it funny how nonconformists all dress the same..."

They do? Can you give an example? I can't recall anybody ever pointing out a location.

And NNs are independent of "general intelligence". NNs are being used to great success in many fields today. The fact that we don't have hard AI is no condemnation of NNs, nor a problem with the phrase "think outside the box". That's quite a leap you made, and I've only read 2 paragraphs so far!

Nassim Nicholas Taleb said at one point that his next book will be about tinkering - how many discoveries were made while the researcher was seeking something else. So directed research is good because it provides an excuse to "tinker", to spot the unexpected and go off on a tangent.

Have you spoken to Taleb? Seems there's lots of common ground. He likes to learn directly from people what's happening.

P.S. The YouTube video embedded in the post has been removed. One place where the same excerpt appears is here.

Edit: Possibly better, from the Monty Python channel.

The way I would word this: The box exists in the map, not the territory. Looking "outside of the box" is still looking at the map.

That is a clever way of putting it!

You could always tell them to think inside the chimney. If you're lucky they'll be so confused they'll look at the territory to figure out what you mean, and if you’re really lucky they'll end up thinking downstairs in the attic and never bother you again.

I would say that the box does exist as territory, as the realm of cached human thoughts. However, what most of us perceive as 'outside the box' in our maps is, in reality, 'inside the box' in the territory.

This cached thought has been around for three decades. Still no general intelligence. But, somehow, everyone outside the field knows that neural networks are the Dominant-Paradigm-Overthrowing New Idea, ever since backpropagation was invented in the 1970s.

It's been going strong in one form or another since the late nineteenth century. William James was a notable supporter of the notion that the human brain had emergent behavior based on the interaction of many simple units, and from this culture came the term "connectionism" that was popular amongst AI speculators Before the War.

"And if all the partners agree that something sounds like a good idea, they won't do it. If only grant committees were this sane."

but then you say:

"Properties like truth or good design are independent of novelty: 2 + 2 = 4, yes, really, even though this is what everyone else thinks too."

In venture capital it may pay off to avoid doing what everyone else does. But in funding grants, it seems there's no advantage to that. It's not like the science gets devalued if it's discovered twice. If everyone thinks it's a good grant, then maybe it just is?

It's not like the science gets devalued if it's discovered twice

If the knowledge discovered has a value X, then discovering it twice gives the discovery an average value X/2, and discovering it thrice gives the discovery an average value X/3.

This is of course a simplification, because the confirmation received from having multiple copies of the discovery is itself of some value, which flattens the value curve; however the value of a confirmation decreases with each confirmation already extant.
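The averaging-plus-confirmation idea above can be made concrete. A minimal sketch, assuming (my own simplifying choice, not the comment's) that each successive confirmation is worth half the one before:

```python
def avg_value_per_discovery(X: float, n: int, conf: float = 0.0) -> float:
    """Average value per discovery when the same result is found n times.

    X    -- value of the knowledge itself
    conf -- value of the first confirmation; each later confirmation is
            assumed here to be worth half the previous one (geometric decay).
    """
    confirmations = sum(conf / 2**k for k in range(n - 1))
    return (X + confirmations) / n

# With no confirmation value, the average is exactly X/n, as the comment says:
assert avg_value_per_discovery(1.0, 2) == 0.5
# A nonzero confirmation value flattens the curve without eliminating the decline:
assert 0.5 < avg_value_per_discovery(1.0, 2, conf=0.5) < 1.0
```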

The eminent philosophers of Monty Python said it best of all:

This video is no longer available because the uploader has closed their YouTube account.

Deep, man.

The Monty Python link is stale.

I like how these serious logical and moral discussions are juxtaposed with Monty Python.


And now I have the urge to build a mousetrap out of as many lasers and rocket launchers as I can get my hands on...which is not, of course, the least bit optimal for the purpose of catching mice.

I remember the late 90's, when I first gained access to the Internet. Here were my people, people who enjoy thinking, minds communicating at a bare-metal level about interesting and smart things.

It was around that time I ran across the concept of a "free-thinker" and started mulling over that label in my mind. It sounded like a compliment, something I'd like if people started calling me that. After all, I don't think the way other people do (thanks, autism!), and I had always felt like a mind trapped in a body. But the first time I brought up being a free-thinker was in a discussion about religion with an Internet Atheist. I was promptly and patronizingly informed that I couldn't possibly be a free-thinker because I believe in God.


Free-thinker = atheist, apparently. A one-to-one correspondence, a synonym, and a hope for esteem from my peers crushed.

Never mind that I treat the Bible and young-Earth creationism as seriously and geekily as I treat the canons of the various Star Trek series. Never mind that I try to get past the rah-rah-our-team side of religion to follow Jesus' commands to love each other with radical, boundary-breaking see-from-their-eyes empathy. Never mind that I'd been hurt by church hypocrisy as much as any former-Catholic or raised-Baptist Internet Atheist among my circle of friends.

No, this badge of uniqueness was not for me. I was too unique for it.

And now? Do you still believe in an all-powerful creator? (Not that I have any problem with that)

Yes, and still a young-Earth creationist too. On here I'd probably clarify my concept of omnipotency as "axiomatic ultra-ability", more similar to a programmer of a simulation than a lightning-tosser in a cloud-chariot in the sky.

As a geek-for-life and dedicated devourer of SF, I compare and contrast the details of what I believe with all the god-fictions out there, from Aslan and Eru Ilúvatar to Star Trek's Q and The Prophets, to the God and Satan of Heinlein's Job, to the Anu/Padomay duality at the core of Elder Scrolls lore and the consequent universe literally built out of politics and necromancy. Recently, reading the SSC classic blog post "Meditations on Moloch" helped me coalesce an idea that had been bouncing around my head for twenty years about the "weakling, uncaring opposite of God, waiting with an open mouth at the bottom of the slide."

I just wanted to find a community of experimental theologists who were as willing as I am to ask these questions and posit potentially heretical theories during the process of trying to better model God in our words and minds. Apparently I'm missing an absurdity heuristic that keeps more people from being like me.

So, essentially, the way forward is to attempt to make something 'good' rather than something 'original'. Because cached thoughts lead all forced 'original' thoughts into truly unoriginal ones, the only way to make something truly original is not to attempt novelty directly (which leads you in circles), but to make something better than the rest. By trying to make something better than the rest, it has to end up markedly different from everything else.

This intro aged very, very poorly, and I suspect the core point of this article is much weaker than originally claimed because of it. Simply constraining your thinking to "outside the box" and then grabbing the first thing immediately outside it, without reaching further, is likely a reasoning error. But constraining his thinking to outside the box, then reaching for what was immediately outside it, made Eliezer pick up neural networks, which he then immediately dismissed as unlikely to work because so many people had done this. He managed not to see AlphaGo coming, and I have always suspected it was this article's point in particular that left the AI safety crowd blindsided by neural networks. I think this is a pretty severe prediction error, and that this post's point is likely incorrect because of it. Interesting disagreement about how to interpret this historical information would be quite welcome.

I think it's not the case that "neural networks" as discussed in this post made AlphaGo. That is, almost all of the difficulty in making AlphaGo happen was picking which neural network architecture would solve the problem / buying fast enough computers to train it in a reasonable amount of time. A more recent example might be something like "model-based reinforcement learning"; for many years 'everyone knew' that this was the next place to go, while no one could write down an algorithm that actually performed well.

I think the underlying point--if you want to think of new things, you need to think original thoughts instead of signalling "I am not a traditionalist"--is broadly correct even if the example fails.

That said, I agree with you that the example seems unfortunately timed. In 2007, some CNNs had performed well on a handful of tasks; the big wins were still ~4-5 years in the future. If the cached wisdom had been "we need faster computers," I think the cached wisdom would have looked pretty good.

I worry that this comment dances around the basic update to be made. 

This post makes fun of people who were excited about neural networks. Neural network-based approaches have done extremely well. Eliezer's example wasn't just "unfortunately timed." Eliezer was wrong.

I think that's a pretty simplistic view of the post, but given that view, I agree that's the right update to make.

Why does it seem simplistic? Like, one of the central points of the post you link is that we should think about the specific technical features of proposals, instead of focusing on marketing questions of which camp a proposal falls into. And Eliezer saying he's "no fan of neurons" is in the context of him responding to a comment by someone with the username Marvin Minsky defending the book Perceptrons (the post is from the Overcoming Bias era, when comments did not have threading or explicit parents).

I basically read this as Eliezer making fun of low-nuance people, not people excited about NNs; in that very post he excitedly describes a NN-based robotics project!

But that robotics project was viewed by Eliezer as an example of carefully-designed biological imitation in which the mechanism of action was known by the researchers into the deep details. Across multiple posts, Eliezer's views from this time period emphasize that he believes that AGI can only come from a well-understood AI architecture - either a detailed imitation of the brain, or a crafted logic-based approach. This robotics project was an example of the latter, despite the fact that it used neurons.

This robot ran on a "neural network" built by detailed study of biology.  The network had twenty neurons or so.  Each neuron had a separate name and its own equation.  And believe me, the robot's builders knew how that network worked.

Where does that fit into the grand dichotomy?  Is it top-down?  Is it bottom-up?  Calling it "parallel" or "distributed" seems like kind of a silly waste when you've only got 20 neurons - who's going to bother multithreading that?

So this would be, in my view, another clear example of Eliezer being excited about an AI paradigm that ultimately did not lead to the black-box neural network-based LLMs that actually seem to have put us on the path to AGI.

If the cached wisdom had been "we need faster computers," I think the cached wisdom would have looked pretty good.

If you think neural networks are like brains, you might think that you would get human-like cognitive abilities at human-like sizes. I think this was a very common view (and it has aged quite well IMO).

I can't believe that post is sitting at 185 karma considering how it opens with a completely blatant misquote/lie about Moravec's central prediction, and only gets worse from there.

Moravec predicted - in Mind Children, in 1988! - AGI in 2028, based on Moore's law and the brain reverse-engineering assumption. He was prescient - a true prophet/futurist. EY was wrong, and his attempt to smear Moravec here is simply embarrassing.

I think Eliezer does disagree. I find his disagreement fairly annoying. He calls biological anchors the "trick that never works" and gives an initial example of Moravec predicting AGI in 2010 in the book Mind Children.

But as far as I can tell so far that's just Eliezer putting words in Moravec's mouth. Moravec doesn't make very precise predictions in the book, but the heading of the relevant section is "human equivalence in 40 years" (i.e. 2028, the book was written in 1988). Eliezer thinks that Moravec ought to think that human-level AI and shortly thereafter a singularity will occur at the time when a giant cluster is as big as a brain, which Moravec puts in 2010. But I don't see any evidence that Moravec agreed with that implication, and the book seems to generally talk about a timeframe like 2030-2040. Eliezer repeated this claim in our conversation but still didn't really provide any indication Moravec held this view.

To the extent that people were imagining neural networks, I don't think they would have expected trained neural networks to be the size of a computing cluster. It's not the straightforward extrapolation from the kinds of neural networks people were actually computing, so someone going on vibes wouldn't make that forecast. And if you try to actually pencil out the training cost, it's clear it won't work, since you have to run a neural network a huge number of times during training; so someone trying to think things through on paper wouldn't think that either. At least since the 1990s I've seen a lot of people making predictions along these lines, but as far as I can tell they give actual predictions in the 2020s or 2030s, which currently look quite good to me relative to every other forecasting methodology.
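The "pencil out the training cost" step can be illustrated with a standard back-of-envelope rule. Every number below is an illustrative assumption of mine, not a figure from the comment; the point is only the order of magnitude.

```python
# Common rule of thumb for dense networks trained by backprop:
# training FLOPs ~= 6 * (parameters) * (training tokens/examples seen).
params = 1e11          # assumed: a network sized to fill a large cluster
tokens = 1e12          # assumed: training tokens processed once
flops_per_sec = 1e12   # assumed: sustained throughput, 1 TFLOP/s

train_flops = 6 * params * tokens
train_years = train_flops / flops_per_sec / (3600 * 24 * 365)
# With these assumptions train_years is roughly 19,000: the same hardware
# that could merely *run* the network could not plausibly train it,
# which is the comment's point about the forecast failing on paper.
```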

This graph nicely summarizes his timeline from Mind Children in 1988. The book itself presents his view that AI progress is primarily constrained by compute power available to most researchers, which is usually around that of a PC.

Moravec et al were correct in multiple key disagreements with EY et al:

  • That progress was smooth and predictable from Moore's Law (similar to how the arrival of flight is postdictable from ICE progress)
  • That AGI would be based on brain-reverse engineering, and thus will be inherently anthropomorphic
  • That "recursive self-improvement" was mostly relevant only in the larger systemic sense (civilization level)

LLMs are far more anthropomorphic (brain-like) than the fast clean consequential reasoners EY expected:

  • close correspondence to linguistic cortex (internal computations and training objective)
  • complete with human-like cognitive biases!
  • unexpected human-like limitations: struggle with simple tasks like arithmetic, longer term planning, etc
  • AGI misalignment insights from Jungian psychology more effective/useful/popular than MIRI's core research

All of this was predicted from the systems/cybernetic framework/rubric that human minds are software constructs, brains are efficient and tractable, and thus AGI is mostly about reverse engineering the brain and then downloading/distilling human mindware into the new digital substrate.

I don't know if the graph settles the question---is Moravec predicting AGI at "Human equivalence in a supercomputer" or "Human equivalence in a personal computer"? Hard to say from the graph.

The fact that he specifically talks about "compute power available to most researchers" makes it more clear what his predictions are. Taken literally that view would suggest something like: a trillion dollar computing budget spread across 10k researchers in 2010 would result in AGI in not-too-long, which looks a bit less plausible as a prediction but not out of the question.
