Lone Genius Bias and Returns on Additional Researchers

by ChrisHallquist · 6 min read · 1st Nov 2013 · 64 comments


Futurism · Practice & Philosophy of Science · AI Timelines
Personal Blog

One thing that most puzzles me about Eliezer's writings on AI is his apparent belief that a small organization like MIRI is likely to be able to beat larger organizations like Google or the US Department of Defense to building human-level AI. In fact, he seems to believe such larger organizations may have no advantage at all over a smaller one, and perhaps will even be at a disadvantage. In his 2011 debate with Robin Hanson, he said:

As far as I can tell what happens when the government tries to develop AI is nothing. But that could just be an artifact of our local technological level and it might change over the next few decades. To me it seems like a deeply confusing issue whose answer is probably not very complicated in an absolute sense. Like we know why it’s difficult to build a star. You’ve got to gather a very large amount of interstellar hydrogen in one place. So we understand what sort of labor goes into a star and we know why a star is difficult to build. When it comes to building a mind, we don’t know how to do it so it seems very hard. We like query our brains to say “map us a strategy to build this thing” and it returns null so it feels like it’s a very difficult problem. But in point of fact we don’t actually know that the problem is difficult apart from being confusing. We understand the star-building problem so we know it’s difficult. This one we don’t know how difficult it’s going to be after it’s no longer confusing.

So to me the AI problem looks like a—it looks to me more like the sort of thing that the problem is finding bright enough researchers, bringing them together, letting them work on that problem instead of demanding that they work on something where they’re going to produce a progress report in two years which will validate the person who approved the grant and advance their career. And so the government has historically been tremendously bad at producing basic research progress in AI, in part because the most senior people in AI are often people who got to be very senior by having failed to build it for the longest period of time. (This is not a universal statement. I’ve met smart senior people in AI.)

But nonetheless, basically I’m not very afraid of the government because I don’t think it’s a throw warm bodies at the problem and I don’t think it’s a throw warm computers at the problem. I think it’s a good methodology, good people selection, letting them do sufficiently blue sky stuff, and so far historically the government has been tremendously bad at producing that kind of progress. (When they have a great big project to try to build something it doesn’t work. When they fund long-term research it works.)

I admit, I don't feel like I fully grasp all the reasons for the disagreement between Eliezer and myself on this issue. Some of the disagreement, I suspect, comes from slightly different views on the nature of intelligence, though I'm having trouble pinpointing what those differences might be. But some of the difference, I think, comes from the fact that I've become convinced humans suffer from a Lone Genius Bias—a tendency to over-attribute scientific and technological progress to the efforts of lone geniuses.

Disclaimer: My understanding of Luke's current strategy for MIRI is that it does not hinge on whether MIRI itself eventually builds AI. It seems to me that as long as MIRI keeps publishing research that could potentially help other people build FAI, MIRI is doing important work. Therefore, I wouldn't advocate anything in this post being taken as a reason not to donate to MIRI. I've donated recently, and will probably [edit: see below] continue to do so in the future.

Intelligence Explosion Microeconomics has an interesting section labeled "Returns on Population" (section 3.4) where, among other things, Eliezer says:

Although I expect that this section of my analysis will not be without controversy, it appears to the author to also be an important piece of data to be explained that human science and engineering seem to scale over time better than over population—an extra decade seems much more valuable than adding warm bodies.

Indeed, it appears to the author that human science scales ludicrously poorly with increased numbers of scientists, and that this is a major reason there hasn’t been more relative change from 1970–2010 than from 1930–1970 despite the vastly increased number of scientists. The rate of real progress seems mostly constant with respect to time, times a small factor more or less. I admit that in trying to make this judgment I am trying to summarize an overwhelmingly distant grasp on all the fields outside my own handful. Even so, a complete halt to science or a truly exponential (or even quadratic) speedup of real progress both seem like they would be hard to miss, and the exponential increase of published papers is measurable. Real scientific progress is continuing over time, so we haven’t run out of things to investigate; and yet somehow real scientific progress isn’t scaling anywhere near as fast as professional scientists are being added.

The most charitable interpretation of this phenomenon would be that science problems are getting harder and fields are adding scientists at a combined pace which produces more or less constant progress. It seems plausible that, for example, Intel adds new researchers at around the pace required to keep up with its accustomed exponential growth...

Eliezer goes on to suggest, however, that Intel is not at all typical, and proposes some other explanations, two of which ("science is inherently bounded by serial causal depth" and that scientific progress is limited by the need to wait for the last generation to die) suggest that progress doesn't scale at all with added researchers, at least past a certain point.

I'm inclined to think that Eliezer's basic claim here—that research progress scales better with time than population—is probably correct. Doubling the number of researchers working on a problem rarely means solving the problem twice as fast. However, I doubt the scaling is as ludicrously bad as Eliezer suggests. I suspect the case of Intel is fairly typical, and the "science problems are getting harder" theory of the history of science has a lot more going for it than Eliezer wants to grant.

For one thing, there seems to be a human bias in favor of attributing scientific and technological progress to lone geniuses—call it the Lone Genius Bias. In fiction, it's common for the cast to have a single "smart guy," a Reed Richards type, who does everything important in the science and technology area, pulling off miraculous achievements all by himself. (If you're lucky, this role will be shared by two characters, like Fitz-Simmons on Joss Whedon's new S.H.I.E.L.D. TV show.) Similarly, villainous plots often hinge on kidnapping one single scientist who will be able to supply all the villain's technical know-how needs.

There's some reason to chalk this up to peculiarities of fiction (see the TV Tropes articles on the Omnidisciplinary Scientist and The Main Characters Do Everything generally). But it often seems to bleed over into perceptions of real-life scientists and engineers. Saul Kripke, in the course of making a point about proper names, once claimed that he often met people who identified Einstein as the inventor of the atom bomb.

Of course, in reality, Einstein just provided the initial theoretical basis for the atom bomb. Not only did the bomb itself require the Manhattan Project (which involved over 100,000 people) to build, but there was a fair amount of basic science that had to take place after Einstein's original statement of mass-energy equivalence in 1905 before the Manhattan Project could even be conceived of.

Or: in the popular imagination, Thomas Edison was an amazingly brilliant inventor, almost on par with Reed Richards. A contrarian view, popular among tech geeks, says that actually Edison was a jerk who got famous taking credit for other people's work, and also he depended on having a lot of other people working for him at Menlo Park. But then there's a meta-contrarian view that argues that Menlo Park was "the first industrial research lab," and industrial research labs are very important, to the point that Menlo Park itself was Edison's "major innovation." On this view, it's not Edison's fault that Lone Genius Bias leads people to misunderstand what his true contribution was.

It's easy to see, in evolutionary terms, why humans might suffer from Lone Genius Bias. In the ancestral environment, major achievements would often have been the work of a single individual. Theoretically, there might have been the occasional achievement that required the cooperation of an entire hunter-gatherer band, but major achievements were never the work of Intel-sized R&D departments or 100,000-person Manhattan Projects. (This is an instance of the more general principle that humans have trouble fully grokking complex modern societies.)

Once you know about Lone Genius Bias, you should be suspicious when you find yourself gravitating towards future scenarios where the key innovations are the work of a few geniuses. Furthermore, it's not just that big projects are more common now than they were in the ancestral environment. The tendency of major advances to be the work of large groups seems to have noticeably increased over just the last century or so, and that trend may well continue in the future.

Consider Nobel Prizes. The first Nobel Prizes were awarded in 1901. When people think of Nobel Prize winners they tend to think of unshared Nobel Prizes, like Einstein's, but in fact a Nobel Prize can be shared by up to three people. And when you look at the list of Nobel Prize winners over the years, the tendency towards giving out more and more shared prizes as time goes on is obvious.

In fact, given the way science currently works, many people find the rule that no more than three people can share a prize too restrictive. The Nobel for the discovery of the Higgs boson, for example, went to two theoreticians who predicted the particle decades ago, while ignoring the contributions of the large number of experimental scientists whose work was required to confirm the particle's existence. An IEEE Spectrum headline went as far as to state the prize "ignores how modern science works."

You can reach the same conclusion just by looking at the bylines on scientific papers. The single-author scientific paper "has all but disappeared." Some of that may be due to people gaming the citation-count-as-measure-of-scientific-productivity system, but my impression is that the typical university science lab's PI (principal investigator) really couldn't be nearly as productive without their miniature army of postdocs, grad students, and paid staff. (Consider also that gaming of citation counts hasn't led to an explosion of authors-per-paper in fields like philosophy, where there are obviously fewer benefits to collaboration.)

And if you need one more argument that scientific problems are getting harder, and increasingly unlikely to be solved by lone geniuses... what does anyone honestly think the chances are that the Next Big Thing in science will come in the form of some 26-year-old publishing a few single-author papers in the same year he got his PhD?

Update: Luke's comments on this post are awesome and I recommend people read them.