In an erratum to my previous post on Pascalian wagers, it has been plausibly argued to me that all the roads to nuclear weapons, including plutonium production from U-238, may have bottlenecked through the presence of significant amounts of Earthly U-235 (apparently even the giant heap of unrefined uranium bricks in Chicago Pile 1 was, functionally, empty space with a scattering of U-235 dust).  If this is the case then Fermi's estimate of a "ten percent" probability of nuclear weapons may have actually been justifiable, because nuclear weapons were almost impossible (at least without particle accelerators) - though it's not totally clear to me why "10%" instead of "2%" or "50%", but then I'm not Fermi.

We're all familiar with examples of correct scientific skepticism, such as about Uri Geller and hydrino theory.  We also know many famous examples of scientists just completely making up their pessimism, for example about the impossibility of human heavier-than-air flight.  Before this occasion I could only think offhand of one other famous example of erroneous scientific pessimism that was not in defiance of the default extrapolation of existing models, namely Lord Kelvin's careful estimate from multiple sources that the Sun was around sixty million years of age.  This was wrong, but because of new physics - though you could make a case that new physics might well be expected in this case - and there was some degree of contrary evidence from geology, as I understand it - and that's not exactly the same as technological skepticism - but still.  Where there are sort of two, there may be more.  Can anyone name a third example of erroneous scientific pessimism whose error was, to the same degree, not something a smarter scientist could've seen coming?

I ask this with some degree of trepidation, since by most standards of reasoning essentially anything is "justifiable" if you try hard enough to find excuses and then not question them further, so I'll phrase it more carefully this way:  I am looking for a case of erroneous scientific pessimism, preferably about technological impossibility or extreme difficulty, where it seems clear that the inverse case for possibility would've been weaker if carried out strictly with contemporary knowledge, after exploring points and counterpoints.  (So that relaxed standards for "justifiability" will just produce even more justifiable cases for the technological possibility.)  We probably should also not accept as "erroneous" any prediction of technological impossibility where it required more than, say, seventy years to get the technology.


"Continental drift" is usually the go-to example. For one, the mechanism originally proposed was complete nonsense...

David_Gerard · 11y (+2)
They didn't have a mechanism at all until subduction - and hence plate tectonics - was discovered. The expanding earth theory was actually considered not implausible by geologists for quite a while; it didn't have anything like a plausible mechanism, but neither did continental drift. I was surprised to discover how recent this was.

There was a pretty solid basis for believing that 2-dimensional crystals were thermodynamically unstable and thus couldn't exist. Then in 2004 Geim and Novoselov did it (isolated graphene for the first time) and people had to re-scrutinize the theory, since it was obviously wrong somehow. It turns out that the previous theory was correct for 2D crystals of essentially infinite size, but it seems not to apply to finite crystals. At least that is how it was explained to me once by a theorist on the subject.

The opening paragraph of this paper cites the relevant literature: http://cdn.intechopen.com/pdfs/40438/InTech-The_cherenkov_effect_in_graphene_like_structures.pdf

Single-layer graphene is really, really unstable: if you let it sit free, it readily scrolls up and is very hard to get unstuck. In this sense, Landau's impossibility proof is entirely correct.

And that's why we don't use free-standing graphene without a frame, for just about anything. The closest we get is graphene oxide dissolved in a liquid, or extremely extremely tiny platelets that don't really deserve to be called crystals.

The pessimism about non-usefulness of graphene lay entirely in forgetting that you could put it on a backing or stretch it out (or thinking that it would lose its interesting properties if you did the former), and that was not justifiable at all.

Lord Kelvin was wrong but was he pessimistic? He wasn't saying we could never know the answer, or visit the sun, or anything like that. Yes, he guessed wrongly, and too low, but it doesn't seem to be the case that 'underestimating a quantity' is pessimism. If nothing else, the quantity might be 'number of babies killed'.

Luke_A_Somers · 11y (+1)
It was pessimistic in the sense that under his estimate the sun was steadily cooling and so we'd all freeze to death long before the real sun will present us any trouble.
Jack · 11y (+2)
Did he give an estimate of when we'd all freeze to death?
Plasmon · 11y (+8)
He estimated the sun was no more than 20 million years old, and presumably did not expect it to last for more than a few tens of millions of years more.
Luke_A_Somers · 11y (+4)
Not that I know of. Gravitational collapse is a really lousy, short-term source of energy, which is why he gave such a short estimate. Still on the scale of millions of years, I think.
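For scale, the Kelvin-Helmholtz timescale (gravitational binding energy divided by luminosity) can be sketched with modern solar values. This is an editorial back-of-the-envelope check, not a calculation from the thread, and it drops order-unity structure factors:

```python
# Kelvin-Helmholtz timescale: how long gravitational contraction alone
# could power the Sun at its present luminosity. Rough estimate only.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m
L_SUN = 3.828e26     # solar luminosity, W
SECONDS_PER_YEAR = 3.156e7

t_kh_seconds = G * M_SUN**2 / (R_SUN * L_SUN)
t_kh_years = t_kh_seconds / SECONDS_PER_YEAR
print(f"Kelvin-Helmholtz timescale: {t_kh_years:.2e} years")
# comes out around 3e7 years - tens of millions, just as Kelvin estimated
```

So a contraction-powered Sun really does cap out at tens of millions of years, which is why Kelvin's estimate was internally consistent given pre-nuclear physics.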

The claim that the Sun revolves around the Earth. If the Earth revolved around the Sun, there would have been a parallax in the observations of stars from different positions in the orbit. There was no observable parallax, so Earth probably didn't revolve around the Sun.
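As an editorial aside with assumed numbers (a roughly one-arcminute resolution limit for Tycho-era instruments, plus hypothetical star distances nobody then knew), the expected parallax depends entirely on how far away the stars are assumed to be, which is why the no-parallax argument seemed strong:

```python
import math

AU_PER_PARSEC = 206265  # definition: 1 parsec = 206265 AU

def parallax_arcsec(distance_au):
    """Annual parallax (arcseconds) for a star at the given distance,
    using a 1 AU baseline (Earth's orbital radius)."""
    return math.degrees(math.atan(1.0 / distance_au)) * 3600

# Distances contemporaries might have assumed, vs. the real nearest star
for d_au in (10_000, 100_000, AU_PER_PARSEC * 1.3):  # ~1.3 pc ~ Proxima
    print(f"{d_au:>9.0f} AU -> {parallax_arcsec(d_au):6.2f} arcsec")

# Pre-telescopic instruments resolved ~60 arcsec (1 arcminute) at best,
# so even a star 10,000 AU out (~20 arcsec of parallax) shows nothing.
```

Absent parallax, the choice was between a stationary Earth and stars implausibly (by the standards of the day) far away - and the former looked like the more parsimonious reading.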

gwern · 11y (+2)
I thought that parallax argument was applied to the stars, not the Sun?
Randaly · 11y (+6)
Yeah, that's what I meant. (No parallax in star observations -> the Earth isn't moving -> the Sun is revolving around the Earth.)
Jack · 11y (+1)
*there would have been a parallax given assumptions at the time regarding the distance of the stars. I've wondered though: if there were no planets besides Earth would we have persisted as geocentrists until the 19th century?
SilasBarta · 11y (0)
If there were no celestial bodies but Earth and the sun, we would have been just as correct as heliocentrists.
Jack · 11y (+3)
I don't think that's right.
ArisKatsaris · 11y (+8)
The center of mass for the Earth-sun system is inside the sun; so, yeah, the heliocentrists wouldn't be "just as correct". If the two masses were equal, then Earth and Sun would orbit a point that was equidistant to them; and in that scenario heliocentrists and geocentrists would be equally wrong....
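A quick numerical sketch (standard textbook values; an editorial illustration, not ArisKatsaris's own numbers) confirms the barycenter claim:

```python
# Two-body barycenter: distance of the Earth-Sun center of mass
# from the Sun's center.
M_SUN = 1.989e30      # kg
M_EARTH = 5.972e24    # kg
A_EARTH = 1.496e11    # mean Earth-Sun distance, m
R_SUN = 6.957e8       # solar radius, m

barycenter_from_sun = A_EARTH * M_EARTH / (M_EARTH + M_SUN)
print(f"barycenter: {barycenter_from_sun / 1000:.0f} km from the Sun's center")
print(f"solar radius: {R_SUN / 1000:.0f} km")
# The barycenter sits roughly 450 km from the Sun's center - deep inside
# the ~700,000 km solar radius - so "the Earth orbits the Sun" is by far
# the better approximation of the two.
```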
Luke_A_Somers · 11y (0)
That's a justifiable error, but I don't see how it's pessimistic.
[anonymous] · 11y (+4)
"Pessimistic" is a loaded term and I'm not sure if it's all that useful in the context of this discussion in the first place.
Luke_A_Somers · 11y (+2)
It's crucial to the original point that Eliezer was making, which was differentiating technological pessimism from technological optimism. This isn't technology, and though it makes a difference to the universe as a whole, it wouldn't be better or worse for us either way.

Off the top of my head, how about the Landau pole? A famous and usually-right genius calculated that the gauge theories of quantum fields are a dead end, and set Soviet physics - and to some degree Western physics - back a few years, if I recall correctly. His calculation was not wrong; he simply missed the alternate possibilities.

EDIT: hmm, I'm having trouble locating any links discussing the negative effects of the Landau pole discovery on the QED research.

We also know many famous examples of scientists just completely making up their pessimism, for example about the impossibility of human heavier-than-air flight.

This isn't what you asked for, but I might as well enumerate a few of these examples, for everyone's benefit. For the field of AI research:

"You can build a machine to draw [logical] conclusions for you, but I think you can never build a machine that will draw [probabilistic] inferences."

George Pólya (1954), ch. 15 — a few decades before the probabilistic revolution in AI.


[Machines] cannot play chess any more than they can play football.

Technically, he was correct.

NancyLebovitz · 11y (0)
I like the idea of football (soccer) played by quadrupeds.

Taube did not mean "Machines cannot be made to choose good chess moves" (a claim that has, indeed, been amply falsified). Here's a bit more context, from the linked paper.

[...] there are analog relationships in real chess -- such as the emptiness of a line [...] which cannot be directly handled by any digital machine. These analog relationships can be approximated digitally [...] in order to determine whether a given line is empty [...] such a set of calculations is not identical to the visual recognition that the space between two pieces is empty. A large part of the enjoyment of chess [...] derives from its deployment or topological character, which a machine cannot handle except by elimination. If game is used in the usual sense -- that is, as it was used before the word was redefined by computer enthusiasts with nothing more serious to do -- it is possible to state categorically that machines cannot play games. They cannot play chess any more than they can play football.

Taube's point, if I'm not misunderstanding him grossly, is that part of what it means to play a game of chess is (not merely to choose moves repeatedly until the game is over, but) to have somethin[...]

You accuse lukeprog of being misleading in taking a quote from a mere "librarian", and as we all know, a librarian is a harmless drudge who just shelves books, hence

it doesn't confer the kind of expertise that would make it surprising or even very interesting for Taube to have been wrong here.

I accuse you of being highly misleading in at least two ways here:

  1. in 1960, a librarian is one of the occupations - outside actual computer-based occupations - most likely to have hands-on familiarity with computers & things like Boolean logic, for the obvious reason that being a librarian is often about research where computers are invaluable. A librarian could well have extensive experience, and so it's not much of a mark against him.
  2. Mortimer Taube turns out to be the kind of 'librarian' who exemplifies this; the little byline to his letter about "Documentation Incorporated" should have been an indicator that maybe he was more than just a random schoolhouse librarian stamping in kids' books, but because you did not see fit to add any background on what sort of 'librarian' Taube was, I will:

    ...He is on the list of the 100 most important leaders in Library and I

[...]
gjm · 11y (+8)
I really can't think of a polite way to say this, so: Bullshit.

1. I wasn't accusing Luke of anything; I was disagreeing with him. Disagreement is not accusation. When I want to make an accusation, I will make an accusation, like this one: You have mischaracterized what I wrote, and made totally false insinuations about my opinions and attitudes, and I have to say I'm pretty shocked to see someone as generally excellent as you behaving in such a way.

2. I do not think, and I did not say, and I had not the slightest intention of implying, that "a librarian is a harmless drudge who just shelves books". Allow me to remind you how Luke's comment begins. The boldface emphasis is mine. Taube was, despite his many excellent qualities, not a scientist as that term is generally understood, and he was, despite his many excellent qualities, not working in "the field of AI research". (Yes, I know the Wikipedia page says he was "a true innovator in the field of science". Reading what it says he did, though, I really can't see that what he did was science. For the avoidance of doubt, and in the probably overoptimistic hope that saying this will stop you pulling the same what-a-snob-this-person-is move as you already did above, I don't think that "not science" is in any way the same sort of thing as "not valuable" or "not important" or "not difficult". What the creators of (say) the Firefox web browser did was important and valuable and difficult, but happens not to be science. What Beethoven did was important and valuable and difficult, but happens not to be science. What Martin Luther King did was important and valuable and difficult, but happens not to be science.)

Pointing this out doesn't mean I think there's anything wrong with being a librarian. When I said "a librarian is a fine thing to be", I meant it. (And, for the avoidance of doubt, it is my opinion both when "librarian" means "someone who shelves books in a library" and when it means "a world-class expert on
wedrifid · 11y (+3)
Thank you for your research. I was misled by the grandparent.
lukeprog · 11y (+2)
"Eliezer" should be "lukeprog".
gwern · 11y (+6)
Hah, whups. And so it goes - you correct Eliezer's lack of examples, gjm corrects your description of Taube, I correct gjm's description of Taube, and you correct my description of gjm's description...
yli · 11y (0)
Would a chess program qualify if it kept a table of all the lines on the board, tracked whether each one is empty, and used that table as part of its move-choosing algorithm? If not, I think we might be into qualia territory when it comes to making sense of how exactly a human is recognizing the emptiness of a line and that program isn't.
gjm · 11y (+1)
Yup. I strongly suspect that Taube was in fact "into qualia territory", or something along those lines, when he wrote that.

Here is another famous example: Chandrasekhar's limit. Eddington rejected the idea of black holes ("I think there should be a law of Nature to prevent a star from behaving in this absurd way!"). Says wikipedia:

Chandra's discovery might well have transformed and accelerated developments in both physics and astrophysics in the 1930s. Instead, Eddington's heavy-handed intervention lent weighty support to the conservative community astrophysicists, who steadfastly refused even to consider the idea that stars might collapse to nothing.

I guess this [...]

betterthanwell · 11y (+3)
Eddington erroneously dismissed M_(white dwarf) > M_limit ⇒ "a black hole" , but didn't he correctly anticipate new physics? Do event horizons (Finkelstein, 1958) not prevent nature from behaving in "that absurd way", so far as we can ever observe? * http://en.wikipedia.org/wiki/Cosmic_censorship_hypothesis
shminux · 11y (0)
It's hard to know what Eddington meant by "absurd way". Presumably he meant that this hypothetical law would prevent matter from collapsing into nothing. Possibly if Chandrasekhar had figured out the strange properties of the event horizon back in 1935 and had emphasized that whatever weird stuff is happening beyond the final Chandrasekhar limit is hidden from view, Eddington would not have reacted as harshly. But that took another 20-30 years, even though the relevant calculations require at most 3rd year college math. Besides, Chandrasekhar's strength was in mathematics, not physics, and he could not compete with Eddington in physics intuition (which happened to be quite wrong in this particular case).

The general success rate of breakthroughs is pretty damn low, and so I'd argue that most examples of "invalid" pessimism (excluding some stupid ones coming from scientists you never heard of before coming across a quote, and excluding things like PR campaigning by Edison), viewed in the context of almost all breakthroughs failing for some reason you can't anticipate, are not irrational but simply reflect absence of strong evidence in favour of success (and absence of strong evidence against unknown obstacles), at the time of assessment (and corre[...]

ESRogs · 11y (0)
I'm having trouble understanding your second paragraph. This is probably just due to missing background knowledge on my part, but would you mind explaining what you mean by: and Thanks!
private_messaging · 11y (+1)
There was a really silly argument about Fermi's 10% estimate, scattered over several threads (which the OP talks about). Yudkowsky had been arguing that Fermi's estimate was too low. He came up with the idea that surely there would have been one element (out of many) that would have worked, so the probability should have been higher. That was wrong because (a) it's not as if some elements' fissions released neutrons and some didn't, and (b) there was only one isotope to start from (U-235), not many.
ESRogs · 11y (+2)
Do all elements' fissions release neutrons?
private_messaging · 11y (+2)
Yes. The issue is that the argument "look at the periodic table, it's so big, there would be at least one" requires treating whether fission releases neutrons as independent across nuclei.
ESRogs · 11y (0)
Gotcha, thanks.

I'm not sure if this is justifiable or just an old-fashioned blunder...

On the subject of stars, all investigations which are not ultimately reducible to simple visual observations are…necessarily denied to us… We shall never be able by any means to study their chemical composition.

-- Auguste Comte, 1835

I'm leaning towards "blunder" myself...

Yeah, blunder. Wikipedia says:

In the 1820s both John Herschel and William H. F. Talbot made systematic observations of salts using flame spectroscopy. In 1835, Charles Wheatstone reported that different metals could be easily distinguished by the different bright lines in the emission spectra of their sparks, thereby introducing an alternative mechanism to flame spectroscopy.

wedrifid · 11y (+5)
Well, the first half seems approximately correct. The second sentence should have begun with "And by clever application of this means we shall...".
A1987dM · 11y (+5)
Even if you interpret “visual” as ‘mediated by photons’, there's such a thing as neutrino astronomy.
sketerpot · 11y (+5)
It wasn't until the 1850s that Ångström discovered that elements both emit and absorb light at characteristic wavelengths, which is what spectroscopic analysis of stars is based on, so I'm leaning toward justifiable.

it has been plausibly argued to me that all the roads to nuclear weapons, including plutonium production from U-238, may have bottlenecked through the presence of significant amounts of Earthly U235

This has interesting repercussions for Fermi's paradox.

JoshuaZ · 11y (+8)
Yes, particularly in the context that you and I discussed earlier that intelligent life arising earlier might have had an easier time wiping itself out. Although the consensus there seemed to be that it wouldn't be a large enough difference to matter for serious filtration issues.

I posted the following in a quotes page a few months back. I don't know how justifiable these were, and these are only questionably pessimism, but there may be some interesting examples in this. In particular, my light knowledge of the subject suggests that there really were extremely compelling reasons to disregard Feynman's formulation of QED for many years after it was first introduced.

It is interesting to note that Bohr was an outspoken critic of Einstein's light quantum (prior to 1924), that he mercilessly denounced Schrodinger's equation, discourag

[...]

Here's an example of the 'opposite' - a case of unjustifiable correct optimism:

Columbus knew the Earth was round, but he should also have known the radius of the Earth and the size of Eurasia well enough to know that the westward voyage to Asia was simply impossible with the ships and supplies he went with. It seems to have turned out OK for him, though.

This is probably not a very useful example and I wouldn't be surprised to see that there were plenty more of these examples.

Kuhn's Structure of Scientific Revolutions is all about how an old scientific approach is often more right than the new school -- fits the data better, at least in the areas widely acknowledged to be central. Only later does the new approach become refined enough to fit the data better.

Bruno_Coelho · 11y (+2)
To him (Kuhn), it isn't evidence that maintains an old paradigm's status quo, but persuasion - established figures making remarks about the virtues of their theory. New folks in academia have to convince a good number of people to make the new theory relevant.
JoshuaFox · 11y (+2)
Yes, "Science advances one funeral at a time", but this, from Wikipedia, is a pretty good summary of a typical "scientific revolution": "...Copernicus' model needed more cycles and epicycles than existed in the then-current Ptolemaic model, and due to a lack of accuracy in calculations, Copernicus's model did not appear to provide more accurate predictions than the Ptolemy model. Copernicus' contemporaries rejected his cosmology, and Kuhn asserts that they were quite right to do so: Copernicus' cosmology lacked credibility."

Thomas Malthus' view that in the long run we will always be stuck in (what we now call) the Malthusian trap. He would have been right if not for the sustained growth given to us by the industrial revolution.

Not clear his view is erroneous given suitable values for "long run".

gwern · 11y (+1)
How so? Last I checked, human populations could still, if they wanted to, pop out children faster than the average real global growth rate since the IR of ~2%.
James_Miller · 11y (+4)
What's relevant to whether we are in a Malthusian trap is the actual birth rate, not what the birth rate would be if people wanted to have far more children.
gwern · 11y (+9)
I'll be more explicit then: the 'sustained growth' is almost irrelevant since per the usual Malthusian mechanisms it is quickly eliminated. What made Malthus wrong, what he was pessimistic about, was whether people would exercise "moral restraint" - in other words, he didn't think the demographic transition would happen. It did, and that's why we're wealthy.
SilasBarta · 11y (+3)
But how do you know it's the "moral restraint" that averted the Malthusian catastrophe, rather than the innovations (by the additional humans) that amplified the effective carrying capacity of available resources? In fact, the moral restraint could be keeping us closer to the catastrophe than if we had been producing more humans.
gwern · 11y (+1)
Because population growth can outpace innovation growth. This is not a hard concept.
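The underlying arithmetic, with illustrative rates chosen by the editor (roughly 2% output growth, the post-Industrial-Revolution average cited in this thread, against a modest 3% population growth), shows how the bigger blade wins:

```python
# Compound growth: output grows 2%/yr while an unchecked population
# grows 3%/yr (well below historical maximum birth rates). Per-capita
# income then shrinks by roughly 1%/yr, no matter how large output gets.
OUTPUT_GROWTH = 0.02
POP_GROWTH = 0.03
YEARS = 70

output = pop = 1.0
for _ in range(YEARS):
    output *= 1 + OUTPUT_GROWTH
    pop *= 1 + POP_GROWTH

per_capita = output / pop
print(f"after {YEARS} years, per-capita income is {per_capita:.2f}x")
# (1.02/1.03)**70 is about 0.51 - per-capita income roughly halves
```

A one-percentage-point gap is all it takes; only a change in reproductive behavior (the demographic transition), not innovation alone, breaks the trap.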
SilasBarta · 11y (0)
I know. But your post seemed to be taking the position in favor of population growth (change) as the relevant factor rather than innovation. I was asking why you (seemed to have) thought that.
gwern · 11y (+6)
Population growth and innovation are two sides of a scissor: innovation drives potential per capita up, population growth drives it down. But the blade of population growth is far bigger than the blade of innovation growth, because everyone can pump out children and few can pump out innovation. Hence, innovation can be seen as necessary - but it is not sufficient, in the absence of changes to reproductive patterns.
SilasBarta · 11y (+2)
Okay, that's where I disagree: Each additional person is also another coin toss (albeit heavily stacked against us) in the search for innovators. The question then is whether the possible innovations, weighted by probability of a new person being an innovator (and to what extent) favors more or fewer people. There's no reason why one effect is necessarily greater than the other and hence no reason for the presumption of one blade being larger.
gwern · 11y (+2)
There is no a priori reason, of course. We can imagine a world in which brains were highly efficient and people looked more like elephants, in which one could revolutionize physics every year or so but it takes a decade to push out a calf. Yet, the world we actually live in doesn't look like that. A woman can (and historically, many have) spend her life in the kitchen making no such technological contributions but having 10 kids. (In fact, one of my great-grandmothers did just that.) It was not China or India which launched the Scientific and Industrial Revolutions.
SilasBarta · 11y (0)
The ability to produce lots of children does not at all work against the ability of innovators and innovator probability to overcome their resource-extraction load. In order for your strategy to actually work against the potential innovation, you would have to also suppress the intelligence (probability) of your children to the point where the innovation blade is sufficiently small. And you would have to do it without that action itself causing the die-off, and while ensuring they can continue to execute the strategy on the next generation. And keep in mind, you're working against the upper tail of the intelligence bell curve, not the mode. Innovation in this context needn't be revolution-size. China and India (and the Islamic Empire) did innovate faster than the West, and averted many Malthusian overtakings along the way (probably reaching 800 years ahead at their zenith). Malthus would have known about this at the time.
gwern · 11y (0)
I'm not following your terms here. Obviously the ability to produce lots of children does in fact sop up all the additional production - that's why per capita incomes on net essentially do not change over thousands of years while populations get bigger. So you can't mean that, but I don't know what you mean. They innovated faster at some points, arguably. And the innovation, such as in farming techniques, helped support a higher population - and a poorer population. Malthus would have known this about China, did, and used China as an example of a number of things, for example, the consequences of a subsistence wage which is close to starvation http://en.wikisource.org/wiki/An_Essay_on_the_Principle_of_Population/Chapter_VII :
randallsquared · 11y (0)
That's not even required, though. What we're looking for (blade-size-wise) is whether a million additional people produce enough innovation to support more than a million additional people, and even if innovators are one in a thousand, it's not clear which way that swings in general.
gwern · 11y (0)
Sure, it's just an example which does not seem to be impossible but where the blade of innovation is clearly bigger than the blade of population growth. But the basic empirical point remains the same: the world does not look like one where population growth drives innovation in a virtuous spiral or anything remotely close to that*. * except, per Miller's final reply, in the very wealthiest countries post-demographic-transition where reproduction is sub-replacement and growth maybe even net negative like Japan and South Korea are approaching, then in these exceptional countries some more population growth may maximize innovation growth and increase rather than decrease per capita income.
James_Miller · 11y (+1)
I can't prove this, but I believe that in the United States and Western Europe we would still be rich (in the sense that calorie deprivation wouldn't pose a health risk to the vast majority of the population) if the birth rate had stayed the same since Malthus's time.
gwern · 11y (0)
That makes no sense to argue: Malthus's time was part of the demographic transition. Of course I would agree that if the demographic transition continued post-Malthus - as it did - we would see higher per capita (as we did). But look up the extremely high birth rates of some times and places (you can borrow some figures from http://www.marathon.uwc.edu/geography/demotrans/demtran.htm ), apply modern United States & Western Europe infant and child mortality rates, and tell me whether the population growth rate is merely much higher than the real economic growth rates of ~2% or extraordinarily higher. You may find it educational.
James_Miller · 11y (+3)
But I believe that from the point of view of maximizing the per person wealth of the United States and Western Europe the population growth rate has been much, much too low since the industrial revolution. (I admittedly have no citations to back this up.)
gwern · 11y (+4)
Maybe. That's not the same thing as what you said initially, though.
Error · 11y (0)
I was always under the impression that what thwarted his hypothesis was the rise of effective and widespread birth control. I remember reading one of his works and noting that it was operating on the assumption that, to reduce birthrate to sustainable levels, sex would have to be reduced, and that was unlikely. It is unlikely, but it's also mostly decoupled from childbirth now, at least in the developed world. Have I misinterpreted something here?
Eugine_Nier · 11y (+2)
I believe he considered the possibility of birth control, referring to it as "immorality".
private_messaging · 11y (-1)
We'll just evolve for restraint not to work any more.
gwern · 11y (+3)
Yes, that's the question: is the demographic transition temporary? I've brought it up before: http://lesswrong.com/lw/5dl/is_kiryas_joel_an_unhappy_place/
A1987dM · 11y (+2)
(Was there a SMBC comic or something about men evolving a condom-breaking mechanism in their penis?)

We're rapidly evolving a condom-not-putting-on mechanism in the brain.

[anonymous] · 11y (+2)
"Watch out for that cliff!" "It looks pretty far off, and besides, we're turning left soon anyway." "But we could keep accelerating!"
gwern · 11y (+2)
Your reply seems completely irrelevant to the Malthusian point that population growth can always exceed growth in total factor productivity, and so it is population growth - or the lack of it - which dominates and determines per-capita income.

This blog post claims that only a few years before the Wright brothers' success, the consensus was that flying machines would necessarily have to be less dense than air (like hot air balloons).

it has been plausibly argued to me that all the roads to nuclear weapons, including plutonium production from U-238, may have bottlenecked through the presence of significant amounts of Earthly U235 (apparently even the giant heap of unrefined uranium bricks in Chicago Pile 1 was, functionally, empty space with a scattering of U235 dust).

"All" is such a strong word unless supplemented with qualifiers. I question the plausibility of the arguments supporting that absolute. The route "wait for an extra century or two of particle physics research and spend a few trillion producing the initial seed stock" would still be available.

In context, Fermi was considering something rather more short-term: WW2.

That said, he may not have scoped his statement to such a small scale.

wedrifid · 11y (+2)
One of many suitable and sufficient qualifiers that could make the arguments plausible.