Justifiable Erroneous Scientific Pessimism

by Eliezer Yudkowsky · 1 min read · 8th May 2013 · 114 comments


Personal Blog

In an erratum to my previous post on Pascalian wagers, it has been plausibly argued to me that all the roads to nuclear weapons, including plutonium production from U-238, may have bottlenecked through the presence of significant amounts of Earthly U-235 (apparently even the giant heap of unrefined uranium bricks in Chicago Pile-1 was, functionally, empty space with a scattering of U-235 dust).  If this is the case, then Fermi's estimate of a "ten percent" probability of nuclear weapons may actually have been justifiable, because nuclear weapons were almost impossible (at least without particle accelerators) - though it's not totally clear to me why "10%" rather than "2%" or "50%", but then I'm not Fermi.

We're all familiar with examples of correct scientific skepticism, such as about Uri Geller and hydrino theory.  We also know many famous examples of scientists just completely making up their pessimism, for example about the impossibility of human heavier-than-air flight.  Before this occasion I could only think offhand of one other famous example of erroneous scientific pessimism that was not in defiance of the default extrapolation of existing models, namely Lord Kelvin's careful estimate from multiple sources that the Sun was around sixty million years of age.  This was wrong, but because of new physics - though you could make a case that new physics might well be expected in this case - and there was some degree of contrary evidence from geology, as I understand it - and that's not exactly the same as technological skepticism - but still.  Where there are sort of two, there may be more.  Can anyone name a third example of erroneous scientific pessimism whose error was, to the same degree, not something a smarter scientist could've seen coming?

I ask this with some degree of trepidation, since by most standards of reasoning essentially anything is "justifiable" if you try hard enough to find excuses and then not question them further, so I'll phrase it more carefully this way:  I am looking for a case of erroneous scientific pessimism, preferably about technological impossibility or extreme difficulty, where it seems clear that the inverse case for possibility would've been weaker if carried out strictly with contemporary knowledge, after exploring points and counterpoints.  (So that relaxed standards for "justifiability" will just produce even more justifiable cases for the technological possibility.)  We probably should also not accept as "erroneous" any prediction of technological impossibility where it required more than, say, seventy years to get the technology.




"Continental drift" is usually the go-to example. For one, the mechanism originally proposed was complete nonsense...

2David_Gerard8yThey didn't have a mechanism at all until subduction, and hence plate tectonics, was discovered. The expanding-earth theory was actually considered not implausible by geologists for quite a while - it didn't have anything like a plausible mechanism, but neither did continental drift. I was surprised to discover how recent this was.

There was a pretty solid basis for believing that 2-dimensional crystals were thermodynamically unstable and thus couldn't exist. Then in 2004 Geim and Novoselov did it (isolated graphene for the first time) and people had to re-scrutinize the theory, since it was obviously wrong somehow. It turns out that the previous theory was correct for 2D crystals of essentially infinite size, but it does not apply to finite crystals. At least, that is how it was explained to me once by a theorist on the subject.

The opening paragraph of this paper cites the relevant literature: http://cdn.intechopen.com/pdfs/40438/InTech-The_cherenkov_effect_in_graphene_like_structures.pdf

Single-layer graphene is really, really unstable: if you let it sit free, it readily scrolls up and is very hard to get unstuck. In this sense, Landau's impossibility proof is entirely correct.

And that's why we don't use free-standing graphene without a frame, for just about anything. The closest we get is graphene oxide dissolved in a liquid, or extremely extremely tiny platelets that don't really deserve to be called crystals.

The pessimism about non-usefulness of graphene lay entirely in forgetting that you could put it on a backing or stretch it out (or thinking that it would lose its interesting properties if you did the former), and that was not justifiable at all.

Lord Kelvin was wrong, but was he pessimistic? He wasn't saying we could never know the answer, or visit the sun, or anything like that. Yes, he guessed wrongly, and too low, but it doesn't seem to be the case that 'underestimating a quantity' is pessimism. If nothing else, the quantity might be 'number of babies killed'.

2Luke_A_Somers8yIt was pessimistic in the sense that under his estimate the sun was steadily cooling, and so we'd all freeze to death long before the real sun would present us any trouble.
1Jack8yDid he give an estimate of when we'd all freeze to death?
4Plasmon8yHe estimated the sun was no more than 20 million [https://en.wikipedia.org/wiki/William_Thomson,_1st_Baron_Kelvin#Age_of_the_Earth:_geology_and_theology] years old, and presumably did not expect it to last for more than a few tens of millions of years more.
2Luke_A_Somers8yNot that I know of. Gravitational collapse is a really lousy, short-term source of energy, which is why he gave such a shorter estimate. Still on the scale of millions of years, I think.
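For scale, the gravitational-collapse estimate mentioned above is the Kelvin-Helmholtz timescale: the Sun's gravitational binding energy (roughly GM^2/R, up to an order-unity structure factor) divided by its luminosity. A rough sketch with present-day constants, not Kelvin's own calculation:

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
R_SUN = 6.957e8     # solar radius, m
L_SUN = 3.828e26    # present-day solar luminosity, W
YEAR = 3.156e7      # seconds per year

# Energy released by contracting to the present radius, ~G M^2 / R
# (ignoring order-unity structure factors), divided by the rate at
# which it is radiated away:
t_kh_years = G * M_SUN**2 / (R_SUN * L_SUN) / YEAR
# ~3e7 years: the same order as Kelvin's figures, and far short of
# the timescales the geologists needed
```

So "millions of years, not billions" really does fall straight out of the physics Kelvin had available.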

The claim that the Sun revolves around the Earth. If the Earth revolved around the Sun, there would have been a parallax in the observations of stars from different positions in the orbit. There was no observable parallax, so Earth probably didn't revolve around the Sun.
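To see how forgiving the null result was, here is a back-of-the-envelope sketch using modern values that pre-telescopic astronomers did not have: even the nearest star's annual parallax is tiny compared with naked-eye instrument resolution.

```python
import math

AU = 1.496e11          # metres, mean Earth-Sun distance
PARSEC = 3.086e16      # metres

def parallax_arcsec(distance_m):
    """Annual parallax angle, in arcseconds, of a star at the given distance."""
    return math.degrees(math.atan(AU / distance_m)) * 3600

# Proxima Centauri, the nearest star, is ~1.3 pc away (a modern value)
p = parallax_arcsec(1.3 * PARSEC)   # ~0.77 arcsec

# Tycho Brahe's naked-eye instruments resolved roughly 1 arcminute,
# so even the largest stellar parallax sat far below his threshold
assert p < 60.0
```

The observation only refuted heliocentrism given the (wrong) contemporary assumption that the stars were nearby.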

1Jack8y*there would have been a parallax given assumptions at the time regarding the distance of the stars. I've wondered though: if there were no planets besides Earth would we have persisted as geocentrists until the 19th century?
0SilasBarta8yIf there were no celestial bodies but Earth and the sun, we would have been just as correct as heliocentrists.
2Jack8yI don't think that's right.
5ArisKatsaris8yThe center of mass for the Earth-sun system is inside the sun; so, yeah, the heliocentrists wouldn't be "just as correct". If the two masses were equal, then Earth and Sun would orbit a point that was equidistant to them; and in that scenario heliocentrists and geocentrists would be equally wrong....
-2Kawoomba8yWhy privilege the center of mass as the reference point? Do we need to find the densest concentration of mass in the known universe to determine what we call the punctum fixum and what we call the punctum mobile? As far as I can tell, most of the local universe revolves around me. That may be a common human misconception, seeing as I'm not a black hole, if we only go by centers of mass. But do we have to? (Also, "densest concentration of mass" would probably be in the bible belt.)
3rocurley8yI think the center of mass thing is a bit of a red herring here. While velocity and position are all relative, rotation is absolute. You can determine if you're spinning without reference to the outside world. For example, imagine a space station you spin for "gravity". You can tell how fast it's spinning without looking outside by measuring how much gravity there is. You can work in earth-stationary coordinates, there will just be some annoying odd terms in your math as a result (it's a non-inertial reference frame).
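The arithmetic behind the space-station example above is just centripetal acceleration, a = omega^2 * r. A minimal sketch, using a hypothetical 100 m station as the worked number:

```python
import math

def spin_rate(radius_m, apparent_gravity):
    """Angular velocity (rad/s) implied by centripetal 'gravity': a = omega^2 * r."""
    return math.sqrt(apparent_gravity / radius_m)

# Hypothetical station: 100 m radius, 1 g of apparent gravity at the rim
omega = spin_rate(100.0, 9.81)   # ~0.31 rad/s
period = 2 * math.pi / omega     # one revolution every ~20 s
```

Measuring the rim "gravity" thus pins down the spin rate with no reference to anything outside, which is the classical-physics version of rocurley's point.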
3SilasBarta8yTechnically, no you can't. Per EY's points on Mach's principle [http://lesswrong.com/lw/qm/machs_principle_antiepiphenomenal_physics/], spinning yourself around (with the resulting apparent movement of stars and feeling of centrifugal stresses) is observationally equivalent to the rest of the universe conspiring to rotate around you oppositely. The c.g. of the earth/sun solar system would likewise lack a privileged position in such a world.
3rocurley8yI agree that it's at least quite plausible (as per your post, it's not proven to follow from GR) that if the universe spun around you, it might be exactly the same as if you were spinning. However, if there's no background at all, then I'm pretty sure the predictions of GR are unambiguous. If there's no preferred rotation, then what do you predict to happen when you spin newton's bucket at different rates relative to each other? EDIT: Also, although now I'm getting a bit out of my league, I believe that even in the massive external rotating shell case, the effect is miniscule. EDIT 2: See this comment [http://lesswrong.com/lw/qm/machs_principle_antiepiphenomenal_physics/kb8].
0SilasBarta8yAre you sure you linked the right comment? That's just someone talking about centripetal vs centrifugal.
3rocurley8yNo, I didn't. It's fixed now, thanks.
1satt8yIs that correct? Spinning implies rotation implies acceleration, which I'd always thought could be detected without external reference points. Without taking a stance on Mach's principle or that specific question of observational equivalence, what about a spinning body in an otherwise empty universe? As an extreme example, my own body could spin only so fast before tearing itself apart. Surely this holds even if I'm floating in an otherwise utterly empty universe?
0SilasBarta8yThis is addressed later in the article, very well IMHO. Let me just give the relevant excerpts:
0satt8yI worry I'm missing something obvious, but that EY quote doesn't seem to address my belief (namely, that detecting accleration doesn't need an external reference point). It just argues there's no absolute origin to use as an external reference point.
0arundelo8ySilas is talking about this [http://lesswrong.com/lw/qm/machs_principle_antiepiphenomenal_physics/]: Edit: You are correct from a classical physics standpoint that if you are in a windowless room on a merry-go-round, you can tell whether the merry-go-round is standing still versus spinning at a constant speed. (For instance, you could shoot a billiard ball and see whether its path is straight or curved.) This contrasts with the analogous situation in a windowless train car, where you cannot tell whether the train is standing still versus moving with a constant velocity.
1SilasBarta8yRight, that (a small portion of it) was what I quoted first [http://lesswrong.com/lw/hdx/justifiable_erroneous_scientific_pessimism/8xz3], one exchange upthread, and satt still held to the intuition that there are rotational stresses in the absence of the universe's background matter. So I went back/up/down[1] a level to the basic question of when you can rule out a certain "absolute" in nature: when the simplest laws stop requiring it. The point I was trying to make (which I should have been more specific on) was that, just as the Galilean observation set sufficed to rule out "special" velocities and leave only relative ones, our observation set now has, as an optimal description, laws that give no privilege to any non-relative motion, including higher derivatives of velocity. [1] whichever preposition would be least offensive
0arundelo8yAh, sorry. Upthread reading fail on my part.
0satt8yAs far as I can tell, what I'm saying holds even for non-spinning accelerating objects, and under quantum physics. According to QFT, a sufficiently sensitive thermometer accelerating through a vacuum detects a higher temperature [https://en.wikipedia.org/wiki/Unruh_effect] than a non-accelerating thermometer would. This appears to be a way for a thermometer to tell whether it's accelerating without having to "look" at distant stars & such.
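The Unruh temperature satt links is T = hbar * a / (2 * pi * c * k_B). Plugging in 1 g (a sketch with standard constants) shows why this is a thought experiment rather than a practical accelerometer:

```python
import math

HBAR = 1.0546e-34   # reduced Planck constant, J s
C = 2.998e8         # speed of light, m/s
K_B = 1.381e-23     # Boltzmann constant, J/K

def unruh_temperature(acceleration):
    """Unruh temperature for a uniformly accelerating observer."""
    return HBAR * acceleration / (2 * math.pi * C * K_B)

# At 1 g the effect is ~4e-20 K, hopelessly below any real thermometer
T = unruh_temperature(9.81)
```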
0nonplussed8yHm, I'm not sure the thermometer can conclude that it's accelerating from seeing the black body radiation. I think it's equivalent to there being an event horizon behind it emitting hawking radiation (this happens when you accelerate at a constant rate). The thermometer can't tell if it's next to a black hole or if it's accelerating. Could be wrong though, but I vaguely remember something along these lines.
0satt8yI don't see anything incorrect in what you say. (Sounds to me like a direct consequence of the equivalence principle, although I'm no GR expert.) But I'm assuming away the possibility of rogue black holes in this hypothetical, since I'm wondering whether a sufficiently sensitive sensor could detect its own acceleration even inside an otherwise empty universe [http://lesswrong.com/lw/hdx/justifiable_erroneous_scientific_pessimism/8y41] (or at least without reference to the rest of the cosmos).
0arundelo8yI think I misunderstood what you and Silas were talking about. (Note though that my train thought experiment was about a train with a constant velocity. The billiard ball technique works to detect acceleration of the train even if no rotation is involved.)
0shminux8yYes, all acceleration is absolute, not relative. You don't need hypothetical esoteric effects to detect it, a usual weighing scale will do. Gravity throws a bit of a quirk in it, of course.
0satt8yI'm simultaneously reassured (that my intuition's correct) & confused (about SilasBarta & Eliezer's remarks, since they read to me like they contradict my intuition). Maybe I should post a comment on the Sequences post [http://lesswrong.com/lw/qm/machs_principle_antiepiphenomenal_physics/] rather than continuing to press the point here, though. [Edit: originally linked the wrong Sequences post, fixed that.]
1gwern8yI thought that parallax argument was applied to the stars, not the Sun?
4Randaly8yYeah, that's what I meant. (No parallax in star observations -> the Earth isn't moving -> the Sun is revolving around the Earth.)
0Luke_A_Somers8yThat's a justifiable error, but I don't see how it's pessimistic.
3CellBioGuy8y"Pessimistic" is a loaded term and I'm not sure if it's all that useful in the context of this discussion in the first place.
1Luke_A_Somers8yIt's crucial to the original point that Eliezer was making, which was differentiating technological pessimism from technological optimism. This isn't technology, and though it makes a difference to the universe as a whole, it wouldn't be better or worse for us either way.

We also know many famous examples of scientists just completely making up their pessimism, for example about the impossibility of human heavier-than-air flight.

This isn't what you asked for, but I might as well enumerate a few of these examples, for everyone's benefit. For the field of AI research:

"You can build a machine to draw [logical] conclusions for you, but I think you can never build a machine that will draw [probabilistic] inferences."

George Pólya (1954), ch. 15 — a few decades before the probabilistic revolution in AI.



[Machines] cannot play chess any more than they can play football.

Technically, he was correct.

0NancyLebovitz8yI like the idea of football (soccer) played by quadrupeds.

Taube did not mean "Machines cannot be made to choose good chess moves" (a claim that has, indeed, been amply falsified). Here's a bit more context, from the linked paper.

[...] there are analog relationships in real chess -- such as the emptiness of a line [...] which cannot be directly handled by any digital machine. These analog relationships can be approximated digitally [...] in order to determine whether a given line is empty [...] such a set of calculations is not identical to the visual recognition that the space between two pieces is empty. A large part of the enjoyment of chess [...] derives from its deployment or topological character, which a machine cannot handle except by elimination. If game is used in the usual sense -- that is, as it was used before the word was redefined by computer enthusiasts with nothing more serious to do -- it is possible to state categorically that machines cannot play games. They cannot play chess any more than they can play football.

Taube's point, if I'm not misunderstanding him grossly, is that part of what it means to play a game of chess is (not merely to choose moves repeatedly until the game is over, but) to have something...

7gwern8yYou accuse lukeprog of being misleading in taking a quote from a mere "librarian", and as we all know, a librarian is a harmless drudge who just shelves books, hence I accuse you of being highly misleading in at least two ways here: 1. in 1960, a librarian is one of the occupations - outside actual computer-based occupations - most likely to have hands-on familiarity with computers & things like Boolean logic, for the obvious reason that being a librarian is often about research where computers are invaluable. A librarian could well have extensive experience, and so it's not much of a mark against him. 2. Mortimer Taube [http://en.wikipedia.org/wiki/Mortimer_Taube] turns out to be the kind of 'librarian' who exemplifies this; the little byline to his letter about "Documentation Incorporated" should have been an indicator that maybe he was more than just a random schoolhouse librarian stamping in kids' books, but because you did not see fit to add any background on what sort of 'librarian' Taube was, I will: So to summarize: he was a trained philosopher and tech startup co-founder who invented new information technology and handled documentation tasks who was familiar with the cybernetics literature and traveled in the same circles as people like Vannevar Bush. And you write ! An upvote for correctly contextualizing what Taube wrote, and a mental downvote for being lazy or deceptive in your final paragraph.
6gjm8yI really can't think of a polite way to say this, so: Bullshit. 1. I wasn't accusing Luke of anything; I was disagreeing with him. Disagreement is not accusation. When I want to make an accusation, I will make an accusation, like this one: You have mischaracterized what I wrote, and made totally false insinuations about my opinions and attitudes, and I have to say I'm pretty shocked to see someone as generally excellent as you behaving in such a way. 2. I do not think, and I did not say, and I had not the slightest intention of implying, that "a librarian is a harmless drudge who just shelves books". Allow me to remind you how Luke's comment begins. The boldface emphasis is mine. Taube was, despite his many excellent qualities, not a scientist as that term is generally understood, and he was, despite his many excellent qualities, not working in "the field of AI research". (Yes, I know the Wikipedia page says he was "a true innovator in the field of science". Reading what it says he did, though, I really can't see that what he did was science. For the avoidance of doubt, and in the probably overoptimistic hope that saying this will stop you pulling the same what-a-snob-this-person-is move as you already did above, I don't think that "not science" is in any way the same sort of thing as "not valuable" or "not important" or "not difficult". What the creators of (say) the Firefox web browser did was important and valuable and difficult, but happens not to be science. What Beethoven did was important and valuable and difficult, but happens not to be science. What Martin Luther King did was important and valuable and difficult, but happens not to be science.) Pointing this out doesn't mean I think there's anything wrong with being a librarian. When I said "a librarian is a fine thing to be", I meant it. (And, for the avoidance of doubt, it is my opinion both when "librarian" means "someone who shelves books in a library"
-4gwern8yYou were claiming he cherrypicked the example; I'll quote again: If that were true, Luke would be seriously cherrypicking and that is not a harmless error but the sort of biased selection and lying which one would rightly take into account in considering flipping the bozo bit on someone and henceforth ignoring anything they said. This isn't a harmless mistake of attribution or minor peccadilloe that might hurt a single clause or subpoint or tangential argument, this is the sort of thing that discredits an entire line of thought. Maybe you didn't mean it as an accusation, but I treat it as one since if it was true it would be very serious; in much the same way maybe someone bringing up the fact that the lead author on a drug study has taken millions of dollars from the drug company doesn't mean anything serious by it, hey, they're just discussing the paper, but I would take it very seriously indeed and maybe even ignore the study entirely. Duly noted, but see above, I don't especially care what you actually think, I care just what you wrote and whether it is a serious issue with Luke's comment. Right. I'm sure you actually meant "I think librarians are fantastic smart people who know everything about everything and have many valid and expert opinions, however it just so happens that chess and AI and cybernetics happens to be one of the few areas where their informed commentary is worthless and ' it doesn't confer the kind of expertise that would make it surprising or even very interesting for Taube to have been wrong here'". If working on key organization schemes and pushing forward the field of information science cannot be construed as 'science' no matter how broadly defined, then I guess we'd better exempt computer science and AI from that moniker too. ಠ_ಠ Actually, that doesn't quite convey my impression of your no-true-Scotsmanning, I'll try that again: ಠ_ಠ ಠ_ಠ ಠ_ಠ A PhD in philosophy is not enough to be called a philosopher? zomgwtfbbq.
3gjm8yThis appears to me to be an instance of a common error: assuming that when someone says something, they intended every inference you find it natural to make from it. It doesn't appear to me, at all, that for Luke to have been wrong in the way I say he was he needs to have been a liar or bozo or whatever else you're trying to suggest I accused him of being. (I'm puzzled, too. We seem to be agreed that Luke's quotation gives a misleading impression about what claim Taube was making, and -- rightly, in my opinion -- you don't appear to have concluded from this that Luke was dishonestly cherrypicking and needs the bozo bit flipped. But I don't understand, at all, why giving a misleading impression about Taube's relevant expertise is a worse thing to "accuse" him of than giving a misleading impression about what Taube was claiming. Either of them means that the quotation from Taube fails to serve the purpose Luke put it there for.) If you don't especially care what I actually think, then what the hell are you doing putting words into my mouth about how librarians are uninteresting low-status unintellectual drudges? (Which, just in case it needs saying again, in no way resemble my actual opinion.) I meant what I said. I did not mean what you said. I also did not mean the particular equally-ridiculous thing you now sarcastically suggest I could have meant. I honestly have no idea what I've done to bring forth all this hostility, but if you want an actual reasoned discussion then I politely suggest that you stop flinging shit at me and then we can have one. Those last five words are yours, not mine. I'm sure you can find definitions according to which Taube's work was "science". I'm also sure you can quickly and easily think of plenty of instances where "no matter how broadly defined" ends up meaning "way too broadly defined for most purposes". (Here's an extreme example: Richard Dawkins is on record as accepting the term "cultural Christian" as applying to him. 
I would
1gwern8yIt's a common error indeed, and one that is justifiable when enough other people draw that error. Yeah Hitler said to kill all the Jews, but he really meant to kill the Jew inside, not real Jews. [http://lesswrong.com/lw/fm/a_parable_on_obsolete_ideologies/] If I may quote your other comment: Indeed. Right, because you just threw that in for no reason... And I even gave several. Feel free to deal with the examples; do you think computer science and AI are not 'science'? I don't see what's the least bit silly about describing him a a "cultural Christian" especially if he accepts the label. He was indeed raised in a Christian culture and implicitly accepts a lot of the background beliefs like belief in guilt and sin (heck, I still think in those terms to some degree and say things like 'goddamn it'); even if we don't go quite as far as Moldbug in diagnosing Dawkins as holding to a puritanical secular Christanity, the influence is ineradicable. There is no view from nowhere. Wow, so not only is he a trained historian who has published & defended his doctorate of original research, you describe him as actually having been in academia post-graduate school, and you still won't describe him as a historian? Would I describe him as a historian? Heck yes. Because if I won't even grant that description to Bostridge, I don't know who the heck I would grant it to. You know, describing someone as a historian is not committing to describing him as a 'great historian' or a 'ground-breaking historian' or a 'famous historian'. You don't need to be Marvin Minsky to be called 'an AI researcher' and you don't need to be a pre-eminent figure to be described as a worker in a field. Even a bad programmer is still a 'programmer'; someone who has moved up into management is still a programmer even if they haven't written a large program in years. From Wikipedia: "After being awarded a doctorate (Dr. rer. nat.) 
for her thesis on quantum chemistry,[17] she worked as a researcher and pub
0gjm8yOn flipping the bozo bit Before you bother to read any of what follows, I would be grateful if you would answer the following question: Have you, in fact, bozo-bitted me? Because I've been proceeding on the assumption that it is in principle possible for us to have a reasoned discussion, but that's looking less and less true, and if I'm wasting my time here then I'd prefer to stop. On librarians and librarianship Unless I misunderstand you badly, you are arguing either that I have been lying constantly about this or that I am appallingly unaware of my own opinions and attitudes and you know them better than I do. And, if I understand this remark correctly ... ... your basis for this is that you can't think of any reason why I might have mentioned that Taube was a librarian other than that I have "contempt for librarians" and that I wanted to put Taube down by calling him names. So, allow me to propose a very simple alternative explanation (which is, in fact, the correct explanation, so far as I can tell by introspection): I said it because, having listed a bunch of things that weren't Taube's profession, it seemed appropriate to say what his profession actually was. On the basis of this thread so far, I'm guessing that you still don't believe me; so let me ask: Is there, in fact, anything I could possibly say or do that would convince you that I do not hold librarians in contempt? Because it looks to me as if there isn't, and it seems rather odd that describing someone who was in fact a librarian as a librarian could be such strong evidence of contempt for librarians as to outweigh all future testimony from the person in question. On professions and the like There are at least three things you can mean by saying someone is, e.g., "a biologist". (1) That they know something about biology and think about it from time to time. (2) That doing biology is their job, or at least that they do it as much and as well as you could reasonably expect if it were. (3) That
-3gwern8yI haven't yet, but if you're going to persist in claiming that people with PhDs in philosophy are not even allowed the description 'philosopher', it's tempting because why should I bother with people who abuse language and redefine words so abysmally? Which was pursuant to your belief that a mere librarian could have nothing to say about the issue, could not be any sort of authority or indicator of the times, and so does not belong in the list lukeprog presented. Yes, I've said all this before. The obvious reading of your concluding paragraph was obvious, before you started trying to defend it. Indeed. And I think it's absurd to restrict usage of descriptions to the rarefied and elevated #2s (how many biologists get tenure?) and even more absurd to restrict it to the even more rarefied and elevated #3s. (Merkel & Bostridge were both #2s at some point, but seem likely to never be #3s in those fields; whether we could consider Soros a #3 - because he claims his philosophical approach of reflexivity guides his philanthropy & investing and so his inarguably historic roles there are part and parcel of philosophy - is an interesting question, but getting a bit far afield.) It gives him a great deal of expertise in organizing and searching data mechanically, which is relevant to AI; and inasmuch as chess-playing falls under AI... No, he didn't write his thesis on chess-playing, but here again I would say it's absurd to insist on such doctrinaire rigidity that no one can have respectable expertise without being the expert on a topic. (I would note in passing that Fermi's laurea thesis was not on fission, but X-ray imaging; is that close enough? Well, probably, but then why is indexing and search so out of bounds? Search at Google involves a great deal of AI work, so clearly there is a real connection at some point in time...) 
I'm afraid I have shocking news for you, many respected philosophers in AI may not have written their theses directly on AI: Dennett's dissertat
1gjm8y(I'm going to be brief, because I'm losing hope that you're going to pay any attention to anything I say. I haven't the least intention of bozo-bitting you globally because you have been consistently extremely impressive elsewhere, but in this particular discussion it seems that at least one of us -- and I'm perfectly willing to consider that it may be me -- is being sufficiently irrational that we're doomed to produce more heat than light. More specifically, what it looks like to me is that you're treating me as an enemy combatant who needs to be defeated, rather than a person who disagrees with you who needs to be either taught or learned from or both.) [EDITED to add: well, it turns out I wasn't so brief. But I tried.] What's annoying here is not so much your evident belief that I am lying through my teeth about my own opinion about librarians (why on earth would I even do that?) as your refusal even to acknowledge that your fantasy about that opinion is anything other than a mutually-agreed truth. I'm sorry, was that meant to be an answer to the question I asked? I wasn't asking the question just to make a rhetorical point. Your behaviour in this thread suggests to me that as soon as you read the last sentence of what I wrote you leapt to a conclusion, got angry about it, and came out fighting, and that ever since you've refused even to consider the possibility that you leapt to the wrong conclusion. It's just as well that I'm not insisting on any such thing. So far as I know, there were other people around who were about as expert on nuclear physics as Fermi. I am not an expert on the history, so maybe that's wrong, but I haven't been assuming it's wrong and when I say "comparable to Fermi's expertise in nuclear fission" I don't mean "expertise as of the world's greatest expert", I mean "expertise as of someone very expert in the field". Because it seems to me that that's the level of expertise that's actually relevant to Eliezer's original point and his
0Luke_A_Somers8yDoing a perfect post on this topic would be hitting a dead horse right between the eyes at a thousand paces.
0[anonymous]8yDoing X for a living is a lower bar than being tenured.
1satt8yPeccadillo [http://books.google.com/ngrams/graph?content=peccadillo%2Cpeccadilloe&year_start=1700&year_end=2008&corpus=15&smoothing=0] . (Sorry; couldn't resist the temptation to flag that accidental autology [http://www.segerman.org/autological.html] for posterity.)
2wedrifid8yThank you for your research. I was misled by the grandparent.
1lukeprog8y"Eliezer" should be "lukeprog".
3gwern8yHah, whups. And so it goes - you correct Eliezer's lack of examples, gjm corrects your description of Taube, I correct gjm's description of Taube, and you correct my description of gjm's description...
0yli8yWould a chess program that has a table of all the lines on the board that keeps track of whether they are empty or not and that uses that table as part of its move choosing algorithm qualify? If not, I think we might be into qualia territory when it comes to making sense of how exactly a human is recognizing the emptiness of a line and that program isn't.
1gjm8yYup. I strongly suspect that Taube was in fact "into qualia territory", or something along those lines, when he wrote that.

Off the top of my head, how about the Landau pole? A famous and usually-right genius calculated that the gauge theories of quantum fields are a dead end, setting Soviet and, to some degree, Western physics back a few years, if I recall correctly. His calculation was not wrong; he simply missed the alternate possibilities.

EDIT: hmm, I'm having trouble locating any links discussing the negative effects of the Landau pole discovery on the QED research.

Here is another famous example: Chandrasekhar's limit. Eddington rejected the idea of black holes ("I think there should be a law of Nature to prevent a star from behaving in this absurd way!"). Says Wikipedia:

Chandra's discovery might well have transformed and accelerated developments in both physics and astrophysics in the 1930s. Instead, Eddington's heavy-handed intervention lent weighty support to the conservative community of astrophysicists, who steadfastly refused even to consider the idea that stars might collapse to nothing.

I guess this ... (read more)

2betterthanwell8yEddington erroneously dismissed M_(white dwarf) > M_limit ⇒ "a black hole" , but didn't he correctly anticipate new physics? Do event horizons (Finkelstein, 1958) not prevent nature from behaving in "that absurd way", so far as we can ever observe? * http://en.wikipedia.org/wiki/Cosmic_censorship_hypothesis [http://en.wikipedia.org/wiki/Cosmic_censorship_hypothesis]
0shminux7yIt's hard to know what Eddington meant by "absurd way". Presumably he meant that this hypothetical law would prevent matter from collapsing into nothing. Possibly if Chandrasekhar had figured out the strange properties of the event horizon back in 1935 and had emphasized that whatever weird stuff is happening beyond the final Chandrasekhar limit is hidden from view, Eddington would not have reacted as harshly. But that took another 20-30 years, even though the relevant calculations require at most 3rd year college math. Besides, Chandrasekhar's strength was in mathematics, not physics, and he could not compete with Eddington in physics intuition (which happened to be quite wrong in this particular case).

The general success rate of breakthroughs is pretty damn low, and so I'd argue that most examples of "invalid" pessimism (excluding some stupid ones coming from scientists you never heard of before coming across a quote, and excluding things like PR campaigning by Edison), viewed in the context of almost all breakthroughs failing for some reason you can't anticipate, are not irrational but simply reflect absence of strong evidence in favour of success (and absence of strong evidence against unknown obstacles), at the time of assessment (and corre... (read more)

0ESRogs8yI'm having trouble understanding your second paragraph. This is probably just due to missing background knowledge on my part, but would you mind explaining what you mean by: and Thanks!
1private_messaging8yThere was a really silly argument about Fermi's 10% estimate, scattered over several threads (which the OP talks about). Yudkowsky had been arguing that Fermi's estimate was too low. He came up with the idea that surely there would have been one element (out of many) that would have worked, so the probability should have been higher. That was wrong because (a) it's not as if some elements' fissions released neutrons and some didn't, and (b) there was only one isotope to start from (U-235), not many.
1ESRogs8yDo all elements' fissions release neutrons?
2private_messaging8yYes. The issue is that the argument "look at the periodic table, it's so big, there would be at least one" requires assuming that whether fission releases neutrons is independent across nuclei.
0ESRogs8yGotcha, thanks.
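The point private_messaging makes above can be sketched numerically: "the periodic table is big, so at least one element should work" only helps if the candidates succeed or fail independently. The probabilities and element count below are illustrative placeholders, not historical estimates:

```python
# Sketch (not from the thread): independent vs. fully correlated candidates.
# All numbers here are assumptions chosen for illustration.

def p_at_least_one(p_each: float, n: int) -> float:
    """P(at least one success) among n INDEPENDENT candidates."""
    return 1 - (1 - p_each) ** n

p_each = 0.05   # hypothetical chance any single element sustains a chain reaction
n = 90          # roughly the number of elements known at the time

independent = p_at_least_one(p_each, n)  # near-certainty under independence
correlated = p_each                      # if all outcomes stem from the same
                                         # underlying physics, extra elements
                                         # add essentially nothing

print(f"independent: {independent:.3f}, fully correlated: {correlated:.3f}")
```

Under the fully correlated reading, which is closer to the actual physics (fission behaviour is governed by the same nuclear mechanics everywhere), surveying more of the periodic table buys almost no additional probability.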

I'm not sure if this is justifiable or just an old-fashioned blunder...

On the subject of stars, all investigations which are not ultimately reducible to simple visual observations are…necessarily denied to us… We shall never be able by any means to study their chemical composition.

-- Auguste Comte, 1835

I'm leaning towards "blunder" myself...

Yeah, blunder. Wikipedia says:

In the 1820s both John Herschel and William H. F. Talbot made systematic observations of salts using flame spectroscopy. In 1835, Charles Wheatstone reported that different metals could be easily distinguished by the different bright lines in the emission spectra of their sparks, thereby introducing an alternative mechanism to flame spectroscopy.

4wedrifid8yWell, the first half seems approximately correct. The second sentence should have begun with "And by clever application of this means we shall...".
3[anonymous]8yEven if you interpret “visual” as ‘mediated by photons’, there's such a thing as neutrino astronomy.
4sketerpot8yIt wasn't until the 1850s that Ångström [http://en.wikipedia.org/wiki/Anders_Jonas_%C3%85ngstr%C3%B6m] discovered that elements both emit and absorb light at characteristic wavelengths, which is what spectroscopic analysis of stars is based on, so I'm leaning toward justifiable.

it has been plausibly argued to me that all the roads to nuclear weapons, including plutonium production from U-238, may have bottlenecked through the presence of significant amounts of Earthly U235

This has interesting repercussions for Fermi's paradox.

5JoshuaZ8yYes, particularly in the context that you and I discussed earlier that intelligent life arising earlier might have had an easier time wiping itself out [http://lesswrong.com/lw/8ad/open_thread_november_2011/55in]. Although the consensus there seemed to be that it wouldn't be a large enough difference to matter for serious filtration issues.

I posted the following in a quotes page a few months back. I don't know how justifiable these were, and these are only questionably pessimism, but there may be some interesting examples in this. In particular, my light knowledge of the subject suggests that there really were extremely compelling reasons to disregard Feynman's formulation of QED for many years after it was first introduced.

It is interesting to note that Bohr was an outspoken critic of Einstein's light quantum (prior to 1924), that he mercilessly denounced Schrodinger's equation, discourag

... (read more)

Here's an example of the 'opposite' - a case of unjustifiable correct optimism:

Columbus knew the Earth was round but should also have known the radius of the Earth and the size of Eurasia well enough to know that the westward voyage to Asia was simply impossible with the ships and supplies he went with. It seems to have turned out OK for him, though.

This is probably not a very useful example and I wouldn't be surprised to see that there were plenty more of these examples.

Kuhn's Structure of Scientific Revolutions is all about how an old scientific approach is often more right than the new school -- fits the data better, at least in the areas widely acknowledged to be central. Only later does the new approach become refined enough to fit the data better.

1Bruno_Coelho8yTo him (Kuhn), it is not evidence that maintains the old paradigm's status quo, but persuasion. Old fellows make remarks about the virtues of their theory. New folks in academia have to convince a good number of people to make the new theory relevant.
1JoshuaFox8yYes, "Science advances one funeral at a time [http://en.wikiquote.org/wiki/Max_Planck]", but this [http://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions#The_Copernican_Revolution] , from Wikipedia, is a pretty good summary of a typical "scientific revolution": "...Copernicus' model needed more cycles and epicycles than existed in the then-current Ptolemaic model, and due to a lack of accuracy in calculations, Copernicus's model did not appear to provide more accurate predictions than the Ptolemy model. Copernicus' contemporaries rejected his cosmology, and Kuhn asserts that they were quite right to do so: Copernicus' cosmology lacked credibility."

Thomas Malthus' view that in the long run we will always be stuck in (what we now call) the Malthusian trap. He would have been right if not for the sustained growth given to us by the industrial revolution.

Not clear his view is erroneous given suitable values for "long run".

0gwern8yHow so? Last I checked, human populations could still pop out children if they wanted to faster than the average real global growth rate since the IR of ~2%.
3James_Miller8yWhat's relevant to whether we are in a Malthusian trap is the actual birth rate, not what the birth rate would be if people wanted to have far more children.
5gwern8yI'll be more explicit then: the 'sustained growth' is almost irrelevant since per the usual Malthusian mechanisms it is quickly eliminated. What made Malthus wrong, what he was pessimistic about, was whether people would exercise "moral restraint" - in other words, he didn't think the demographic transition would happen. It did, and that's why we're wealthy.
2SilasBarta8yBut how do you know it's the "moral restraint" that averted the Malthusian catastrophe, rather than the innovations (by the additional humans) that amplified the effective carrying capacity of available resources? In fact, the moral restraint could be keeping us closer to the catastrophe than if we had been producing more humans.
1gwern8yBecause population growth can outpace innovation growth. This is not a hard concept.
0SilasBarta8yI know. But your post seemed to be taking the position in favor of population growth (change) as the relevant factor rather than innovation. I was asking why you (seemed to have) thought that.
3gwern8yPopulation growth and innovation are two sides of a scissor: innovation drives potential per capita up, population growth drives it down. But the blade of population growth is far bigger than the blade of innovation growth, because everyone can pump out children and few can pump out innovation. Hence, innovation can be seen as necessary - but it is not sufficient, in the absence of changes to reproductive patterns.
1SilasBarta8yOkay, that's where I disagree: Each additional person is also another coin toss (albeit heavily stacked against us) in the search for innovators. The question then is whether the possible innovations, weighted by probability of a new person being an innovator (and to what extent) favors more or fewer people. There's no reason why one effect is necessarily greater than the other and hence no reason for the presumption of one blade being larger.
1gwern8yThere is no a priori reason, of course. We can imagine a world in which brains were highly efficient and people looked more like elephants, in which one could revolutionize physics every year or so but it takes a decade to push out a calf. Yet, the world we actually live in doesn't look like that. A woman can (and historically, many have) spend her life in the kitchen making no such technological contributions but having 10 kids. (In fact, one of my great-grandmothers did just that.) It was not China or India which launched the Scientific and Industrial Revolutions.
0SilasBarta8yThe ability to produce lots of children does not at all work against the ability of innovators and innovator probability to overcome their resource-extraction load. In order for your strategy to actually work against the potential innovation, you would have to also suppress the intelligence (probability) of your children to the point where the innovation blade is sufficiently small. And you would have to do it without that action itself causing the die-off, and while ensuring they can continue to execute the strategy on the next generation. And keep in mind, you're working against the upper tail of the intelligence bell curve, not the mode. Innovation in this context needn't be revolution-size. China and India (and the Islamic Empire) did innovate faster than the West, and averted many Malthusian overtakings along the way (probably reaching 800 years ahead at their zenith). Malthus would have known about this at the time.
0gwern8yI'm not following your terms here. Obviously the ability to produce lots of children does in fact sop up all the additional production, because that's why per capita incomes on net essentially do not change over thousands of years and instead populations may get bigger. So you can't mean that, but I don't know what you mean. They innovated faster at some points, arguably. And the innovation such as in farming techniques helped support a higher population - and a poorer population. Malthus would have known this about China, did, and used China as an example of a number of things, for example, the consequences of a subsistence wage which is close to starvation http://en.wikisource.org/wiki/An_Essay_on_the_Principle_of_Population/Chapter_VII [http://en.wikisource.org/wiki/An_Essay_on_the_Principle_of_Population/Chapter_VII] :
0randallsquared8yThat's not even required, though. What we're looking for (blade-size-wise) is whether a million additional people produce enough innovation to support more than a million additional people, and even if innovators are one in a thousand, it's not clear which way that swings in general.
0gwern8ySure, it's just an example which does not seem to be impossible but where the blade of innovation is clearly bigger than the blade of population growth. But the basic empirical point remains the same: the world does not look like one where population growth drives innovation in a virtuous spiral or anything remotely close to that*. * except, per Miller's final reply, in the very wealthiest countries post-demographic-transition where reproduction is sub-replacement and growth maybe even net negative like Japan and South Korea are approaching, then in these exceptional countries some more population growth may maximize innovation growth and increase rather than decrease per capita income.
1James_Miller8yI can't prove this, but I believe that in the United States and Western Europe we would still be rich (in the sense that calorie deprivation wouldn't pose a health risk to the vast majority of the population) if the birth rate had stayed the same since Malthus's time.
0gwern8yThat makes no sense to argue: Malthus's time was part of the demographic transition. Of course I would agree that if the demographic transition continued post-Malthus - as it did - we would see higher per capita (as we did). But look up the extremely high birth rates of some times and places (you can borrow some figures from http://www.marathon.uwc.edu/geography/demotrans/demtran.htm [http://www.marathon.uwc.edu/geography/demotrans/demtran.htm] ), apply modern United States & Western Europe infant and child mortality rates, and tell me whether the population growth rate is merely much higher than the real economic growth rates of ~2% or extraordinarily higher. You may find it educational.
2James_Miller8yBut I believe that from the point of view of maximizing the per person wealth of the United States and Western Europe the population growth rate has been much, much too low since the industrial revolution. (I admittedly have no citations to back this up.)
2gwern8yMaybe. That's not the same thing as what you said initially, though.
1private_messaging8yWe'll just evolve for restraint not to work any more.
2[anonymous]8y(Was there a SMBC comic or something about men evolving a condom-breaking mechanism in their penis?)
7private_messaging8yWe're rapidly evolving condom-not-putting-on mechanism in the brain.
2gwern8yYes, that's the question: is the demographic transition temporary? I've brought it up before: http://lesswrong.com/lw/5dl/is_kiryas_joel_an_unhappy_place/ [http://lesswrong.com/lw/5dl/is_kiryas_joel_an_unhappy_place/]
0Error8yI was always under the impression that what thwarted his hypothesis was the rise of effective and widespread birth control. I remember reading one of his works and noting that it was operating on the assumption that, to reduce birthrate to sustainable levels, sex would have to be reduced, and that was unlikely. It is unlikely, but it's also mostly decoupled from childbirth now, at least in the developed world. Have I misinterpreted something here?
2Eugine_Nier8yI believe he considered the possibility of birth control, referring to it as "immorality".
2[anonymous]8y"Watch out for that cliff!" "It looks pretty far off, and besides, we're turning left soon anyway." "But we could keep accelerating!"
1gwern8yYour reply seems completely irrelevant to the Malthusian point that population growth can always exceed total factor productivity growth, and so it is population growth - or lack of growth - which dominates and determines per capita.
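The "scissor" dynamic gwern describes in this thread reduces to simple arithmetic: per capita income is output divided by population, so whichever compounds faster wins. A minimal sketch, with growth rates assumed purely for illustration (they are not figures from the discussion):

```python
# Minimal sketch of the Malthusian scissor: per capita income starts at 1.0
# and compounds by (productivity growth) / (population growth) each year.
# The rates below are illustrative assumptions.

def per_capita_after(years: int, tfp_growth: float, pop_growth: float) -> float:
    """Per capita income after `years`, starting from 1.0."""
    income = 1.0
    for _ in range(years):
        income *= (1 + tfp_growth) / (1 + pop_growth)
    return income

# ~2% productivity growth vs. a hypothetical pre-transition ~3% population growth:
print(per_capita_after(100, 0.02, 0.03))   # shrinks well below 1.0

# Same productivity growth after a demographic transition to ~0.5% population growth:
print(per_capita_after(100, 0.02, 0.005))  # compounds well above 1.0
```

This is why, on the Malthusian view, innovation is necessary but not sufficient: a one-percentage-point gap in either direction, compounded over a century, dominates the outcome.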

This blog post claims that only a few years before the Wright brothers' success, the consensus was that flying machines would necessarily have to be less dense than air (like hot air balloons).

it has been plausibly argued to me that all the roads to nuclear weapons, including plutonium production from U-238, may have bottlenecked through the presence of significant amounts of Earthly U235 (apparently even the giant heap of unrefined uranium bricks in Chicago Pile 1 was, functionally, empty space with a scattering of U235 dust).

"All" is such a strong word unless supplemented with qualifiers. I question the plausibility of the arguments supporting that absolute. The route "wait for an extra century or two of particle physics research and spend a few trillion producing the initial seed stock" would still be available.

8Luke_A_Somers8yIn context, Fermi was considering something rather more short-term: WW2. That said, he may not have scoped his statement to such a small scale.
1wedrifid8yOne of many suitable and sufficient qualifiers that could make the arguments plausible.