I really liked Robin's point that mainstream scientists are usually right, while contrarians are usually wrong. We don't need to get into the details of the dispute - and usually we couldn't make an informed judgment without spending far too much time anyway - just figuring out who's "mainstream" tells us who's right with high probability. It's a type of thinking related to reference class forecasting - find a reference class of similar situations with known outcomes, and we get a pretty decent probability distribution over possible outcomes.
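As a toy sketch of the idea (the track record here is invented for illustration), reference class forecasting just treats the known outcomes of the class as an empirical probability distribution:

```python
from collections import Counter

def reference_class_forecast(outcomes):
    """Turn a list of known outcomes for a reference class into
    an empirical probability distribution over outcomes."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return {outcome: n / total for outcome, n in counts.items()}

# Invented track record for disputes between mainstream science
# and contrarians (20 past disputes, 18 won by the mainstream):
history = ["mainstream right"] * 18 + ["contrarian right"] * 2
print(reference_class_forecast(history))
# {'mainstream right': 0.9, 'contrarian right': 0.1}
```

The whole method is only as good as the choice of which past cases count as "similar", which is exactly the problem raised below.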

Unfortunately, deciding what the proper reference class is isn't straightforward, and can itself be a point of contention. If you put climate change scientists in the reference class of "mainstream science", that gives great credence to their findings. People who doubt them can be freely disbelieved, and any of their arguments can be dismissed by pointing to the low success rate of contrarianism against mainstream science.

But if you put climate change scientists in the reference class of "highly politicized science", then the chance of them being completely wrong becomes orders of magnitude higher. We have plenty of examples where such science was completely wrong and persisted in being wrong in spite of overwhelming evidence, as with race and IQ, nuclear winter, and pretty much everything in macroeconomics. In such cases, the chances of the mainstream being right and of the contrarians being right are not too dissimilar.

Or, if the reference class is "science-y Doomsday predictors", then they're almost certainly completely wrong. See Paul Ehrlich (overpopulation) and Matt Simmons (peak oil) for some examples, both treated extremely seriously by the mainstream media at the time. So far, in spite of countless cases of science predicting doom and gloom, not a single one has turned out to be true - usually not just barely enough to be discounted by anthropic principle, but spectacularly so. Cornucopians were virtually always right.

It's also possible to use multiple reference classes - to view the impact on climate according to the "highly politicized science" reference class, and the impact on human well-being according to the "science-y Doomsday predictors" reference class - which is more or less how I think about it.

I'm sure if you thought hard enough, you could come up with other plausible reference classes, each leading to any conclusion you desire. I don't see how any one of these reference class arguments is obviously more valid than the others, nor do I see any clear criteria for choosing the right reference class. It seems as subjective as Bayesian priors, except that we know in advance we won't have the evidence necessary for our views to converge.

The problem can be avoided only if you agree on reference classes in advance, as you reasonably can with the original application of forecasting costs of public projects. Does it kill reference class forecasting as a general technique, or is there a way to save it?



'Tis remarkable how many disputes between would-be rationalists end in a game of reference class tennis. I suspect this is because our beliefs are partially driven by "intuition" (i.e. subcognitive black boxes giving us advice) (not that there's anything wrong with that), and when it comes time to try and share our intuition with other minds, we try to point to cases that "look similar", or the examples whereby our brain learned to pattern-recognize and judge "that sort" of case.

My own cached rule for such cases is to try and look inside the thing itself, rather than comparing it to other things - to drop into causal analysis, rather than trying to hit the ball back into your own preferred concept boundary of similar things. Focus on the object level, rather than the meta; and try to argue less by similarity, for the universe itself is not driven by Similarity and Contagion, after all.

Sometimes "looking at the thing itself" is too costly or too difficult. How can the proverbial "bright sixteen-year-old" sitting in a high school classroom figure out the truth about, say, the number of protons in an atom of gold, without having to accept the authority of his textbooks and instructors? If there were a bunch of well-funded nutcases dedicated to arguing that gold atoms have seventy-eight protons instead of seventy-nine, the only way you can really judge who's correct is to judge the relative credibility of the people presenting the evidence. After all, one side's evidence could be completely fraudulent and you'd have no way of knowing that. Far too often, reference classes and meta-level discussions are all we have.
Eliezer Yudkowsky:
Then let us try to figure out whose authority is to be trusted about experimental results and work from there. Cases where you can reduce it to a direct conflict about easily observable facts, and then work from there, are much more likely to have one dramatically more trustworthy party.
How should we unpack black boxes we don't have yet? For example, a non-neural, language-capable, self-maintaining, goal-oriented system*. We have a surfeit of potential systems (with different capabilities of self-inspection and self-modification) with no way to test whether they will fall into the above category, or how big the category actually is. *I'm trying to unpack AGI here somewhat
I estimate that even fairly bad reference class / outside view analysis is still far more reliable than the best inside view that can be realistically expected. People are just spectacularly bad at inside view analysis, and reference class analysis puts hard boundaries within which truth is almost always found.
Eliezer Yudkowsky:
http://lesswrong.com/lw/vz/the_weak_inside_view/
If I may attempt to summarize the link: Eliezer maintains that, while the quantitative inside view is likely to fail in cases where the underlying causes are not understood or planning biases are likely to be in effect, the outside view cannot be expected to work when the underlying causes undergo sufficiently severe alterations. Rather, he proposes what he calls the "weak inside view" - an analysis of underlying causes noting the most extreme of changes and stating qualitatively their consequences.
Is there any evidence that in cases where neither the "outside view" nor the "strong inside view" can be applied, the "weak inside view" is at least considerably better than pure chance? I have strong doubts about it.
Yes, it would be good to have a clearer data set of topics and dates, the views suggested by different styles of analysis, and what we think now about who was right. I'm pretty skeptical about this weak inside view claim, but will defer to more systematic data. Of course, that is my suggesting we take an outside view to evaluate this claim about which view is more reliable.

Or, if the reference class is "science-y Doomsday predictors", then they're almost certainly completely wrong. See Paul Ehrlich (overpopulation) and Matt Simmons (peak oil) for some examples, both treated extremely seriously by the mainstream media at the time.

I think you are conflating the mainstream media with mainstream science. Most people do - unless they're the actual scientists having their claims deformed, misrepresented, and sensationalised by the media.

This says it all.

When has there been a consensus in the established scientific literature about either the certainty of catastrophic overpopulation, or an imminent turnaround in oil production?

We have plenty of examples where such science was completely wrong and persisted in being wrong in spite of overwhelming evidence, as with race and IQ, nuclear winter, and pretty much everything in macroeconomics.

Hm. Apparently you also have non-conventional definitions of "overwhelming" and "completely wrong".

A great project: collect a history of topics and code each of them for these various features, including what we think of who was right today. Then do a full stat analysis to see which of these proposed heuristics is actually supported by this data.

I think taw's problem is just a special case of the more general and simple problem of what kind of similarity is required for induction.

And it's unwise to use political issues as case studies of unsolved philosophical problems.

I think you're completely right, this is a special case of the problem of induction. The Stanford Encyclopedia of Philosophy has a wonderfully exhaustive article about it that also discusses subjective Bayesianism at length. Among other things, that article offers a simple recommendation for taw's original problem: intersect your proposed reference classes to get a smaller and more relevant reference class.

Agreed with the first part and with the heuristic, but taw is using the possibility of politicization as an element of reference class membership. Honestly, I wouldn't even consider global warming to be a "political issue". The science seems completely trivial to understand at the object level.
I'd be shocked if it is.
The logic used and the predictions made are trivial. But the underlying facts and observations have been (politically, I presume) called into question. For instance, in the recent CRU possibly-scandal, see Eric Raymond saying CRU published fake data [http://esr.ibiblio.org/?p=1447] and Willis Eschenbach describing how the CRU illegally denied FOIA requests for their weather data and even threatened to destroy it to prevent others from trying to replicate their studies [http://omniclimate.wordpress.com/2009/11/24/willis-vs-the-cru-a-history-of-foi-evasion/]. Because this issue is so heavily politicized, I for one have no clear idea of the real extent of GW danger.
Not really - the induction problems philosophers talk about are pure theory, and totally irrelevant to daily life. Everybody knows blue/green are correct categories, while grue/bleen are not. Figuring out the proper reference class, on the other hand, is a serious problem of applied rationality.
Philosophers invented grue/bleen in order to be obviously incorrect categories, yet difficult to formally separate from the intuitively correct ones. There are of course less obvious cases, but the elucidation of the problem required them to come up with a particularly clear example.
I don't know about "bleen", but "grue [http://en.wikipedia.org/wiki/Grue_%28monster%29]" is perfectly sensible as the category of "things that may eat you if you venture around Zork without a light".

Your examples of "highly politicised science" are very one-sided (consider autism-vaccines, GM crops, stem cell research, water fluoridation, evolution), which I suppose reinforces your point.

In your set-up, some reference classes correspond to systematic biases, and some to increased/decreased variance: they don't all change your probability distribution in the same way.

For example: it takes extreme levels of arrogance to conclude, in ignorance, that most scientists are incorrect on the area of their speciality. By this argument, you should pla...

This is not what they were about. What they predicted was massive suffering in each case. Overpopulation doomsdayers predicted food and resource shortages, wars for land and water, and such; peak oilers predicted total collapse of the economy, death of over half of humanity, and such. Other than for their supposedly massive consequences, peak oil is about as interesting as peak typewriters - that is, not at all, unless you work in the oil/typewriter industry. By the way, the predictions about the underlying processes were also false in all three cases you mention - population growth has been sublinear for quite some time, peak oil reliably fails to take place on any of the predicted dates, and total fish production is increasing via aquaculture - or true only in the most restricted way, far more restricted than what was claimed - population did increase at all, old oil fields are depleting, wild fish production is not increasing. But this is irrelevant - the core of doomsdayer predictions is the doom part, which almost invariably doesn't happen.
That's exactly my position. Doomsday predictions are combinations of reasonable science and unwarranted conclusions. They're like the mirror image of homeopathy, which has wild craziness leading to a partially correct conclusion: "take this pill, and you'll feel better".

You encounter a bear. On the one hand, it's in the land mammals reference class, most of whom are not dangerous. On the other hand, it's in the carnivorous predators reference class, most of whom are.

Is the bear dangerous? I'm sure if you thought hard enough, you could come up with other plausible reference classes, each leading to any conclusion you desire.

I would estimate that the vast majority of carnivorous predators are tiny insects and such, so that class is even less dangerous than the land mammals class. ;-) On the other hand, the class of "animals bigger than me" tends to be quite dangerous.
"Animals bigger than me" are dangerous once you've encountered them up close, but normally there's no reason to do so unless you're hunting them. The total life risk of "being hurt by a carnivore" is much greater than the total life risk of "being hurt by an animal bigger than me". This is true both today and in prehistoric environments: most of the predators who tend to tangle with humans aren't much bigger than us - snakes and leopards, mostly. OTOH, predators who are much bigger than humans don't routinely hunt humans (tigers, lions). (Although tigers may have done so long ago??? I don't really know.)
Hippopotamuses are the most dangerous mammals in Africa, and they are much bigger than humans. * http://answers.yahoo.com/question/index?qid=20070226210159AAuE506 * http://www.on-the-matrix.com/africa/hippo.asp * http://www.straightdope.com/columns/read/1862/are-hippos-the-most-dangerous-animal * http://en.wikipedia.org/wiki/Hippopotamus#Aggression Note that its closest competitor is the Cape Buffalo. Also bigger than humans.
To downvoters: It is customary to explain unobvious downvotes. I've just demonstrated with multiple references that both of the top human killers on the second most populated continent in the world are larger than humans, and they are herbivores to boot. This would seem, to me anyway, to argue pretty decisively against DanArmak's theory that carnivores are more dangerous than large animals.
I didn't downvote you, but the example didn't seem to contradict the claim, which was: Being hurt =/= being killed. Even in Africa, I'm sure people get scratched by housecats or bitten by dogs sometimes, and I don't think so many people are attacked (fatally or no) by hippos that hippos are more likely to hurt any given person than small carnivores. (Heck, if we count mosquitoes...) DanArmak's point seems to be that large animals are mostly avoidable if you want to avoid them. Small carnivores are not necessarily as easy to avoid.
Literally read, 'hurt' doesn't mean being killed. But look at the examples Dan was using: tigers, snakes, leopards, lions. Is it unreasonable to infer that he was really talking about mortal dangers & hurts?
Good point. I couldn't find any statistics on human deaths or injuries by animal type in a minute's search, and I don't have time to spare right now. But I agree that my hypothesis needs to be fact checked. (Just two animal examples, hippos and buffalos, in a single continent in a couple of decades don't make a theory. And all four of your links don't refer to any actual data, they just state that hippos are the most dangerous.)

I am confused by your inclusion of nuclear winter in the list of failed scientific predictions.

As far as I understand the history of this claim, back during the Cold War it was common to predict that even a small-scale nuclear exchange would send the world back into a long-term Ice Age, due to widespread urban fires. Similar predictions were even made about the Kuwait oil well fires in 1991 (a good test of the model, as the effect was not supposed to be related to nuclear explosions as such, just to the resulting fires). It turns out from more recent models, and from actual data from the Gulf War, that the actual magnitude of cooling is orders of magnitude smaller than what was predicted, and there was never any genuine research that really suggested the levels that were widely claimed; the most straightforward explanation is that people opposed to nuclear weapons wanted to exaggerate their effects to scare people off. It might have been a calculated lie, or something they genuinely wanted to believe - the point is that politicized science is not very accurate even if you agree with its political goals.
Wikipedia and some searching didn't show these models. Do you have citations?
I see. Thank you.

You can always look at the argument at an object level carefully enough to figure out which components fit into each category. That's not too difficult.

Also, the cornucopians haven't been right either IMHO, for the last 40 years. Rather, the last 40 years has been the age of "things will stay just the same as they are today" being a much better predictor than cornucopian or doomsday predictions, at least for people unlike us for whom the internet doesn't count as much of a cornucopia.

"things will stay just the same as they are today" would be a horrible, horrible predictor for the last 40 years. Check gapminder [http://graphs.gapminder.org/world/] from 1969 to 2009 to see how drastic and cornucopian the changes were for most people at the poorest end. At the rich end, the Internet, mobiles, and other wonderful technology count very much as cornucopia.

Isn't Robin Hanson a contrarian economist? Or does he not include economists in that?

Can't you just put the situation in all the reference classes where you think it fits, and multiply your prior by the Bayes factor for each? Then, of course, you would have to discount for all of the correlation between the reference classes. That is, if there were two reference classes, you couldn't use the full factors if membership in one were already evidence of membership in the other.
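The naive version of this combination rule can be sketched in a few lines of Python. The numbers are purely hypothetical, and the sketch assumes the classes are independent pieces of evidence - the correlation discount the comment warns about is not modeled:

```python
def posterior_odds(prior_odds, bayes_factors):
    """Multiply prior odds by the Bayes factor contributed by each
    reference class the situation belongs to. Only valid if the
    class memberships are independent evidence; correlated classes
    would require discounting the shared information."""
    odds = prior_odds
    for bf in bayes_factors:
        odds *= bf
    return odds

def odds_to_prob(odds):
    """Convert odds to a probability."""
    return odds / (1 + odds)

# Hypothetical numbers: prior odds 1:1 that the mainstream claim is right;
# membership in "mainstream science" multiplies the odds by 9,
# membership in "highly politicized science" divides them by 3.
odds = posterior_odds(1.0, [9.0, 1 / 3])
print(round(odds_to_prob(odds), 2))  # 0.75
```

This makes the correlation problem concrete: if "highly politicized science" cases were already counted when estimating the "mainstream science" factor, multiplying the raw factors double-counts evidence and overstates the update.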

Or, if the reference class is "science-y Doomsday predictors", then they're almost certainly completely wrong. ...

This is just the Doomsday Problem, which has been discu...

That's not the Doomsday I was talking about - just predictions of massive suffering due to one cause or another, be it overpopulation, nuclear war, biological war, food shortages, water shortages, oil shortages, phosphorus shortages, guano shortages, whale oil shortages, rare earth metal shortages, shortages of virtually every commodity, flu pandemic, AIDS pandemic, mass-scale terrorism, workers' revolutions, Barbarian takeover, Catholic takeover, Communist takeover, Islamic takeover, or whatnot, just to name a few. Pretty much none of them caused the massive suffering and collapse of civilization predicted. Most do not involve the end of humanity, so invocations of the anthropic principle are misguided.
Nevertheless, if most everyone in the world is affected by such a disaster, then a large fraction of people will be right, so the point still applies.
On the other hand, if many disasters are predicted and (at most) one actually happens, then averaging over separate predictions or scenarios (instead of over people), we should expect any one scenario to be very improbable.
Why does that measure matter? You care about the risk of any existential threat. The fact that it happened by grey goo rather than Friendliness failure is little consolation.
It may matter because, if many scenarios have costly solutions that are very specific and don't help at all with other scenarios, and you can only afford to build a few solutions, you don't know which ones to choose.
Yes, I know that reasons exist to distinguish them, but I was asking for a reason relevant to the present discussion, which was discussing how to assess total existential risk.
Well, it has to do more with the original discussion. If you're going to discount doomsday scenarios by putting them in appropriate reference classes and so forth, then either you automatically discount all predictions of collapse (which seems dangerous and foolish); or you have to explain very well indeed why you're treating one scenario a bit seriously after dismissing ten others out of hand.
The original discussion was on this point: taw was saying that you should discount existential risk as such because it (the entire class of scenarios) is historically wrong. So it is the existential risk across all scenarios that was relevant. We'd see the exact same type of evidence today if a doomsday (of any kind) were coming, so this kind of evidence is not sufficient.
I thought I addressed this with the "usually not just barely enough to be discounted by anthropic principle, but spectacularly so" part. Anthropic principle reasoning can only be applied to disasters that have binary distributions - wipe out every observer in the universe (or at least on Earth), or don't happen at all - or at least extremely skewed power-law distributions [http://www.overcomingbias.com/2009/08/downturns-are-not-existential-risks.html]. I don't see any evidence that most disasters follow such distributions. I'd expect any non-negligible chance of the destruction of humanity by nuclear warfare to imply a near-certainty of limited-scale nuclear warfare with millions dying every couple of years. I think anthropic principle reasoning is so overused here, and so sloppily, that we'd be better off throwing it away completely.
This is a good point. Fortunately as it happens we can just create an FAI and pray unto Him to 'deliver us from evil'.
A relative absence of smaller disasters must count as evidence against the views of those predicting large disasters. There are some disasters which are all-or-nothing - but most disasters are variable in scale. We have had some wars and pandemics - but civilization has mostly brushed them off so far.
What absence of smaller disasters? Why don't the brushes with nuclear war and the other things you mention count? Also, civilizations have fallen - not in the sense of their genes dying out [1], but in the sense of losing the technological level they previously had: the end of the Islamic Golden Age, China after the 15th century, the fall of Rome (I remember reading one statistic saying that the Roman Empire's peak glass production wasn't surpassed until the 19th century, and not for lack of demand). [1] Unless you count the Neanderthals, who were probably more intelligent than H. sapiens - and all the other species in genus Homo.
Really? Can you give more detail (or a link) please?
Here's a summary of the different species in Homo [http://en.wikipedia.org/wiki/Human_evolution#Comparative_table_of_Homo_species] -- note the brain volumes. (I didn't mean to say all were intelligent, just that they were all near-human and went extinct, but the Neanderthals were likely more intelligent.) And here [http://en.wikipedia.org/wiki/H._neanderthalensis#Extinction]:
The WP table you link to gives these cranial volume ranges: H. sapiens, 1000-1850. H. neanderthalensis, 1200-1900. Given the size of the ranges and > 70% overlap, the difference between 1850 and 1900 at the upper end doesn't seem necessarily significant. Besides, brain size correlates strongly with body size, and Neanderthals were more massive, weren't they? More importantly, if the contemporary variation for H. sapiens (i.e. us) is all or most of that huge range (1000-1850 cc), do we know how it correlates with various measures of intellectual and other capabilities? Especially if you throw away the upper and lower 10% of variation.
It wasn't just the brain size, but the greater technological and cultural achievements that are evidenced in their remains, which are listed and cited in the articles.
By greater do you mean greater than those of H. sapiens who lived at the same time? AFAICS, the Wikipedia articles seem to state the opposite: that Neanderthals, late ones at least, were technologically and culturally inferior to H. sapiens of the same time. The paragraph right after the one you quoted from your second link states: The following paragraphs (through to the end of that section of the article) detail tools and cultural or social innovations that were (by conjecture) exclusive to H. sapiens. There are no specific things listed that were exclusive to Neanderthals. What "greater achievements" do you refer to? Also, I see no basis (at least in the WP article) for "the obvious fact that Neanderthals were highly intelligent", except for brain size which is hardly conclusive. Why can't we conclude that they were considerably less intelligent than their contemporary H. sapiens?
Okay, I confess, it's above my pay grade at this point: all I can do is defer to predominant theory in the field that Neanderthals were more intelligent at the level of the individual. Note that this doesn't mean they were more "collectively intelligent". If they were better at problem solving on their own, but weren't as social as humans, they may have failed to pass knowledge between people and ended up re-inventing the wheel too much.
But that's just what I'm asking about! Can you please give me some references that present or at least mention this theory? Because the WP articles don't even seem to mention it, and I can't find anything like it on Google.
The theory is that they had bigger brains - e.g. see the reference at: http://lesswrong.com/lw/165/how_inevitable_was_modern_human_civilization_data/124q [http://lesswrong.com/lw/165/how_inevitable_was_modern_human_civilization_data/124q]
Yes, but they also had more massive bodies, possibly 30% more massive [http://www.ecotao.com/holism/hu_neand.htm] than modern humans. I'm not sure that they had a higher brain/body mass ratio than we do and even if they had, a difference on the order of 10% isn't strong evidence when comparing intelligence between species.
Maybe their additional brain mass was used to give them really good instincts instead of the more general-purpose circuits we have. This is a quote from Wikipedia, supposedly paraphrasing Jordan, P. (2001), Neanderthal: Neanderthal Man and the Story of Human Origins: "Since the Neanderthals evidently never used watercraft, but prior and/or arguably more primitive editions of humanity did, there is argument that Neanderthals represent a highly specialized side branch of the human tree, relying more on physiological adaptation than psychological adaptation in daily life than "moderns". Specialization has been seen before in other hominins, such as Paranthropus boisei, which evidently was adapted to eat rough vegetation."
Given the circumstances that would have been quite some achievement!
Can you expand please? Exactly what measurement is correlated with cranial capacity at .2?
This is still civilisation's very first attempt, really. I did acknowledge the existence of wars and pandemics. However, disasters that never happened (such as nuclear war) are challenging to accurately assess the probability of.
Well, there was the (drawn out) fall of the Western Roman Empire. It was quite a collapse of civilization, with a lot of death and suffering.
Stories of the collapse of the Roman Empire are greatly exaggerated. A more accurate description would be that the center of Roman civilization shifted from Italy to the Eastern Mediterranean long before that (Wikipedia says [http://en.wikipedia.org/wiki/Late_Antiquity#Cities] that the population of Rome fell from almost a million to a mere 30 thousand in Late Antiquity, making it really just a minor town before the Barbarians moved in). Yes, the Western peripheries of the Empire were lost to Barbarians (who became increasingly civilized in the process), and the southern peripheries to Arabs (who also became increasingly civilized in the process). In neither case did civilization really collapse, and most importantly, at least until the battle of Manzikert in 1071, the central parts of the Roman (Byzantine) Empire were doing just fine.
Roman civilization had several major centers. The ones in the West gradually ceased to exist; that's the only sense in which the center of civilization "shifted". Some wealthy citizens of the city of Rome may have fled east, but the vast majority of the population of the western empire (Italy, Gaul, Iberia, Britain, Africa, not to mention the western Balkans and adjacent areas, which were also conquered by barbarians in the 4th century) were agricultural and could flee only if they left all their possessions behind. IOW, the fall of population by 60-80% in these areas during the 4th and 5th centuries wasn't accomplished by emigration. (Not to mention the immigration of barbarians.) As for the city of Rome, it was sacked [http://en.wikipedia.org/wiki/Sack_of_Rome] by barbarians in the years 410 and 455. WP suggests [http://en.wikipedia.org/wiki/History_of_Rome#Roman_empire] that its population declined from several hundred thousand to 80,000 during approximately the fifth century, but this is unsourced and I would like better information. At any rate, at the time of the 410 sack the population was already far below its 2nd-century peak of 2 million. By the 4th century the emperors didn't live there anymore (some of the 5th-century ones apparently did, though), so its decline started before the invasions. Still, it was much more than a "minor town" in 410, containing many riches to plunder and rich and noble people to hold for ransom. All in all, the Roman Empire did collapse. In ~400 the Western parts of the empire existed as they had for >200 years. By 450 it was effectively restricted to Italy and parts of southern Gaul, and in 476 it was officially terminated with the death of the last Western Emperor. Compare this map [http://en.wikipedia.org/wiki/File:RomanEmpire_117.svg] of the entire empire in 117 (not much different than in 400). That's a loss, inside 60 years, of all of Europe west of the Balkans (including Italy), and all of Africa west of Egypt (the pr
Not quite accurate; in 376 a big bunch of barbarians half-forced, half-negotiated their way into the Empire, became disloyal subjects, and subsequently pillaged the Balkans and defeated and killed an (Eastern) emperor along with his army. So it's better to say that the Western Empire declined almost entirely during the 100 years 376-476. (Politically, militarily, and at the level of local rule this is true. Culturally, the collapse did take longer in some places.)
I'd argue that culturally the Roman Empire didn't end: today 200 million Europeans (and even more people outside Europe) speak languages descended from Latin; to a first approximation, all writing is in the Roman script; and the Roman Catholic Church is the largest religion across areas and populations much greater than ancient Rome's. Oh, and that last paragraph included c. 15 words derived from Latin.
A few small, scattered, out of context, highly mutated facets of Roman culture have survived here and there. None of these, except Christianity, were among those most important to Romans, or those they saw as primarily distinguishing them from other cultures. And RC Christianity, apart from the name, is vastly different today than in 500 CE (and both are vastly different from RC Christianity in, say, 1300 CE). A modern Catholic would certainly be considered a sinner and a heretic many times over in 500 CE, and probably vice versa as well (I haven't checked). Incidentally, we are corresponding in a language that has much more in common with old Germanic tongues than with Latin, but it doesn't follow that we retain any of their culture. And here in Israel I talk and write a Hebrew which is quite similar to late Roman-era Hebrew - certainly more so than English is to German or Latin - and Orthodox Jews are the biggest religious segment in the country, but it doesn't follow that we (the non-religious people) have anything in common with ancient Jewish culture. (Consider that the vast majority of Europeans don't strictly follow RC rules either.)

Just so we are clear: What do you think about climate science?

It is important to remember that most of its work was before it was political. Just because energy (mainly coal and oil) companies don't like the policy implications of climate science and are willing to pay lots of people to speak ill of it, shouldn't make it a politicized science. Indeed this would place evolutionary biology into the highly politicized science category.

Allowing a subject's ideological enemies to have a say in its status without having hard evidence is not rational at all.

"Just because energy (mainly coal and oil) companies don't like the policy implications of climate science and are willing to pay lots of people to speak ill of it, shouldn't make it a politicized science." It seems as though energy companies have an incentive to downplay science that provides justification for limiting CO2, but don't scientists with government funding have incentive to play up science that provides justification for an increase in government power? How could we find out the magnitude of there effects without actually understanding the research ourselves?
You just confirm my point. The very fact that you use phrases like "policy implications of climate science" and "subject's ideological enemies" shows it's a highly politicized field. You wouldn't say "policy implications of quantum physics" or "chemistry's ideological enemies". In case you didn't follow Climategate, it looks like scientists from East Anglia University engaged in politics a lot, including dirty politics; and they were nothing like neutral scientists merely seeking the truth and letting others deal with policy. You may find their actions warranted due to some greater good, or not, but it's not normal scientific practice, and I'd be willing to bet at pretty high odds that you would not find anything like that in any evolutionary biology department.

That doesn't mean their findings are wrong. There are plenty of highly politicized issues where the mainstream is right or mostly right, but this rate is significantly lower than for non-politicized science. For example, mainstream accounts of the histories of most nations tend to be strongly whitewashed, as it's politically convenient. They are mostly accurate when it comes to events, but you can be fairly sure there are some systemic distortions. That's the reference class in which I put climate science - most likely right on main points, most likely with significant distortions, and with a non-negligible chance of being entirely wrong.

On the other hand, the moment climate scientists switch from talking about climate to talking about policy or the impact of climate change on human well-being, I estimate that they're almost certainly wrong. There is no reference class I can think of which suggests otherwise, and the closest reference class of Doomsday predictors has just this kind of track record. If you want some more, I did blog a bit about climate change recently: 1 [http://t-a-w.blogspot.com/2009/11/inevitability-of-geoengineering.html], 2 [http://t-a-w.blogspot.com/2009/11/inevitability-of-geoengineerin
Look, when you are sure you are right everything confirms your belief. Who are these 'neutral scientists'? When did climate scientists leave this class? What expert would just cede policy considerations to non-experts? I hope this class of people is a rare breed. Climate science has obvious policy implications since CO2 is the problem. Other sciences have had results that have clear policy implications. CFCs were bad. Marijuana is not that harmful. Cigarettes kill. Sometimes these results have helped develop good policy. Other times they were ignored. Saying CO2 is a problem is bound to become much more political. How does that have any effect on the science? It doesn't. The noise around a subject can be a measure of the subject's importance. It doesn't translate into some sort of useful truth measure.
Of course it does. Science is predicated on scientists practicing honestly. If scientists deliberately suppress disconfirmatory data, then peer review and reproducibility constraints won't mean anything. (And no I'm not addressing climatology here, just making a general point.) This does not mean you must assign a low probability to the science. It just means that this particular feature attenuates the odds you assign to it. Remember: The fact that a theory is good (high probability) does not mean everything about it must be evidence of its credibility!
One of these is significantly less certain than the other two, IMHO.
I replied to your point about evolutionary biology here [http://lesswrong.com/lw/1gw/contrarianism_and_reference_class_forecasting/1a5x] .

Good reference classes should be uncontroversial - most people will agree about what constitutes "mainstream scientists", but you'll probably get more disagreement about which parts of science are highly politicized.

Would that not bias the results? A category like "mainstream science" bundles together chemistry, with a virtually impeccable record, and psychology, with a highly dubious one. Using a category like that, we'll greatly underestimate the certainty of chemical predictions, and greatly overestimate the certainty of psychological ones. What I wanted to say is that we move from supporters and opponents arguing about particulars of a situation to supporters and opponents arguing about the proper reference class. Which might be an improvement, but it doesn't solve the issue.
You can put something in multiple categories, like I said before, and like cousin_it also said [http://lesswrong.com/lw/1gw/contrarianism_and_reference_class_forecasting/1a53] . The fact that mainstream science covers fields of widely-varying veracity just means that it has a near-unity Bayes factor. The reason that chemistry is so much more credible is that it's also in several other high-Bayes-factor reference classes. (ETA: e.g., "theories on which products in daily use are predicated") There seems to be a halo bias going on in some commenters here. You can put something in a low-credibility class and still consider it high credibility -- for example, if it belongs to other classes with a high Bayes factor. So you can consider e.g. evolutionary biology to be politicized, but still credible because its other achievements outweigh the discount from politicization. Agreeing with something doesn't mean saying only positive things about it.
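The "multiple reference classes with multiplying Bayes factors" picture can be sketched numerically. To be clear, the factor values below are invented purely for illustration; they aren't derived from any actual survey of track records:

```python
# Illustrative sketch of combining Bayes factors from several reference
# classes. The numeric factors here are made up for the example, not
# measured from real track records.

def posterior_odds(prior_odds, bayes_factors):
    """Multiply prior odds by the Bayes factor of each reference class."""
    odds = prior_odds
    for bf in bayes_factors:
        odds *= bf
    return odds

def odds_to_prob(odds):
    return odds / (1 + odds)

prior = 1.0  # even prior odds that a given claim is right

# A claim backed only by "mainstream science" (near-unity factor):
print(odds_to_prob(posterior_odds(prior, [1.2])))        # ~0.55

# Chemistry: also in a strong class like "theories on which products
# in daily use are predicated", so its odds get a large extra boost:
print(odds_to_prob(posterior_odds(prior, [1.2, 50.0])))  # ~0.98

# A politicized field: the mainstream-science factor gets discounted
# by membership in a low-credibility class (factor below 1):
print(odds_to_prob(posterior_odds(prior, [1.2, 0.3])))   # ~0.26
```

The point of the sketch is that membership in a low-credibility class just multiplies in a factor below one; it can still be outweighed by strong factors from other classes, which is the evolutionary-biology case above.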
Clinical psychology, sure. But psychology in general, i.e. cognitive science, umm, no.
I'm pretty sure that if you compare the track record of any field of psychology with the track record of chemistry, it will be highly unflattering to the former. I did not wish to imply that psychology is entirely without results, just that it compares rather poorly with hard science.
People who agree with a political bias generally don't believe there's a bias. Witness all the academic "liberals" who accept global warming, CFC fearmongering, overpopulation, and resource exhaustion without question and attack people who question any of these received wisdoms.

We have plenty of examples where such science was completely wrong and persisted in being wrong in spite of overwhelming evidence, as with race and IQ, nuclear winter, and pretty much everything in macroeconomics.

What overwhelming evidence has there been against the hypothesis that differences in average IQ among ethnic groups are at least partly genetic? Am I missing something? And what about nuclear winter? From a glance at the Wikipedia article I can't see such big differences between 21st-century predictions and 20th-century ones as to call the latter “completely wrong”.

We have plenty of examples where such science was completely wrong and persisted in being wrong in spite of overwhelming evidence, as with race and IQ[,]

The science was never wrong in this case. Stephen Jay Gould is certainly a scientist, but differential psychology and psychometrics are not his areas of scientific expertise. Jensen's views today are essentially what they were 40 years ago, and among the relevant community of experts they have remained relatively uncontroversial throughout this period.

What are you basing this claim of uncontroversial status on? http://en.wikipedia.org/wiki/Snyderman_and_Rothman_(study) Surveys of psychometricians and other relevant psychologists have never shown a consensus in support of Jensen's views on group differences. At most, a more moderate version of his position (some genetic component, perhaps small) has held plurality or bare majority support in the past (but might not anymore, in light of work such as Flynn's) while remaining controversial.
Hi Carl, I claimed that Jensen's views are relatively uncontroversial, not that they are entirely so. In making that claim, I wasn't thinking only of Jensen's views about the genetic component of the Black-White gap in IQ scores, but also about his views on the existence of such a gap and on the degree to which such scores measure genuine differences in intellectual ability. Perhaps it was confusing on my part to use Jensen's name to refer to the cluster of views I had in mind.

The point I wished to make was that the various views about race and IQ that taw might have had in mind in writing the sentence quoted above are not significantly more controversial today than they were in the past, and are shared by a sizeable portion of the relevant community of experts. As Snyderman and Rothman write (quoted by Gottfredson [http://dx.doi.org/10.1007/BF02693231], p. 54):

Anecdotally, I myself have become an agnostic about the source of the Black-White differences in IQ, after reading Richard Nisbett's Intelligence and How to Get It [http://www.amazon.com/Intelligence-How-Get-Schools-Cultures/dp/0393065057].
IIRC Jensen's original argument was based on very high estimates for IQ heritability (>.8). When within-group heritability is so high, a simple statistical argument makes it very likely that large between-group differences contain at least a genetic component. The only alternative would be that some unknown environmental factor would depress all blacks equally (a varying effect would reduce within-group heritability), which is not very plausible. Now that estimates of IQ heritability have been revised down to .5, the argument loses much of its power.
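The statistical argument can be made concrete with a back-of-the-envelope calculation. Note that it inherits the argument's contested assumption that within-group environmental variance also characterizes environmental differences between groups (exactly the assumption critics rejected), and the specific numbers are just the standard 15-point gap and the heritabilities mentioned above:

```python
import math

# Back-of-the-envelope version of the heritability argument sketched
# above. Assumes (as that argument does) that the within-group
# environmental variance also applies to differences between groups.

def env_gap_in_sd(gap_iq_points, heritability, iq_sd=15.0):
    """How many standard deviations of the environmental distribution
    two groups would need to differ by, if a phenotypic gap of
    gap_iq_points were purely environmental."""
    gap_sd = gap_iq_points / iq_sd                  # gap in phenotypic SDs
    env_sd_fraction = math.sqrt(1 - heritability)   # environment's SD share
    return gap_sd / env_sd_fraction

# With heritability ~0.8, a 15-point gap requires ~2.2 SD of purely
# environmental difference -- implausibly large, hence the argument's force:
print(round(env_gap_in_sd(15, 0.8), 2))  # 2.24

# With heritability revised down to ~0.5, only ~1.4 SD is needed,
# which is why the argument loses much of its power:
print(round(env_gap_in_sd(15, 0.5), 2))  # 1.41
```

The calculation just rescales the gap by the environmental share of the standard deviation, sqrt(1 - h²), which is why the required environmental difference blows up as heritability approaches 1.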
Bouchard's recent meta-analysis [http://dx.doi.org/10.1111/j.0963-7214.2004.00295.x] upholds such high estimates, at least for adulthood. These are the figures listed on Table 1 (p. 150):
Did you type the number for Age 16 correctly? I can think of no sensible reason why there should be a divot there.
I uploaded Bouchard's paper here [http://www.stafforini.com/txt/bouchard_-_genetic_influence_on_human_psychological_traits.pdf] . I also uploaded Snyderman and Rothman's study here [http://www.stafforini.com/txt/snyderman_&_rothman_-_survey_of_expert_opinion_on_intelligence_and_aptitude_testing.pdf] .
Yes, the figure is correct.
The Dickens-Flynn model, with high gene-environment correlations (the effects of genetic differences seem large because those genetic differences lead to assortment into different environments, but broad environmental change can still have major effects, as in the Flynn Effect), seems a very powerful indicator that environmental explanations are possible.