I recommend John Hempton's blog post on how badly people judge seeming madmen when the conventional view has only conventional wisdom to support it. I also like how he explains his research and conclusions in general.

the gist:

In early June Carson Block and his firm Muddy Waters Research published a report which made outrageous-sounding allegations against Sino Forest - then a highly respected Canadian-listed Chinese forestry company that had borrowed well over $2 billion to develop and expand forestry operations in China.

The base allegation in the report was that most of the forests did not exist and that, by implication, the (more than) $2 billion borrowed was stolen. Presumably many more shares have been sold too, taking the total theft well above $2 billion.

I am obsessed with discovering the ways my positions can be wrong.

Dundee Securities was the most prominent Sino Forest supporter, labeling Muddy Waters' research a "pile of crap". Somewhat more considered-sounding (but also flat wrong, just more reasonable-sounding) was Metal Augmentor, who found Carson "loose with the facts and somewhat breathless". On the naive-sounding side was Susan Mallin, whose complaint was that she had "never seen a research report written in this manner". More prominent people were fooled too.

The analysis of these people was staggeringly weak and self-referential. They judged Sino Forest against data provided by Sino Forest or people associated with Sino Forest. This is an elementary mistake in assessing fraud. To find fraud you need to be able to judge against things you are fairly sure are not fraudulent.

Everything the Carson Block doubters said sounded reasonable. Certainly more reasonable than Carson Block sounded, because Carson Block held the radical position. Sounding reasonable, however, was wrong.

I think what is going on here is a general problem. When someone says something - anything - that is so far from the consensus as to sound outrageous, they will be considered mad, and sometimes they will be considered mad even after they are proven right.

10 comments

Finance does not seem to me to produce clean examples because there is such a strong incentive to talk one's book. People who owned the company and immediately believed the report had a strong incentive to condemn the report so that they could sell first. (This is illegal, but it's easy to maintain plausible deniability.)

That does not address the title of the post - still being judged mad after being proven right - but I don't see that discussed in Hempton's post, let alone in JG's excerpts; it is just flatly asserted.

I agree. It's just asserted, but the idea struck me.

I definitely expect JH to talk his book.

I just wanted to point out that I thought the article was going in a different direction than it ended up going, because I misinterpreted the word 'mad'. I thought it was going to be about a study showing some kind of persecution complex in the contrarian, who was unable to derive joy from being proven right. Maybe it was just me, but that threw me off until the last paragraph or so.

I have normalized your formatting.

Thanks. I have no idea why my pasting text into a blockquote in the post editor came out so odd and fancy.

From reading the lw'er's "tl;dr" version of the full article, I'm not certain that the Sino Forest 'case' is a very good one for use here.

Consider: While the "source" of information should not be used to evaluate the probability of that information's accuracy, the manner in which information is delivered should. If an improper format of accreditation is used, or if obvious and overt logical fallacies are present, that information would rightly be discounted in any estimate of the accuracy of a statement.

For example: I consider myself a strong/gnostic atheist, so for me it is obvious that "atheism" is the "correct" answer. Should I encounter someone, however, whose sole reason for being an atheist was that their little sister died (WARNING: TVTROPES.org link!!) -- I would be forced to tell that person ve was being a complete moron, and that nobody should listen to ver reasoning.

The mere accident of a conclusion coinciding with the truth does not make that conclusion justified. Among the tools we use to judge how much weight to give any given input, the 'proper indicators of subject-expertise' should count heavily. If a hobo with a speech impediment walks up to me on the street and tells me he's got a surefire plan for making money on the stock market written on the back of a Subway wrapper, I'm not going to be inclined to waste my time reading it. On the other hand, if a man makes the news for turning mere thousands into billions over the course of a week and then offers me the secret algorithm that's the source of his wealth, I'm going to take him quite seriously: he's got the right indicators of the veracity of his beliefs.

Now, that being said -- there needs to be a means to re-weight assessments of the credibility of statements to account for outlier positions. I would tend to believe that willingness to publicize resources, citing credible sources of data, rigorous documentation of 'radical' claims, and so on would be useful to this end.


As Sagan said: "Extraordinary claims require extraordinary evidence." While we should remember that's just a heuristic -- I don't know that I see a problem here. Limited information and the constant need to reassess based on it mean mistakes will be made. They are unavoidable. Do not let the perfect become the enemy of the good.
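One way to make that heuristic concrete is to read it as Bayes' rule in odds form. Here is a minimal sketch; the prior and likelihood ratio below are purely illustrative assumptions, not estimates about Sino Forest or any real firm:

```python
# Sketch of "extraordinary claims require extraordinary evidence" as
# Bayes' rule in odds form. All numbers are illustrative assumptions.

prior_odds = 1 / 99       # assume ~1 in 100 comparable firms is a total fraud
likelihood_ratio = 20     # assume the evidence is 20x more likely if fraud is real

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"posterior P(fraud) ~= {posterior_prob:.2f}")
# ~0.17: even strong-sounding evidence only moves a low prior part of the way;
# overturning a confident consensus takes a very large likelihood ratio.
```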

Reading the links, the one error attributed to the whistleblower was actually just an example of real data contradicting the official lies. All the other criticisms mentioned are matters of surface style.


What I personally found most interesting about this was not that the whistleblowers were discounted before and after they were proven right (we see this in many bubbles, for example the housing bubble, and apparently it's not yet certain that Sino Forest is a massive fraud), but how one could use a sort of Outside View/Fermi calculation to sanity-check the claims. If Sino Forest was really causing 17m cubic meters of wood to be processed a year, where was all the processing? This simple question tells us a lot.
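A toy version of that Fermi check, where the 17m cubic meters comes from the claim above and the per-mill capacity is a made-up illustrative assumption rather than real industry data:

```python
# Toy Fermi check: how much visible processing infrastructure would the
# claimed harvest volume imply? The mill capacity is a hypothetical number
# chosen only for illustration.

claimed_volume_m3_per_year = 17_000_000       # figure cited in the claim
assumed_mill_capacity_m3_per_year = 500_000   # hypothetical large mill, running flat out

mills_needed = claimed_volume_m3_per_year / assumed_mill_capacity_m3_per_year
print(f"implied number of large mills: ~{mills_needed:.0f}")
# ~34 large industrial sites, plus the trucks and rail traffic to feed them.
# Even if the capacity assumption is off by 2-3x either way, the claim implies
# a physical footprint someone should be able to point to - exactly what the
# "where is all the processing?" question probes.
```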

I'm personally wary of hindsight bias sort of stuff. Of course this all seems obvious to us now, but... would it have when we first examined it? I don't know.

That question reminds me of http://www.gladwell.com/2007/2007_01_08_a_secrets.html

As far as Sino Forest goes, I haven't read the full analysis that triggered this, but nothing in what I've read has mentioned hot new data that makes the fraud case convincing.