Yes, this needed to be said, and I suspect a few LessWrongians have a blind spot here.

Example 1

"Oh," she said, "and do I have to be absolutely certain before my advice can shift your opinions? Does it not suffice that I am a domain expert, and you are not?" [...]

"So," Jeffreyssai said. "Not for the sake of arguing. Only because I want to know the answer. Are you sure?" He didn't even see how she could guess.

"Pretty sure," she said, "we've been collecting statistics for a long time, and in nine hundred and eight-five out of a thousand cases like yours—"

Then she laughed at the look on his face. "No, I'm joking. Of course I'm not sure. This thing only you can decide. But I am sure that you should go off and do whatever it is you people do—I'm quite sure you have a ritual for it, even if you won't discuss it with outsiders—when you very seriously consider abandoning a long-held premise of your existence."

It was hard to argue with that, Jeffreyssai reflected, the more so when a domain expert had told you that you were, in fact, probably wrong.

Example 2

What you're talking about above is not a concrete experimental result. Neither is it a standard causal theory, nor is it a causal theory that strikes me as particularly likely to be true in the absence of experimental validation. Nor is it valid math validly interpreted, or logic that seems necessarily true across lawful possible worlds. I don't care if it works for you and for other people you know; that doesn't show anything about the truth of the model; there's this thing called a placebo effect. The advice fails to meet the standard we're accustomed to, and that's why we're ignoring it. It is just one more theory on the Internet at this point, and one more set of orders delivered in a confident tone but not explained well enough to interpret at all, really.

I wish I knew of some good experimental results to back [my post] up, as this would render it less ignorable.

Hm. This seems a bit like writing the conclusion at the bottom of the page, to me.

Jayson_Virissimo: It bothers me when people say something doesn't work because it is a placebo effect. If it actually has a placebo effect, then it does work!

Knowledge value = knowledge quality × domain importance

by John_Maxwell · 1 min read · 16th Apr 2012 · 41 comments



Months ago, my roommate and I were discussing someone who had tried to replicate Seth Roberts' butter mind self-experiment. My roommate seemed to be making almost no inference from the person's self-reports, because they weren't part of a scientific study.

But knowledge does not come in two grades, "scientific" and "useless". Anecdotes do count as evidence; they are just weak evidence. And well-designed scientific studies constitute stronger evidence than poorly designed ones. There's a continuum of knowledge quality.

Knowing that humans are biased should make us take their stories and ad hoc inferences less seriously, but not discard them altogether.
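To make the "weak but nonzero evidence" point concrete, here's a minimal Bayesian sketch with made-up likelihood ratios (none of these numbers come from the post): a single anecdote nudges the posterior a little, a well-designed study moves it a lot, and neither is worth exactly zero.

```python
# Toy Bayesian update with illustrative likelihood ratios (invented for
# this example), showing that weak evidence still shifts belief.
def update(prior_prob, likelihood_ratio):
    """Convert probability to odds, multiply by the likelihood ratio, convert back."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

print(update(0.5, 1.5))   # one anecdote (LR ~1.5): posterior ~0.60
print(update(0.5, 10.0))  # well-designed study (LR ~10): posterior ~0.91
```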


There exist some domains where most of our knowledge is fairly low quality. But that doesn't mean they're not worth studying, if the value of information in the domain is high.

For example, a friend of mine read a bunch of books on negotiation and says this is the best one. Flipping through my copy, I see the author mostly just enumerating his own thoughts, stories, and theories. So one might be tempted to discard the book entirely because it isn't very scientific.

But that would be a mistake. If a smart person thinks about something for a while and comes to a conclusion, that's decent-quality evidence that the conclusion is correct. (If you disagree with me on this point, why do you think about things?)

And the value of information in the domain of negotiation can be very high: if you're a professional, being able to negotiate your salary better can net you hundreds of thousands of dollars over the course of a career. (Anchoring means your salary next year will probably just be an incremental raise from your salary last year, so starting salary is very important.)
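As a rough illustration of that "hundreds of thousands" claim, with numbers I'm assuming rather than taking from the post: a $5,000 higher starting salary, carried forward by ordinary percentage raises, compounds to roughly a quarter-million dollars over a 30-year career.

```python
# Hypothetical figures: a $5,000 starting-salary bump, 3% annual raises,
# a 30-year career. The bump compounds because each raise is a percentage
# of the (now higher) previous salary.
bump, annual_raise, years = 5_000, 0.03, 30
extra_lifetime_pay = sum(bump * (1 + annual_raise) ** y for y in range(years))
print(round(extra_lifetime_pay))  # ~237,877
```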

Similarly, this self-help book is about as dopey and unscientific as they come. But doing one of the exercises from it years ago destroyed a large insecurity of mine that I was only peripherally aware of. So I probably got more out of it in instrumental terms than I would've gotten out of a chemistry textbook.

In general, self-improvement seems like a domain of really high importance that's unfortunately flooded with low-quality knowledge. If you invest two hours implementing some self-improvement scheme and find yourself operating 10% more effectively, you'll double your investment in just a week, assuming a 40-hour work week. (ALERT: this seems like a really important point! I'd write an entire post about it, but I'm not sure what else there is to say.)
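Here's a sketch of that break-even arithmetic, using the assumptions stated above (two hours invested, a 10% effectiveness gain, a 40-hour work week):

```python
# Payback arithmetic for the self-improvement example above.
invested_hours = 2          # time spent implementing the scheme
effectiveness_gain = 0.10   # assume you now get 10% more done per hour
weekly_hours = 40           # 40-hour work week

hours_recouped_per_week = effectiveness_gain * weekly_hours  # 4 hours/week
weeks_to_double = 2 * invested_hours / hours_recouped_per_week
print(weeks_to_double)  # 1.0 -- the two-hour investment is doubled in one week
```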

Here are some free self-improvement resources where the knowledge quality seems at least middling: For people who feel like failures. For students. For mathematicians. Productivity and general ass kicking (web implementation for that last idea). Even more ass kicking ideas that you might have seen already.
