Once upon a time I tried to tell my mother about the problem of expert calibration, saying: “So when an expert says they’re 99% confident, it only happens about 70% of the time.” Then there was a pause as, suddenly, I realized I was talking to my mother, and I hastily added: “Of course, you’ve got to make sure to apply that skepticism evenhandedly, including to yourself, rather than just using it to argue against anything you disagree with—”

And my mother said: “Are you kidding? This is great! I’m going to use it all the time!”

Taber and Lodge’s “Motivated Skepticism in the Evaluation of Political Beliefs” describes the confirmation of six predictions:

1. Prior attitude effect. Subjects who feel strongly about an issue—even when encouraged to be objective—will evaluate supportive arguments more favorably than contrary arguments.

2. Disconfirmation bias. Subjects will spend more time and cognitive resources denigrating contrary arguments than supportive arguments.

3. Confirmation bias. Subjects free to choose their information sources will seek out supportive rather than contrary sources.

4. Attitude polarization. Exposing subjects to an apparently balanced set of pro and con arguments will exaggerate their initial polarization.

5. Attitude strength effect. Subjects voicing stronger attitudes will be more prone to the above biases.

6. Sophistication effect. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to the above biases.

If you’re irrational to start with, having more knowledge can hurt you. For a true Bayesian, information would never have negative expected utility. But humans aren’t perfect Bayes-wielders; if we’re not careful, we can cut ourselves.

I’ve seen people severely messed up by their own knowledge of biases. They have more ammunition with which to argue against anything they don’t like. And that problem—too much ready ammunition—is one of the primary ways that people with high mental agility end up stupid, in Stanovich’s “dysrationalia” sense of stupidity.

You can think of people who fit this description, right? People with high g-factor who end up being less effective because they are too sophisticated as arguers? Do you think you’d be helping them—making them more effective rationalists—if you just told them about a list of classic biases?

I recall someone who learned about the calibration/overconfidence problem. Soon after he said: “Well, you can’t trust experts; they’re wrong so often—as experiments have shown. So therefore, when I predict the future, I prefer to assume that things will continue historically as they have—” and went off into this whole complex, error-prone, highly questionable extrapolation. Somehow, when it came to trusting his own preferred conclusions, all those biases and fallacies seemed much less salient—leapt much less readily to mind—than when he needed to counter-argue someone else.

I told the one about the problem of disconfirmation bias and sophisticated argument, and lo and behold, the next time I said something he didn’t like, he accused me of being a sophisticated arguer. He didn’t try to point out any particular sophisticated argument, any particular flaw—just shook his head and sighed sadly over how I was apparently using my own intelligence to defeat itself. He had acquired yet another Fully General Counterargument.

Even the notion of a “sophisticated arguer” can be deadly, if it leaps all too readily to mind when you encounter a seemingly intelligent person who says something you don’t like.

I endeavor to learn from my mistakes. The last time I gave a talk on heuristics and biases, I started out by introducing the general concept by way of the conjunction fallacy and representativeness heuristic. And then I moved on to confirmation bias, disconfirmation bias, sophisticated argument, motivated skepticism, and other attitude effects. I spent the next thirty minutes hammering on that theme, reintroducing it from as many different perspectives as I could.

I wanted to get my audience interested in the subject. Well, a simple description of conjunction fallacy and representativeness would suffice for that. But suppose they did get interested. Then what? The literature on bias is mostly cognitive psychology for cognitive psychology’s sake. I had to give my audience their dire warnings during that one lecture, or they probably wouldn’t hear them at all.

Whether I do it on paper, or in speech, I now try to never mention calibration and overconfidence unless I have first talked about disconfirmation bias, motivated skepticism, sophisticated arguers, and dysrationalia in the mentally agile. First, do no harm!


Humans aren't just not perfect Bayesians; very, very few of us are even Bayesian wannabes. In essence, everyone who thinks that it is more moral/ethical to hold some proposition than to hold its converse is taking some criterion other than apparent truth as normative with respect to the evaluation of beliefs.

DSimon: This is something of a nitpick, but I think that it is more moral/ethical to hold a proposition than to hold its converse if there is good reason to think that that proposition is true. Is this un-Bayesian?

It's a meta-level/aliasing sort of problem, I think. You don't believe it's more ethical/moral to believe any specific proposition, you believe it's more ethical/moral to believe 'the proposition most likely to be true', which is a variable which can be filled with whatever proposition the situation suggests, so it's a different class of thing. Effectively it's equivalent to 'taking apparent truth as normative', so I'd call it the only position of that format that is Bayesian.

christopherj: This website seems to have two definitions of rationality: rationality as truth-finding, and rationality as goal-achieving. Since truth deals with "is", and morality deals with "ought", morality will be of the latter kind. Because they are two different definitions, at some point they can be at odds -- but what if your primary goal is truth-finding (which might be required by your statement if you make no exceptions for beneficial self-deception)? How would you feel about ignoring some truths, because they might lead you to miss other truths? This article is about how learning some truths can prevent you from learning other truths, with an implication that order of learning will mitigate these effects. In some cases, you might be well served by purging truths from your mind (for example, "there is a minuscule possibility of X" will activate priming and the availability heuristic). Some truths are simply much more useful than others, so what do you do if some lesser truths can get in the way of greater truths?
Nornagest: Neither truth-finding nor goal-achieving quite captures the usual sense of the word around here. I'd say the latter is closer to how we usually use it, in that we're interested in fulfilling human values; but explicit, surface-level goals don't always further deep values, and in fact can be actively counterproductive thanks to bias or partial or asymmetrical information. Almost everyone who thinks they terminally value truth-finding is wrong; it makes a good applause light, but our minds just aren't built that way. But since there are so many cognitive and informational obstacles in our way, finding the real truth is at some point going to be critically important to fulfilling almost any real-world set of human values. On the other hand, I don't rule out beneficial self-deception in some situations, either. It shouldn't be necessary for any kind of hypothetical rationalist super-being, but there aren't too many of those running around.

datadataeverywhere: This seems like a shorthand for denying the existence of morals and ethics. I don't think that's what you mean, but I've heard that exact argument used to support nihilism. If I say "torture is unethical", I might mean "I believe that torture, for its own sake and without a greater positive offset, is unethical", which is objectively true (please, I entreat you to examine my source code). But it would be just as objectively true to say the negation if I actually believed the negation. Is it neither moral nor immoral to hold the belief that torture is a bad thing?

Hmm... thanks for writing this. I just realized that I may resemble your argumentative friend in some ways. I should bookmark this.

Stanovich's "dysrationalia" sense of stupidity is one of my greatest fears.

I didn't know whether to post this reply to "Black swans from the future" or here, so I'll just reference it:

http://www.overcomingbias.com/2007/04/black_swans_fro.html#comment-65404590

Good post, Eliezer.

I've pointed before to this very good review of Philip Tetlock's book, Expert Political Judgment. The review describes the results of Tetlock's experiments evaluating expert predictions in the field of international politics, where they did very poorly. On average the experts did about as well as random predictions and were badly outperformed by simple statistical extrapolations.

Even after going over the many ways the experts failed in detail, and even though the review is titled "Everybody’s An Expert", the reviewer concludes, "But the best lesson of Tetlock’s book may be the one that he seems most reluctant to draw: Think for yourself."

Does that make sense, though? Think for yourself? If you've just read an entire book describing how poorly people did who thought for themselves and had a lot more knowledge than you do, is it really likely that you will do better to think for yourself? This advice looks like the same kind of flaw Eliezer describes here, the failure to generalize from knowledge of others' failures to appreciation of your own.

There's a better counterargument than that in Tetlock - one of the data points he collected was from a group of university undergraduates, and they did worse than the worst experts, worse than blind chance. Thinking for yourself is the worst option Tetlock considered.

Peterdjones: "Thinking for yourself is the worst option Tetlock considered." Worse for making predictions, I suppose. But if people never think for themselves, we are never going to have any new ideas. Statistical extrapolation may be great for prediction, but it is poor for originality. So we value thinking for oneself. But the hit-rate is terrible. We have to put up with huge amounts of crap to get the gems. Most Ideas are Wrong, as I like to say when people tell me I'm being "too critical".
RobinZ: Oh, it's less general than that - it's worse for political forecasting specifically. For other kinds of prediction (e.g. will this box fit under this table?), thinking for yourself is often one of the better options. But, you know, political forecasting is one of the things we often care about. So knowing rules of thumb like "trust the experts, but not very much" is quite helpful.

RobinZ: Actually, when I was rereading the comments and saw your mention of Tetlock, I thought you would point out the bit where he noted that the hedgehog predictors made worse predictions within their area of expertise than outside it.

MarsColony_in10years: Fantastic article. The problem is that now I have a pet theory with which to dismiss anything said by a TV pundit with whom I disagree: I'd be better off guessing myself or at random than listening to them. Maybe I can estimate how many variables various conclusions rest on, and how much uncertainty is in each, in order to estimate the total uncertainty in various possible outcomes. I'll have to pay special attention to any evidence that undercuts my beliefs and assumptions, to try to avoid confirmation bias.
ChristianKl: That's great, stop watching TV. TV pundits are an awful source of information.

MarkusRamikin: One of my past life decisions I consistently feel very happy about.

Epictetus: TV pundits are entertainers. They're hired less for their insightful commentary and more for their ability to engage an audience.

Hal, to be precise, the bias is generalizing from knowledge of others' failures to skepticism about disliked conclusions, but failing to generalize to skepticism about preferred conclusions or one's own conclusions. That is, the error is not absence of generalization, but imbalance of generalization, which is far deadlier. I do agree with you that the reviewer's conclusion is not supported (to put it mildly) by the evidence under review.

So why, then, is this blog not incorporating more statistical and collective de-biasing mechanisms? There are some out-of-the-box web widgets and mildly manual methods to incorporate that would at the very least provide new grist for the discussion mill.

The error here is similar to one I see all the time in beginning philosophy students: when confronted with reasons to be skeptics, they instead become relativists. That is, where the rational conclusion is to suspend judgment about an issue, all too many people instead conclude that any judgment is as plausible as any other.

I would love to hear more about such methods, Rafe. This blog tends to be somewhat abstract and "meta", but I would like to do more case studies on specific issues and look at how we could come to a less biased view of the truth. I did a couple of postings on the "Peak Oil" controversy a few months ago along these lines.

Rafe, name three.

Rooney, I don't disagree that this would be a mistake, but in my experience the balance of evidence is very rarely exactly even - because hypotheses have inherent penalties for complexity. Where there is no evidence in favor of a complicated proposed belief, it is almost always correct to reject it, not suspend judgment. The only cases I can think of where I suspend judgment are binary or small discrete hypothesis spaces, like "Was it murder or suicide?", or matters like the anthropic principle, where there is no null hypothesis to take refuge in, and any position is attackable.

I have also had repeated encounters with individuals who take the bias literature to provide 'equal and opposite biases' for every situation, and take this as reason to continue to hold their initial beliefs. The situation is reminiscent of many economic discussions, where bright minds question whether the effect of a change on some quantity will be positive, negative or ambiguous. The discussants eagerly search for at least one theoretical effect that could move the quantity in a positive direction, one that could move it in the negative, and then declare the effect ambiguous after demonstrating their cleverness, without evaluating the actual size of the opposed effects.

I would recommend that when we talk about opposed biases, at least those for which there is an experimental literature, we should give rough indications of their magnitudes to discourage our audiences from utilizing the 'it's all a wash' excuse to avoid analysis.

As someone who seems to have "thrown the kitchen sink" of cognitive biases at the free will problem, I wonder if I've suffered from this meta-bias myself. I find only modest reassurance in the facts that: (i) others have agreed with me and (ii) my challenge for others to find biases that would favor disbelief in free will has gone almost entirely unanswered.

But this is a good reminder that one can get carried away...

Eliezer, I agree that exactly even balances of evidence are rare. However, I would think suspending judgment to be rational in many situations where the balance of evidence is not exactly even. For example, if I roll a die, it would hardly be rational to believe "it will not come up 5 or 6", despite the balance of evidence being in favor of such a belief. If you are willing to make >50% the threshold of rational belief, you will hold numerous false and contradictory beliefs.

Also, I have some doubt about your claim that when "there is n…

DanielLC: If you gave him almost anything else that complex, it actually would be false. Once something gets even moderately complex, there is a huge number of other things that complex. Technically, he should figure that there's just a one in 10^somethingorother chance that it's true, but you can't remember all 10^somethingorother things that are that unlikely, so you're best off to reject it.
bigjeff5: A Bayesian would not say definitively that it would not come up as 5 or 6. However, if you were to wager on whether or not the die will come up as either 5 or 6, the only rational position is to bet against it. Given enough throws of the die, you will be right 2/3 of the time.

At the most basic level, the difference between Bayesian reasoning and traditional rationalism is that a Bayesian only thinks in terms of likelihoods. It's not a matter of "this position is at a >50% probability, therefore it is correct"; it is a matter of "this position is at a >50% probability, so I will hold it to be more likely correct than incorrect until that probability changes".

It's a difficult way of thinking, as it doesn't really allow you to definitively decide anything with perfect certainty. There are very few beliefs in this world for which a 100% probability exists (there must be zero evidence against a belief for this to occur). Math proofs, really, are the only class of beliefs that can hold such certainty. As such the possibility of being wrong pretty much always exists, and must always be considered, though by how much depends on the likelihood of the belief being incorrect.

If no evidence is given for the belief, of course he is right to reject it. It is the only rational position Archimedes can take. Without evidence, Archimedes must assign a 0%, or near 0%, probability to the likelihood that the 20th-century position is correct. However, if he is presented with the evidence for which we now believe such things, his probability assignment must change, and given the amount of evidence available it would be irrational to reject it.

Just because you were wrong does not mean you were thinking irrationally. The converse of that is also true: just because you were right does not mean you were thinking rationally. Also note that it is a fairly well known fact that 20th-century physics is broken - i.e. incorrect, or at least not completely correct. We simply have nothing parti…
wedrifid: You need to specify even odds. Bayesians will bet on just about anything if the price is right.

bigjeff5: Odds on dice are usually assumed even unless specified otherwise, but it's never wrong to specify it, so thanks.

wedrifid: On the other hand when considering rational agency some come very close to defining 'probability' based on what odds would be accepted for bets on specified events.

JGWeissman: There are none [http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/].

bigjeff5: Thanks, I was a little unsure of stating that there is no such thing as 100% probability. That post is very helpful.

raylance: Ah, the Gödelian "This sentence is false."
encounterpiyush: It would be irrational to believe "it will not come up 5 or 6" because P(P(5 or 6) = 0) = 0, so you know for certain that it's false. As you said, "Claims about the probability of a given claim being true, helpful as they may be in many cases, are distinct from the claim itself." Before taking up any belief (if the situation demands taking up a belief, like in a bet, or living life), a Bayesian would calculate the likelihood of it being true vs the likelihood of it being false, and will favour the higher likelihood. In this case, the likelihood that "it will not come up 5 or 6" is (certainly) true is 0, so a Bayesian would not take up that position.

Now, you might observe that the belief that "1, 2, 3 or 4 will come up" also holds a likelihood of zero. In the case of a die roll, any statement of this form will be false, so a Bayesian will take up beliefs that talk probabilities and not certainties. (As bigjeff5 explains, "At the most basic level, the difference between Bayesian reasoning and traditional rationalism is that a Bayesian only thinks in terms of likelihoods.")

Of course, one can always say "I don't know", but saying "I don't know" would have an inferior utility in life than being a Bayesian. So, for example, assume that your life depends on a series of die rolls. You can take two positions:

1) You say "I believe I don't know what the outcome would be" on every roll.

2) You bet on every die roll according to the information you have (in other words, you say "I believe that outcome X has Y chance of turning up").

Both positions would of course be agreeable, but the second position would give you a higher payoff in life. Or so Bayesians believe.
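The 2/3 figure this subthread keeps returning to is easy to check empirically. A minimal simulation, assuming a fair six-sided die (Python used purely for illustration):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

trials = 100_000
# Bet against "5 or 6": the bet wins whenever the die shows 1-4.
wins = sum(1 for _ in range(trials) if random.randint(1, 6) <= 4)

print(wins / trials)  # close to 2/3
```

No single roll settles the matter with certainty; it is over many rolls that the win-rate converges on the likelihood, which is the sense in which the Bayesian bets against "5 or 6" without ever claiming it cannot happen.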

"Nonetheless, it would not be correct for Archimedes to conclude that Bell's theorem is therefore false."

I think this is a terrible hypothetical to use to illuminate your point, since most of Archimedes' decision would be based on how much evidence is proper to give to the source of information he gets the theorem from. I would say that, for any historically plausible mechanism, he'd certainly be correct in rejecting it.

Rooney, where there isn't any evidence, then indeed it may be appropriate to suspend judgment over a large hypothesis space, which indeed is not the same as being able to justifiably adopt a random such judgment - anyone who wants to assign more than default probability mass is being irrational.

I concur that Bell's theorem is a terrible hypothetical, because the whole point is that, in real life, without evidence, there's absolutely no way for Archimedes to just accidentally hit on Bell's theorem - in his lifetime he will not reach that part of the search…

Eliezer, I think we are misunderstanding each other, possibly merely about terminology.

When you (and pdf) say "reject", I am taking you to mean "regard as false". I may be mistaken about that.

I would hope that you don't mean that, for if so, your claim that "no evidence in favor -> almost always false" seems bound to lead to massive errors. For example, you have no evidence in favor of the claim "Rooney has string in his pockets". But you wouldn't on such grounds aver that such a claim is almost certainly false…

The probability that an arbitrary person has string in their pockets (given that they're wearing pockets at the time) is knowable, and given no other information we could say that it's X%. The proper attitude towards the claim "Rooney has string in his pockets" is that it has about an X% chance of being true. (Unless we get other evidence to the contrary--and the fact that someone made the claim might be evidence here.)

Say X is 3%. Then I should say that Rooney very likely has no string in his pockets. Say X were 50%. Then I should say that there…

Pdf, maybe you're referring to "I Don't Know"?

Rooney, I think you're interpreting "reject" as "state with certainty that it is not true" or "behave as if there is definite evidence against it". Whereas what I mean is that one should bet at odds that are tiny or even infinitesimal when dealing with an evidentially unsupported belief in a very large search space. You have no choice but to deal this way with the vast majority of such beliefs if you want your total probabilities to sum to 1.

By "suspending judgment" I mean neither accepting a claim as true, nor rejecting it as false. Claims about the probability of a given claim being true, helpful as they may be in many cases, are distinct from the claim itself. So, pdf, when you say "The proper attitude towards the claim "Rooney has string in his pockets" is that it has about an X% chance of being true", where X is unknown, I don't see how this is materially different from saying "I don't know if Rooney has string in his pockets", which is to say tha…

You have no choice but to bet at some odds. Life is about action, action is about expected utility, and expected utility demands that you assign some subjective weighting to outcomes based on how likely they are. Walking down the street, I offer to bet you a million dollars against one dollar that a stranger has string in their pockets. Do you take the bet? Whether you say yes or no, you've just made a statement of probability. The null action is also an action. Refusing to bet is like refusing to allow time to pass.

Nor do I permit probabilities of zero and one. All belief is belief of probability.
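The million-to-one stakes in the comment above pin down what refusing the bet implies. A back-of-the-envelope sketch (the dollar figures are just the ones from the comment; the function name is ours):

```python
stake_against = 1_000_000  # what you win if the stranger has no string
stake_for = 1              # what you lose if the stranger has string

def expected_value(p_string):
    """Expected value of taking the bet, given P(stranger has string)."""
    return (1 - p_string) * stake_against - p_string * stake_for

# Declining the bet is only consistent with a probability above this:
break_even = stake_against / (stake_against + stake_for)

print(break_even)                # just under 1 (1,000,000 / 1,000,001)
print(expected_value(0.99) > 0)  # True: taking the bet is still +EV at 99%
```

Whichever way you answer, you have implicitly placed your probability on one side of that break-even point; that is the sense in which refusing to bet is itself a bet.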

I have to bet on every possible claim I (or any sentient entity capable of propositional attitudes in the universe) might entertain as a belief? That is highly implausible as a descriptive claim. Consider the claim "Xinwei has string in his pockets" (where Xinwei is a Chinese male I've never met). I have no choice but to assign probability to that claim? And all other claims, from "language is the house of being" to "a proof for Goldbach's conjecture will be found by an unaided human mind"? If Eliezer offers me a million…

Michael Rooney: I don't think Eliezer is saying that it's invalid to say "I don't know." He's saying it's invalid to have as your position "I should not have a position."

The analogy of betting only means that every action you take will have consequences. For example, the decision not to try to assign a probability to the statement that Xinwei has a string in his pocket will have some butterfly effect. You have recognized this, and have also recognized that you don't care, and have taken the position that it doesn't matter. The key here is that, as you admit, you have taken a position.

And now that we know that, we're going to be more biased. Why'd you have to say that?

wedrifid: Because knowing about biases can also help people. A cornerstone premise of Eliezer's entire life strategy.

"Sophistication effect. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to the above biases."

Well, what about that always taking on the strongest opponent and the strongest arguments business? ;)

Actually, when I see a fellow with a third degree in Philosophy, I leave him for someone who'll have a similar degree. It isn't that Sorbonne initiates are hopeless; it's arguments with 'em that really are (hopeless).

"Things will continue historically as they have" is in some contexts hardly the worst thing you could assume, particularly when the alternative is relying on expert advice that a) is from people who historically have not had skill at predicting things and b) are making predictions reliant on complex ideas that you're in no position to personally evaluate.

I think I've got a pretty good feeling on those 6 predictions and have seen them in action numerous times. Most especially in discussions on religion. Does the following seem about right LWers?

The prior attitude effect, both atheists and theists have prior strong feelings of their respective positions and many of them tend to evaluate their supportive arguments more favourably, whilst also aggressively attacking counters to their arguments as predicted by the disconfirmation bias.

The internet, being what it is, provides a ready source of material to confi…

The link to the paper is dead. I found a copy here: Taber & Lodge (2006).

Kenny: Here's yet another link, this one not seemingly associated with an individual course: http://www.unc.edu/~fbaum/teaching/articles/AJPS-2006-Taber.pdf

As far as I can tell, there have been few other studies which demonstrate the sophistication effect. One new study on this is West et al. (forthcoming), "Cognitive Sophistication Does Not Attenuate the Bias Blind Spot."

Here is the abstract:

The so-called bias blind spot arises when people report that thinking biases are more prevalent in others than in themselves. Bias turns out to be relatively easy to recognize in the behaviors of others, but often difficult to detect in our own judgments. Most previous research on the bias blind spot has focu…
John_Maxwell: Have there been any attempts to measure biases in researchers who study biases?

lukeprog: Not that I know of.

[anonymous]: No formal ones I know of, although I'm sure Will Newsome would like that. But Kahneman and Tversky did say that every bias they studied, they first detected in themselves.

TheOtherDave: Unfortunately, the results of all such studies were rejected, due to... well, you know.

"For a true Bayesian, information would never have negative expected utility". I'm probably being a technicality bitch, attacking an unintended interpretation, but I can see bland examples of this being false if taken literally: A robot scans people to see how much knowledge they have and harms them more if they have more knowledge, leading to a potential for negative utility given more knowledge.

"For a true Bayesian, information would never have negative expected utility."

Is this true in general? It seems to me that if a Bayesian has limited information handling ability, then they need to give some thought (not too much!) to the risks of being swamped with information and of spending too many resources on gathering information.

alex_zag_al: Yeah, certainly. The search might be expensive. Or, some of its resources might be devoted to distinguishing the most relevant among the information it receives - diluting its input with irrelevant truths makes it work harder to find what's really important. An interpretation of the original statement that I think is true, though, is that in all these cases, receiving the information and getting a little more knowledgeable offsets the negative utility of whatever price was paid for it. When the combination of search+learning has negative utility, that is always because of the searching part of it - if you kept the searching but removed the learning at the end, it'd be even worse.
beoShaffer: I believe that in this situation "true Bayesian" implies unbounded processing power / logical omniscience.

NancyLebovitz: I suggest that "true Bayesian" is ambiguous enough (this [http://lesswrong.com/lw/ii/conservation_of_expected_evidence/] seems to use it in the sense of a human using the principles of Bayes) that some other phrase -- perhaps "unlimited Bayesian" -- would be clearer.
[anonymous]: The cost of gathering or processing the information may exceed the value of the information, but the information itself always has non-negative value: at worst, you do nothing different, and the rest of the time you make a more informed choice.
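The "at worst, you do nothing different" point can be made concrete with a toy decision problem; the states, actions, and utilities below are invented purely for illustration:

```python
# Two states of the world with a prior, two available actions,
# and a utility table -- all numbers invented for illustration.
prior = {"rain": 0.3, "sun": 0.7}
utility = {
    ("umbrella", "rain"): 5, ("umbrella", "sun"): 2,
    ("no_umbrella", "rain"): -10, ("no_umbrella", "sun"): 10,
}
actions = ["umbrella", "no_umbrella"]

def expected_utility(action, dist):
    return sum(p * utility[(action, state)] for state, p in dist.items())

# Acting on the prior alone: pick the action with the best expectation.
eu_prior = max(expected_utility(a, prior) for a in actions)

# Acting after a free, perfect observation of the state: in each state
# you pick the best action, weighted by how likely that state is.
eu_informed = sum(
    p * max(utility[(a, state)] for a in actions)
    for state, p in prior.items()
)

# Never negative: a max of expectations cannot exceed the
# expectation of per-state maxima.
value_of_information = eu_informed - eu_prior
print(eu_prior, eu_informed, value_of_information)
```

The inequality holds for any such table, not just these numbers; the cost of acquiring the observation is the separate term that can make the overall deal a bad one.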
TheOtherDave: I'm not exactly sure what "a true Bayesian" refers to, if anything, but it's possible that being whatever that is precludes having limited information handling ability.

RichardKennaway: Yes, in this technical sense [http://www.cmp.uea.ac.uk/~jrk/distribution/UtilityOfInformation.pdf]. A true Bayesian has unlimited information handling ability.

alex_zag_al: I think I see that - because if it didn't, then not all of its probabilities would be properly updated, so its degrees of belief wouldn't have the relations implied by probability theory, so it wouldn't be a true Bayesian. Right?

RichardKennaway: Yes, one generally ignores the cost of making these computations. One might try to take it into account, but then one is ignoring the cost of doing that computation, etc. Historically, the "Bayesian revolution" [http://www.google.co.uk/search?q=%22bayesian+revolution%22] needed computers before it could happen. And, I notice, it has only gone as far as the computers allow. "True Bayesians" also have universal priors, that assign non-zero probability density to every logically possible hypothesis. Real Bayesian statisticians never do this; all those I have read deny that it is possible.
Eugine_Nier: It is impossible, even in principle. The only way to have universal priors over all computable universes is if you have access to a source of hypercomputation, but that would mean the universe isn't computable, so the truth still isn't in your prior set.
RichardKennaway: Is that written up as a theorem anywhere?

Eugine_Nier: That depends on how one wants to formalize it.

Given the unbelievable difficulty in overcoming cognitive bias (mentioned in this article and many others), is it even realistic to expect that it's possible? Maybe there are a lucky few who may have that capacity, but what about a majority of even those with above-average intelligence, even after years of work at it? Would most of them not just sort of drill themselves into a deeper hole of irrationality? Even discussing their thoughts with others would be of no help, given the fact that most others will be afflicted with cognitive biases as well. Since t…

orthonormal: My main takeaway from this is that "I know about this bias, therefore I'm more immune to it" is wrong. To be less susceptible to a bias, you need to practice habits that help (like the premortem [http://www.mckinsey.com/insights/strategy/strategic_decisions_when_can_you_trust_your_gut?p=1] as a counter to the planning fallacy), not just know a lot of cognitive science.

Critical Review recently devoted an issue to discussions of this 2006 study. Taber & Lodge's reply to the symposium on their paper is available here.

I think it is a good thing to be humble with yourself, not to argue with yourself. If you are always in self-doubt, you never speak out and learn. If you don't hear yourself, only how 'smart' you sound, you never learn from your mistakes. I try to learn from my - and others' - mistakes, but I think observation of yourself is truly the key to being a rationalist, to removing self-imposed blocks on the path of understanding.

I think it is great that you have such real-life experience, and have the courage to try. Keep living, learning and trying!

(I know this might be off-topic, but this is my first post and I don't know where to start, so I posted somewhere that inspired me to write.)

On a related note to such despicable people: I just had a few minutes' talk with a very old friend of mine who matched this description. I just wanted an update on his situation and to see if the boundless rage and annoyance I experienced then still fit. It's not super relevant, but the exact moment I started writing to him, my hands started shaking and I could feel a pressure on my chest, and my mind started clouding over. It's probably something that's shot into my system, but the exact reason why and what, I don't know. Do any of you happen to know about this…

You don't believe in free will, correct?

I fear that the most common context in which people learn about cognitive biases is also the most detrimental. That is, they're arguing about something on the internet and someone, within the discussion, links them an article or tries to lecture them about how they really need to learn more about cognitive biases/heuristics/logical fallacies etc. What I believe commonly happens then is that people realise that these things can be weapons; tools to get the satisfaction of "winning". I really wish everyone would just learn this in some neutral con…

hairyfigment: Your last sentence is funny, considering I immediately thought: 'If we taught them in school and plenty of bad effects remained, which seems well within the realm of possibility, you might be wishing people learned about fallacies in a context that made them seem more important.'

THIS is the proper use of humility. I hope I'm less of a fanatic and more tempered in my beliefs in the future.