Kaj Sotala asked:

I was wondering, is there an avenue for us non-contributor readers to raise questions we think would be interesting to discuss?

If you have a suggested Overcoming Bias topic you'd like to see discussed, post it in a comment here.  But please don't actually discuss the topic with further comments, just give us the suggestion.  This post is for topic suggestions, not topic discussions.


The effects of relationship bias (bias toward friends, family, lovers, etc.) on our daily decision-making? Does it make sense to believe in anything unconditional (e.g. unconditional love)?


Politics and self-interest - To what extent do people hold normative (e.g. political) views which if implemented would be in their self-interest? Should we change our own opinions in the light of this?

I'll repeat here, then, the original question(s) which prompted that comment: how careful should one be to avoid generalization from fictional evidence [described as a fallacy here, but I'd interpret it as a bias as well, which raises another potentially interesting question: how much overlap is there between fallacies and biases]? When writing about artificial intelligence, for instance, would it be acceptable to mention Metamorphosis of Prime Intellect as a fictional example of an AI whose "morality programming" breaks down when conditions shift to ones its designer had not thought about? Or would it be better to avoid fictional examples entirely and stick purely to the facts?

I don't know if this has been done before and I apologize if I'm suggesting something that's been covered.

Prospect theory tells us that people are more afraid of losing something they already have (hence the "endowment effect") than they are excited about the possibility of gaining more. People are loss averse.

It seems to me that this blocks many promising policy proposals--organ markets, school choice, and so on.

People over-weight the possibility of failure and under-weight the possibility of gain.

"The Devil you know is better than the Devil you don't."

What can be done to Overcome this Bias [sic]?
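The asymmetry described above can be sketched with Kahneman and Tversky's prospect-theory value function. The parameter values below (curvature 0.88, loss-aversion weight 2.25) are Tversky and Kahneman's published 1992 median estimates, used here purely for illustration:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: gains are valued as x**alpha,
    losses as -lam * (-x)**beta. With lam > 1, a loss looms larger
    than an equal-sized gain (loss aversion)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# A 50/50 gamble between gaining and losing $100 is neutral in
# expected dollars, but negative in subjective value, so it is
# refused -- the status quo ("the Devil you know") wins.
gamble = 0.5 * value(100) + 0.5 * value(-100)
print(gamble < 0)
```

On these numbers the loss of $100 carries roughly 2.25 times the subjective weight of the equal gain, which is the shape of the objection that stalls the policy proposals mentioned above.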

How do the residues of thousands of years of monotheism continue to bias even secular Westerners?

Why do people over-extend the correspondence theory of truth into moral questions? Do we over-extend the correspondence theory anywhere else?

Morality is objective in a given reference frame - but this reference frame comes from the same place as our biases - from our evolutionary heritage. If it is good to reject genetically programmed biases, isn't it also good to reject the genetically programmed desires and aversions that give us our sense of morality? If so, how do we avoid total paralysis?

How and why the current reigning philosophical hegemon (reductionistic materialism) is obviously correct, unbiased, rational, and factual, while the reigning philosophical viewpoints of all past societies and civilizations are obviously suspect and based solely on superstitious nonsense and irrational dogma, and how the future will judge us as essentially correct in this matter, unlike how we judge all previous civilizations.

A nice coda would be some well-placed rocks thrown at anyone who would challenge the current paradigm, since they are obviously just fools or liars.

I'm interested in the institutionalization of bias in the justice system. The system as a whole is supposed to be unbiased, but it is based on a competition between people who are biased by definition (i.e. defense lawyers and prosecution lawyers). I wonder if there are reasonable alternatives to this selective implementation of bias.

In previous discussions here of statistical bias, you have considered cases where bias may be acceptable because of trade-offs. But in some other cases, e.g. estimating certain functions of the Poisson distribution's parameter, an unbiased estimator is obviously absurd, giving negative estimates for a quantity that must be positive. The maximum likelihood estimator is a far better choice for common purposes, and of course it is a biased estimator.

Do you think cases where the maximum likelihood estimator differs from the unbiased estimator, or where the unbiased estimator is plain absurd, have any relation to the non-statistical sense of bias, and if so do you have any thoughts on bias in those cases?
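The absurdity mentioned above can be made concrete with the textbook example (a sketch of the standard counterexample, not necessarily the exact case the commenter had in mind): for a single observation X ~ Poisson(λ), the only unbiased estimator of e^(-2λ) is (-1)^X, which outputs -1 for every odd count even though the target always lies in (0, 1]; the MLE e^(-2X) is biased but always a legal value.

```python
import math, random

random.seed(0)
lam = 1.0
theta = math.exp(-2 * lam)   # target quantity e^(-2*lambda), in (0, 1]

def sample_poisson(lam):
    # Knuth's method for drawing a Poisson(lam) variate
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

draws = [sample_poisson(lam) for _ in range(100_000)]

# The unique unbiased estimator of e^(-2*lambda) from one draw is (-1)^X.
unbiased = [(-1) ** x for x in draws]
# Its long-run average is correct...
print(abs(sum(unbiased) / len(unbiased) - theta) < 0.02)
# ...but every individual estimate is +1 or -1, and -1 is absurd
# for a quantity that must be strictly positive.
print(min(unbiased))  # -1

# The MLE, exp(-2*X), is biased but always lands in (0, 1].
mle = [math.exp(-2 * x) for x in draws]
print(min(mle) > 0)
```

Unbiasedness here is purchased at the cost of individual estimates that no sane person would report, which is exactly the tension with the everyday sense of "bias" the question raises.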

tom's comment reminded me of this, which is paraphrased from Theodore Dalrymple. It's not exactly the same thing, but I did find it interesting.

Is there yet a measure of how biased a person's behavior is, suitable for determining the effectiveness of debiasing techniques? The key to a lot of what is discussed here seems to be binding yourself to the data rather than just confirming what you already want to believe. People who want to debias themselves will be tempted to assume that they are doing a good job of it, so it would be good if there were a method they could follow. This might sound similar to the earlier Dojo post, but that seemed far too vague: basically just hanging out with other people who are into debiasing and relying on their judgments to know whether you've progressed.


Are we biased toward more altruism than is socially optimal?

I had this thought the other day. If our altruistic instincts developed during a period of communal property and less organized law enforcement (a system under which altruism might have been more beneficial than it is today), might we be left with some amount of vestigial altruism?

It would be interesting to see more informed criticisms of popular bias beliefs. Are all of the popular bias experiments correct? Are there any commonly cited experiments which purport to measure bias but are influenced significantly by an uncontrolled variable?

Why has no serious effort been mounted to recruit and appeal to women in this forum? Relatedly, what forms of bias are typical of male thinking/communication and what forms are typical of female (or is this a false dichotomy)?


Matthew C's comment appears to be sarcastic, and thus I interpret it as meaning the opposite. Regardless, I will request that a discussion be started on the different sorts of biases that exist in a reductionist (i.e. standard scientific) paradigm. This is a hard one to discuss given everyone's background and the nature of human language. Relatedly, what does a complex systems or emergent paradigm tell us about reductionist biases?


Why does statistical hypothesis testing continue to be used in many research fields despite its very many flaws? Are there biases at work here in that the widespread rejection of hypothesis testing would lead to the trashing of many senior researchers' works? How skeptical should we be of science in general if such shaky methodology is so widely adopted?
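One frequently cited flaw can be shown in a toy simulation: significance testing ignores the base rate of true hypotheses, so when real effects are rare, a large share of "significant" results are false positives. The numbers below (a 10% base rate, a half-standard-deviation effect, samples of 20) are illustrative assumptions, not measurements of any real field:

```python
import random, math
random.seed(1)

def z_test_p(sample, mu0=0.0):
    """Two-sided z-test p-value, assuming known unit variance."""
    n = len(sample)
    z = (sum(sample) / n - mu0) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

true_discoveries = false_discoveries = 0
for _ in range(2000):
    effect_is_real = random.random() < 0.1     # 10% base rate of real effects
    mu = 0.5 if effect_is_real else 0.0
    sample = [random.gauss(mu, 1) for _ in range(20)]
    if z_test_p(sample) < 0.05:                # declared "significant"
        if effect_is_real:
            true_discoveries += 1
        else:
            false_discoveries += 1

# Fraction of "significant" findings that are actually false:
print(false_discoveries / (false_discoveries + true_discoveries))
```

Under these assumptions roughly four in ten "discoveries" are false, even though every test individually controlled its error rate at 5% -- a gap between what p < 0.05 is widely taken to mean and what it actually guarantees.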

How would one counter objections to the assumptions underlying the desire to overcome bias? Does this general attempt to overcome bias present any notable risks of authoritarianism? Also, if any of the distinguished bloggers would like to point me privately to already-existing resources that address these questions, my hailing frequencies are open.

How can we debias our assessments of others' biases? Or get others to debias their assessments of other others' biases?

Voting and other things where the behavior of a single individual has relatively little effect. "A single vote does not matter either way, therefore I shall not waste my time voting" versus "I shall vote, for if everybody thought their votes did not matter we'd be screwed". Or alternatively the same lines of thought when it comes to boycotting big corporations, or donating small sums to charities, et cetera. Which way of thinking is more rational?
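One standard way to frame the question above is the pivotal-voter calculation: your vote only matters if it breaks a tie, so its expected value is (probability of a tie) x (value of your side winning) minus the cost of voting. The dollar figures below are pure assumptions chosen for illustration, and the 50/50 independent-voter model is a strong simplification:

```python
import math

def pivotal_probability(n_other_voters):
    """Chance your vote breaks an exact tie, assuming an even number
    of other voters who each independently vote 50/50 either way.
    Equals C(n, n/2) / 2^n, roughly sqrt(2 / (pi * n)) for large n."""
    n = n_other_voters
    return math.comb(n, n // 2) / 2 ** n

# Toy numbers: your side winning is worth $1,000,000 of (social)
# benefit to you, and voting costs you $10 worth of time.
benefit, cost = 1_000_000, 10
for n in (100, 10_000, 100_000):
    p = pivotal_probability(n)
    print(n, p * benefit - cost > 0)
```

The tie probability shrinks only like 1/sqrt(n) in this model, so if the stakes are large enough, the expected value of voting can stay positive even in big electorates; with more realistic non-50/50 electorates it collapses much faster, which is where the two intuitions in the comment pull apart.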