Today's post, Doublethink (Choosing to be Biased), was originally published on 14 September 2007. A summary:


George Orwell wrote about what he called "doublethink", in which a person holds two contradictory beliefs in mind simultaneously. While some argue that self-deception can make you happier, doublethink will actually lead only to problems.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Human Evil and Muddled Thinking, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

8 comments

You want a comforting lie? You can't handle a comforting lie!

I once said to a friend that I suspected the happiness of stupidity was greatly overrated. And she shook her head seriously, and said, "No, it's not; it's really not."

I envision this same conversation, but about the happiness of meth addiction, with the friend saying that it is not overrated. Then I think that the high, even if it is amazing, is still not worth the lows that come with it. I disagree with the friend for the same reason in both cases.

[anonymous]

Consider the problem in the Least Convenient Possible World. What if the happiness of stupidity really is much better than what smart people have, and there aren't any significant lows? Would you still consider it overrated?

(Personally, I don't know where I stand on this. In the LCPW, there seems to be no incentive whatsoever to be smart, and I would only be hurting myself by remaining so. But in a more realistic hypothetical scenario, stupidity is clearly deficient, because intelligence makes it possible to avoid some of the common pitfalls that less intelligent people fall into, such as playing the lottery.)

Consider the problem in the Least Convenient Possible World. What if the happiness of stupidity really is much better than what smart people have, and there aren't any significant lows?

This is not a valid application of the LCPW. The LCPW is not a world in which the assertion someone is arguing for is simply stipulated to be false. It is one in which all ways of getting undeserved assistance to the desired conclusion are eliminated.

If someone argues that rationality is better than stupidity, the LCPW does not consist of saying "but suppose stupidity were better than rationality?" That is not even an imagined possible world, just a verbally supposed contradiction of the conclusion.

[anonymous]

I agree, but look at beriukay's reply: he/she stated that stupidity would still be undesirable even if there were no lows. This implies that his/her true rejection of self-imposed stupidity was not what he/she initially stated in the grandparent comment. Though my application of the LCPW wasn't entirely correct, all I was trying to do was determine whether "lows" was really beriukay's true rejection.

Thanks for the reminder.

In the least convenient possible world, the problem seems to me to strongly resemble my aversion to wireheading. In both cases, I currently hate the idea of being one of them, and I hate the idea of being surrounded by them. Previously, I have railed against the loss of potential, but that objection seems to apply more to wireheading than to merely being stupid. A bunny rabbit doesn't have much potential for intelligence, even in the best of circumstances.

So there has to be something else that bothers me. After reading Sam Harris's The Moral Landscape (I wouldn't exactly recommend it to anyone), I think my objection rests more on the fact that happiness and well-being are not the same thing, and that I value well-being far more than happiness, in the same way I value health far more than running the fastest mile possible. Obviously, being able to run that fast means you have to be rather healthy, but you can still have a heart attack and die on the spot... and obviously you need to be able to run at least a little, or else you aren't healthy.

Thus, I reject the entire basis for the claim that an extreme level of happiness can be the pinnacle of value: happiness is just one facet of a greater well-being, and seeking this one trait without balancing it against the others is detrimental to your overall well-being.

Now, if we were to find a way (or make a way) for all increases in happiness to have a nondecreasing effect on well-being, my claim would be refuted, and I would have to accept that any way to get happier would be worth the effort. However, I would need some amazingly powerful assurances before I allowed Omega to give me a lobotomy for my own good, which I take to be a fairly daunting practical limitation.

The least convenient possible world is a principle for use in thought experiments. In a hypothetical thought experiment where knowing the truth really is counterproductive to being happy, we can use that principle to investigate the moral nature of truth. In the real world, the one where we actually make decisions that affect our lives, stupidity causes lots of problems.

This post is related: it discusses a rational way for beliefs to seem to cause themselves, and some tips for becoming less reliant on the belief to increase utility.