Doublethink (Choosing to be Biased)

An oblong slip of newspaper had appeared between O'Brien's fingers. For perhaps five seconds it was within the angle of Winston's vision. It was a photograph, and there was no question of its identity. It was the photograph. It was another copy of the photograph of Jones, Aaronson, and Rutherford at the party function in New York, which he had chanced upon eleven years ago and promptly destroyed. For only an instant it was before his eyes, then it was out of sight again. But he had seen it, unquestionably he had seen it! He made a desperate, agonizing effort to wrench the top half of his body free. It was impossible to move so much as a centimetre in any direction. For the moment he had even forgotten the dial. All he wanted was to hold the photograph in his fingers again, or at least to see it.

'It exists!' he cried.

'No,' said O'Brien.

He stepped across the room.

There was a memory hole in the opposite wall. O'Brien lifted the grating. Unseen, the frail slip of paper was whirling away on the current of warm air; it was vanishing in a flash of flame. O'Brien turned away from the wall.

'Ashes,' he said. 'Not even identifiable ashes. Dust. It does not exist. It never existed.'

'But it did exist! It does exist! It exists in memory. I remember it. You remember it.'

'I do not remember it,' said O'Brien.

Winston's heart sank. That was doublethink. He had a feeling of deadly helplessness. If he could have been certain that O'Brien was lying, it would not have seemed to matter. But it was perfectly possible that O'Brien had really forgotten the photograph. And if so, then already he would have forgotten his denial of remembering it, and forgotten the act of forgetting. How could one be sure that it was simple trickery? Perhaps that lunatic dislocation in the mind could really happen: that was the thought that defeated him.

   —George Orwell, 1984

What if self-deception helps us be happy?  What if just running out and overcoming bias will make us—gasp!—unhappy?  Surely, true wisdom would be second-order rationality, choosing when to be rational.  That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, "And now, I will irrationally believe that I will win the lottery, in order to make myself happy."  But we do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You're welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality.  If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting.  I don't mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can't know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.

Be irrationally optimistic about your driving skills, and you will be happily unconcerned where others sweat and fear.  You won't have to put up with the inconvenience of a seatbelt.  You will be happily unconcerned for a day, a week, a year.  Then CRASH, and spend the rest of your life wishing you could scratch the itch in your phantom limb.  Or paralyzed from the neck down.  Or dead.  It's not inevitable, but it's possible; how probable is it?  You can't make that tradeoff rationally unless you know your real driving skills, so you can figure out how much danger you're placing yourself in.  You can't make that tradeoff rationally unless you know about biases like neglect of probability.
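(To make the arithmetic concrete, with every number invented purely for illustration: suppose an honest estimate puts your chance of a serious crash at 0.002 per year, the disutility of death or paralysis at 5,000,000 units, and the annual nuisance of buckling up at 10 units. Then driving unbelted costs an expected 0.002 × 5,000,000 = 10,000 units a year, a thousand times the nuisance; the self-deceived driver, plugging in a crash probability near zero, cheerfully concludes the opposite.)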

No matter how many days go by in blissful ignorance, it only takes a single mistake to undo a human life, to outweigh every penny you picked up from the railroad tracks of stupidity.

One of the chief pieces of advice I give to aspiring rationalists is "Don't try to be clever."  And, "Listen to those quiet, nagging doubts."  If you don't know, you don't know what you don't know, you don't know how much you don't know, and you don't know how much you needed to know.

There is no second-order rationality.  There is only a blind leap into what may or may not be a flaming lava pit.  Once you know, it will be too late for blindness.

But people neglect this, because they do not know what they do not know.  Unknown unknowns are not available. They do not focus on the blank area on the map, but treat it as if it corresponded to a blank territory.  When they consider leaping blindly, they check their memory for dangers, and find no flaming lava pits in the blank map.  Why not leap?

Been there.  Tried that.  Got burned.  Don't try to be clever.

I once said to a friend that I suspected the happiness of stupidity was greatly overrated.  And she shook her head seriously, and said, "No, it's not; it's really not."

Maybe there are stupid happy people out there.  Maybe they are happier than you are.  And life isn't fair, and you won't become happier by being jealous of what you can't have.  I suspect the vast majority of Overcoming Bias readers could not achieve the "happiness of stupidity" if they tried.  That way is closed to you. You can never achieve that degree of ignorance, you cannot forget what you know, you cannot unsee what you see. 

The happiness of stupidity is closed to you.  You will never have it short of actual brain damage, and maybe not even then.  You should wonder, I think, whether the happiness of stupidity is optimal—if it is the most happiness that a human can aspire to—but it matters not.  That way is closed to you, if it was ever open.

All that is left to you now, is to aspire to such happiness as a rationalist can achieve.  I think it may prove greater, in the end. There are bounded paths and open-ended paths; plateaus on which to laze, and mountains to climb; and if climbing takes more effort, still the mountain rises higher in the end.

Also there is more to life than happiness; and other happinesses than your own may be at stake in your decisions.

But that is moot.  By the time you realize you have a choice, there is no choice.  You cannot unsee what you see.  The other way is closed.

 

Part of the Against Doublethink subsequence of How To Actually Change Your Mind

Next post: "No, Really, I've Deceived Myself"

Previous post: "Singlethink"

162 comments

I am not an island. There are a few good ways to set up a life of bounded bias, or to make a rational decision about whether or not to engage in bias. I am a social creature, and as such am acutely aware that most of my decisions are made from a mix of peer pressure, groupthink, discussions with friends, unconscious reasoning, and whatever media I may have managed to digest in the past few hours.

I have several friends. One of them, Steve, is a dedicated rationalist but a genuinely kind person. I have given him these instructions: "Please give me unsolicited advice, and interrupt me if you see me doing something stupid or immoral, but only if you think I could emotionally cope with the reasons why my action was immoral." Another friend, Dave, is something of a spiritualist, currently some form of Wiccan or other. Also a kind person, he has explicit instructions: "Please give me unsolicited advice and help me out if I seem to be unhappy. Give me the course of action you think would make me happiest, so long as it doesn't conflict with what Steve has told me to do."

When I have to get a good think on about something, I call Steve and Dave separately, then call them both together, and compare the three suggestions. What is interesting is that I have done this often enough that I can often predict what each will say, in a sort of mental role-taking that is much easier if you imagine it is not you having such thoughts.

As such I have achieved some bounded bias: I am bigoted enough not to be a social pariah in America (one must be somewhat prejudiced against someone to survive socially, even if it's only against bigots and Republicans), but rational enough not to fall for gambler's fallacies, and at least bright enough to nod along when a modus ponens is explained to me using small words for the fourteenth time.

It's not perfect, but it's mine. Most people outsource their morality anyway, from "what would Jesus do," to local faith leaders, to calling their parents for advice; I'm just a little more structured and deliberate about it. Through this system I can have someone hold an unbiased view, speak to someone with a biased view, and decide which is the better view to have, without having to unsee everything.

Yes, I realize Steve won't be perfectly unbiased every time, or perfectly rational, or make the right choices; but then again, neither would I, and there's nothing special about me making my mistakes.

Yes, I realize Steve won't be perfectly unbiased every time, or perfectly rational, or make the right choices; but then again, neither would I, and there's nothing special about me making my mistakes.

A good principle in general. If more people realized this, the world would be a better place, I should think.

Hmm, I wonder if there's some snappy Wise-Saying-esque way of formulating this.

"I know I can never be perfect, but that's certainly not going to stop me from trying." --Sean Coincon

:D

Perhaps I am just contrarian in nature, but I took issue with several parts of her reasoning.

"What you're saying is tantamount to saying that you want to fuck me. So why shouldn't I react with revulsion precisely as though you'd said the latter?"

The real question is why should she react with revulsion if he said he wanted to fuck her? The revulsion is a response to the tone of the message, not to the implications one can draw from it. After all, she can conclude with >75% certainty that any male wants to fuck her. Why doesn't she show revulsion simply upon discovering that someone is male? Or even upon finding out that the world population is larger than previously thought, because that implies that there are more men who want to fuck her? Clearly she is smart enough to have resolved this paradox on her own, and posing it to him in this situation is simply being verbally aggressive.

"For my face is merely a reflection of my intellect. I can no more leave fingernails unchewed when I contemplate the nature of rationality than grin convincingly when miserable."

She seems to be claiming that her confrontational behavior and unsocial values are inseparable from rationality. Perhaps this is only so clearly false to me because I frequent lesswrong.

"If it was electromagnetism, then even the slightest instability would cause the middle sections to fly out and plummet to the ground... By the end of class, it wasn't only sapphire donut-holes that had broken loose in my mind and fallen into a new equilibrium. I never was bat-mitzvahed."

This seems to show an incredible lack of creativity (or dare I say it, intelligence), that she would be unable to come up with a plausible way in which an engineer (never mind a supernatural deity) could fix a piece of rock to appear to be floating in the hole in a secure way. It's also incredible that she would not catch onto the whole paradox of omnipotence long before this, a paradox with a lot more substance.

"he eventual outcome would most likely be a compromise, dependent, for instance, on whether the computations needed to conceal one's rationality are inherently harder than those needed to detect such concealment."

Whoah, whoah, since when did cheating and catching it become a race of computation? Maybe an arms race of finding and concealing evidence, but when does computational complexity enter the picture? Second of all, the whole section about the Darwinian arms race makes the (extremely common) mistake of conflating evolutionary "goals" and individual desires. There is a difference between an action being evolutionarily advantageous, and an individual wanting to do it. Never mind the whole confusion about the nature of an individual human's goals (see http://lesswrong.com/lw/6ha/the_blueminimizing_robot/).

One side point is that the way she presents it ("Emotions are the mechanisms by which reason, when it pays to do so, cripples itself") is essentially presenting the situation as Newcomb's Paradox, and claiming that emotions are the solution, since her idea of "rationality" can't solve it on its own.

"By contrast, Type-1 thinking is concerned with the truth about which beliefs are most advantageous to hold."

But wait... the example given is not about which beliefs are most advantageous to hold... it's about which beliefs it's most advantageous to act like you hold. In fact, if you examine all of the further Type-X levels, you realize that they all collapse down to the same level. Suppose there is a button in front of you that you can press (or not press). How could it be beneficial to believe that you should push the button, but not beneficial to push the button? Barring, of course, supercomputer Omegas which can read your mind. You're not a computer. You can't get a core dump of your mind which will show a clearly structured hierarchy of thoughts. There's no distinction to the outside world between your different levels of recursive thoughts.

I suppose this bothered me a lot more before I realized this was a piece of fiction and that the writer was a paranoid schizophrenic (the former applying to most of the rest of what I am saying).

"Ah, yet is not dancing merely a vertical expression of a horizontal desire?"

No, certainly not merely. Too bad Elliot lacked the opportunity (and probably the quickness of tongue) to respond.

"But perplexities abound: can I reason that the number of humans who will live after me is probably not much greater than the number who have lived before, and that therefore, taking population growth into account, humanity faces imminent extinction?..."

Because I am overly negative in this post, I thought I'd point out the above section, which I found especially interesting.

But the whole "Flowers for Algernon" ending seemed a bit extreme...and out of place.

she can conclude with >75% certainty that any male wants to fuck her.

... she can? Really? That seems pretty damn high for something as variable as taste in partners.

EDIT: wait, that's a reference to how many guys on a university campus will accept offers of one night stands, right? It's still too high, or too general.

It's also irrelevant to the point I was making. You can point to different studies giving different percentages, but however you slice it a significant portion of the men she interacts with would have sex with her if she offered. So maybe 75% is only true for a certain demographic, but replace it with 10% for another demographic and it doesn't make a difference.

Oh, it certainly doesn't affect your point. I agree with your point completely. I was just nitpicking the numbers.

This post and the linked story scared the heck out of me. Thanks for the thought-provoking material.

Relevant: Paul Graham, Why Nerds are Unpopular

Paul Graham argues that a nerd is anyone not primarily focused on popularity, and that nerds lose the competitive and zero-sum game of popularity to those who aren't distracted by things like studying. After nerds enter the real world, however, they can form their own special-interest communities and often do very well.

Regarding Aaronson's piece, ditziness as signaling makes sense. However, the protagonist failed to see other options: she could have "won" by making the first moves to date an attractive but passive/malleable and socially clueless boy. She could have really "won" by stringing along several passive/malleable/clueless boys. Instead, she sold her soul to stay with the next random guy who asked her out after her "realization", because being alone was more painful. She didn't realize that her understanding of evolutionary theory and rationality failed to make up for her lack of domain knowledge about dating/relationships.

I suspect the vast majority of Overcoming Bias readers could not achieve the "happiness of stupidity" if they tried. That way is closed to you. You can never achieve that degree of ignorance, you cannot forget what you know, you cannot unsee what you see.

The happiness of stupidity is closed to you. You will never have it short of actual brain damage, and maybe not even then. You should wonder, I think, whether the happiness of stupidity is optimal—if it is the most happiness that a human can aspire to—but it matters not. That way is closed to you, if it was ever open.

All that is left to you now, is to aspire to such happiness as a rationalist can achieve.

So, to be clear, you don't think that such neurohacking as presented in the story is possible?

That said, I think you've found a pretty convincing argument that we shouldn't accept the tradeoff, even if it's available. That is one scary piece of writing.

...you will think to yourself, "And now, I will irrationally believe that I will win the lottery, in order to make myself happy." But we do not have such direct control over our beliefs. You cannot make yourself believe the sky is green by an act of will.

In my experience, this is not true.

My father was a dentist, and when I was 7 he learned hypnosis to use to anesthetise his patients. Of course he practiced on me while he was learning. (As it turned out, he did successful anesthesia with it for a few years before people started spreading stories that hypnosis was dangerous mind-control and he quit.)

With posthypnotic suggestion people can easily believe things that they have no reason to believe, remember things they did not experience, and ignore their senses up to a point. I've done it. It all feels real.

I learned to hypnotise people a little, and I learned how to do it on myself. It certainly can be done. You do have that control over your beliefs, if you're willing to use it.

Which is not to say it's a good idea. IME the main time it's useful to make yourself believe something is when you have nothing to lose by burning your bridges, when you lose everything anyway if the belief is wrong. Then you might as well believe it wholeheartedly.

I've read that interest in hypnosis has something like an eleven-year cycle. People start to think there's something interesting there. They start studying it, and get some fascinating results that look in some ways powerful. Then as they keep studying they find that all the unexpected things people can do under hypnosis they can also do without hypnosis. And then they start to see that a lot of people are basically walking around hypnotised a lot of the time. They start to wonder what exactly they're studying, and they quit, and after the subject lies fallow awhile more people get interested and it starts again.

Basically all it takes for hypnosis is that the person relax and listen uncritically. If they're willing to believe what they're told, they're hypnotised. All the peculiar abilities people sometimes display when told to under hypnosis are things they could do but normally don't believe they can do. When they give up their scepticism they go ahead and do their best instead of doubting themselves and hesitating. They're willing to believe delusions for somebody they trust, and when the limits of the trust show up or they get emphatic evidence against the delusion, then they rethink.

You really can deceive yourself. You can build false memories and believe them. You can make the sky look a little green, particularly on a cloudy day, and you can build on that until it looks pretty green -- provided the idea of a green sky doesn't offend you too much. If you believe it's impossible you can't see it. If it's "I didn't know that was even possible, I wonder why it's happening now?" then you can.

These are things that anybody can learn to do. But I mostly agree with your arguments that it is not generally a useful skill. If I get a toothache I don't anesthetise it until after I get my dentist appointment, and if I miss the appointment the pain comes back. Pain is your signal that something is going wrong with your body, and in general it's a bad idea to ignore that.

False memories are horrifyingly easy to induce. Here is a Scientific American story on the subject from 1997, and here is a scary story from an ex-Scientologist about how to induce false memories using Scientology auditing. "Up to this day, I intellectually know that this story was a fiction written by a friend of mine, but still I have it in vivid memory, as if I was the very person that had experienced it. I actually can't differentiate this memory from any other of my real memories, it still is as valid in my mind as any other memory I have."

Human memories are untrustworthy. This leads to a philosophical dilemma about whether or not to trust your memory, and how much, and what you're supposed to use if you can't trust your memory.

Not everyone can be hypnotized. About a quarter of people can't be hypnotized, according to research at Stanford.

I've tried to be hypnotized before and it didn't work. I think I'm just not capable of making myself that open to suggestion, even though I would have liked to have been hypnotized.

I heard from one of my psychology professors that those on the extreme ends of the IQ spectrum (both high and low) have more trouble being hypnotized, but I'm not sure if this is actually true. The Stanford research showed that hypnotizability wasn't correlated with any personality traits, but I probably wouldn't consider IQ a personality trait.

What if self-deception helps us be happy? What if just running out and overcoming bias will make us - gasp! - unhappy?

You are aware, I'm sure, of studies that connect depression and freedom from bias, notably overconfidence in one's ability to control outcomes.

You've already given one answer: to deliberately choose to believe what our best judgement tells us isn't so would be lunacy. Many people are psychologically able to fool themselves subtly, but fewer are able to deliberately, knowingly fool themselves.

Another answer is that even though depression leads to freedom from some biases and illusions, the converse doesn't seem to apply. Overcoming bias doesn't seem to lead to depression. I don't get the impression that a disproportionate number of people on this list are depressed. In my own experience, losing illusions doesn't make me feel depressed. Even if the illusion promised something desirable, I think what I have usually felt was more like intellectual relief, "So that's why (whatever was promised) never seemed to work."

Agreed. I always feel profoundly relieved and even moderately triumphant.

I can even experience a slight stroke of euphoric lunacy upon the shattering of my delusions. Somehow the world seems to burn brighter without the blurry lenses that biases provide.

I'd heard of the connection between depression and more accurate perceptions (notably, more accurate predictions due to less overconfidence), but I wasn't aware of the causal direction. It had been portrayed to me as being that the improved perception of reality was the cause of the depression. Or maybe I just mistakenly inferred it and didn't notice. I didn't know it actually went the other way, though now that I think about it, that actually makes a lot of sense.

Personally, I find that improved map-territory correspondence leads to more happiness, at least the improved rationality which results from learning Rational Emotive Behavior Therapy. It's not just losing illusions that helps. It's better understanding yourself, better understanding what is actually causing your emotions, and realizing that your locus of control regarding your emotions is more internal than external. It's liberating to be able to stop an emotional reaction in its tracks, analyze it, recognize it as following from an irrational belief, and consequently substitute a rational emotion for the irrational one. It helps especially with anger and anxiety, as those have a tendency to result from irrational, dogmatic beliefs.

Eliezer, do you concede that there is no difference between "believing you're happy" and "really being happy"?

No. There is a difference between believing you love your stepchildren and loving your stepchildren, between believing you're deeply upset about rainforests and being deeply upset about rainforests, and between believing you're happy and being happy.

As soon as you turn happiness into an obligatory sign of spiritual health, a sign of virtue, people will naturally tend to overestimate their happiness.

Falsifiable difference? Put 'em in an fMRI or use other physiological indicators.

Perhaps the TED lecture by Dan Gilbert might cast some illumination upon whether there is a difference between believing you're happy and really being happy.

http://www.ted.com/talks/lang/eng/dan_gilbert_asks_why_are_we_happy.html

Sounds to me like what's being discussed is: is synthetic happiness the same as happiness? Dan Gilbert argues that they are the same.

I didn't downvote, but since the post is mostly just a link to a video my guess is that it's somebody signaling that the video isn't worth watching.

When the main content of a comment is a link, votes get used to indicate whether the link is worth following. This is especially relevant when the link is to a video, which involves a large time commitment as it is not skimmable. If the content of the video doesn't justify the time commitment, then downvotes tell other readers not to waste their time on the link (and warn the poster not to waste people's time with such links).

In my opinion, it's worth watching as a best presentation of a wrong idea that doesn't attempt to engage the correct one. It's also worth watching because it compiles interesting true facts and merely draws wrong conclusions from them, though correct conclusions would also be interesting and some of his intermediate conclusions are fine.

Please don't ask this for every comment of yours that is downvoted, at least until you can reliably make comments that aren't downvoted. It clutters the recent comment threads.

(ETA: I posted this in response to two of the same query being issued in a row. I don't object to people asking why they were downvoted when it's occasional.)

Perhaps you should read the

http://lesswrong.com/lw/2ku/welcome_to_less_wrong_2010/

page, where it is stated

"However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.)"

I'm doing what is suggested as the etiquette.

Goodness, I for one would dislike it if people started doing that all the time (sometimes, it says, which is an apparently informative way of saying "Between 0 and 100% of the time").

The downside of doing it often is that it makes people feel like you're asking for an explanation without putting in any noticeable effort to understand. Writing things that are nice to read generally does take effort. I would recommend only asking if you are genuinely confused after a good sixty seconds of uninterrupted thought on how other people could have perceived your post. And, of course, lurking moar is good advice.

Fair enough, Manfred; I respect your feeling of dislike on this position, but I disagree with its lack of rationality.

I did put in more than 60 seconds of effort trying to understand why it's a -1, and couldn't come up with something that didn't include my own bias. So I wanted to both understand what the -1 was for and test to see if my inclination is true or not. So far my bias is telling me it's an example of "have a go at the new guy on the block" - I hold to this only very lightly, and will enjoy being proven incorrect by having the -1 explained.

It's commonly accepted that the most challenging time for a new group member is their beginning with the group, and it's also known that constructive feedback helps with that challenge.

Does a member of a rational group want to provide rational feedback? Observationally, quite a few do not.

If I never (or rarely) question the -1, or never or rarely receive any more feedback than the -1, then I will struggle, and may never understand what the -1 is for. I consider myself to be intellectually honest in asking "what's wrong with this?", because during the process of writing the post I am already asking "what's wrong with this?" So perhaps someone with more knowledge gives me a -1, and I'd appreciate being informed what's wrong with it, for in being informed I can potentially implement better self-editing procedures; that is, I can improve my rationality.

Now if they don't have the time to answer that question, OK, I'll consider on my own what the -1 is for (again!) and then it's more likely I'll come up with an answer that has some amount of "reasoning" based upon my own biases. Now the sequences I've read so far imply that biases are something people should attempt to perceive and challenge, so I believe that I am being consistent with the site's inclination towards rationality by asking the question, both to challenge and improve my own understanding and to do the same for the person who has given me a -1, and indeed for those who witness the exchange as well.

couldn't come up with something that didn't include my own bias.

What does this mean? That you couldn't come up with something that didn't include the other person's being stupid or innately evil?

Does a member of a rational group want to provide rational feedback?

Oh, my word!

the -1

That does not represent a systematic negative reaction to your post or even consensus disagreement.

I'll work on how to get quotes up on this site, till then...

Lessdazed asks "What does this mean? That you couldn't come up with something that didn't include the other person's being stupid or innately evil?" I've already answered what it means; see the post you replied to.

Thanks for the link "oh, my word!"

"That does not represent a systematic negative reaction to your post or even consensus disagreement" I agree. Each -1 represents only a single persons negative reaction to a post.

I think that asking why a comment was downvoted would be legitimate even more than two times in a row, were the downvoted comments downvoted more than once.

For comments that were only downvoted once, it is not usually a question worth asking. So I agree with the literal reading of the original "don't ask this for every comment of yours that is downvoted" more than the clarification.

It's probably because Gilbert conflates happiness and utility.

I really can't figure that out myself. The comment doesn't seem to be annoying, irrelevant, rude or stupid. (Dan Gilbert is wrong all the same.)

Um, there are readers of this blog, and there are people who enjoy the "happiness of stupidity" (which is not the same as just having a low IQ; it involves other personality traits as well). I don't think there's much overlap between those two groups. But they are far from being the only two groups in the world, and there is no dichotomy between them.

This is interesting. When I first discovered LW, I was reading The Praise of Folly by Erasmus. He argues, among other things, that all emotions and feelings that make life worthwhile are inherently embedded in stupidity. Love, friendship, optimism, and happiness require foolishness to work. Now, it is very hard to compare a sixteenth-century satirical piece with a current rational argument, but I have observed that intelligence and stupidity don't seem to be mutually exclusive. From where comes your assumption that intelligent, rational people can't be stupid? Emotions don't tend to be rational, and under the force of a strong one like love even the most intelligent and rational person can turn into an optimistic fool, sure that their loved one is infinitely more trustworthy than the average human, and that statistics on adultery don't apply in this case. Should you try to overcome the bias of strong emotions? Can you overcome it at all? I have never seen someone immune to it. So maybe the happiness of stupidity is still available to all of us.

I'm through with truth.

I never had a scientific intuition. In college, I once saw a physics demonstration with a cathode ray tube -- moving a magnet bent the beam of light that showed the path of the electrons. I had never seen electrons before and it occurred to me that I had never really believed in the equations in my physics book; I knew they were the right answers to give on tests, but I wouldn't have expected to see them work.

I'm also missing the ability to estimate. Draw a line on a sheet of paper; put a dot where 75% is. Then check if you got it right. I always get that sort of thing wrong. Arithmetic estimation is even harder. Deciding how to bet in a betting game? Next to impossible.

Whatever mechanism it is that matches theory to reality, mine doesn't work very well. Whatever mechanism derives expectations about the world from probability numbers, mine hardly works at all. This is why I actually can double-think. I can see an idea as logical without believing in it.

A literate person cannot look at a sentence without reading it. But a small child, just learning to read, can look at letters on a page without reading, and has to make an extra effort to read them. In the same way, a bad rationalist can see that an idea is true, without believing it. I can read about electromagnetism and still not expect to see the beam in the cathode ray tube bend. I spent ten years or so thinking "Isn't it odd that the best arguments are on the atheist side?" without once wondering whether I should be an atheist.

Should I break down that barrier? I'm not sure. I'd do it if it would allow me to make money, I think. But not if it came at the cost of some kind of screaming Cthulhu horror.

You know what I really wish I had? Team spirit. Absolute group loyalty. Faith. Patriotism. The sense of being in the right. In Hoc Signo Vinces. I have fleeting glimpses of it but it doesn't last. I want it enough that I keep fantasizing about joining the Army because it might work. I always wanted to be a fanatic, and my brain would never do it. But I'm starting to wonder if that's hackable; I'm sure enough sleep deprivation and ritual would do it.

Should I break down that barrier? I'm not sure. I'd do it if it would allow me to make money, I think. But not if it came at the cost of some kind of screaming Cthulhu horror.

You know what I really wish I had? Team spirit. Absolute group loyalty. Faith. Patriotism. The sense of being in the right. In Hoc Signo Vinces. I have fleeting glimpses of it but it doesn't last. I want it enough that I keep fantasizing about joining the Army because it might work. I always wanted to be a fanatic, and my brain would never do it. But I'm starting to wonder if that's hackable; I'm sure enough sleep deprivation and ritual would do it.

Absolute group loyalty is much more likely to lead you to a screaming Cthulhu horror than the pursuit of truth is. Especially if it comes from a combination of ritual and sleep deprivation.

Ok, worth thinking about.

I still want it. At times I really want victory, not just a normal life. Even though "normal" is all a person should really expect.

I never had a scientific intuition. In college, I once saw a physics demonstration with a cathode ray tube -- moving a magnet bent the beam of light that showed the path of the electrons. I had never seen electrons before and it occurred to me that I had never really believed in the equations in my physics book; I knew they were the right answers to give on tests, but I wouldn't have expected to see them work.

Intuitively connecting mathy physics to reality isn't the default; you need to watch demonstrations and conduct thought experiments to make those connections. Your intuition got better that day.

You talk about belief the way popular culture talks about love: as some kind of external influence that overcomes your resistance.

And belief can be like that, sure. But belief can also be the result of doing the necessary work.

I realize that's an uncomfortable idea. But it's also an important one.

Relatedly, my own thoughts on the value of truth: when the environment is very forgiving and even suboptimal choices mostly work out to my benefit, the cost of being incorrect a lot is mostly opportunity cost. That is, things go OK, and even get better sometimes. (Not as much better as they would have gotten had I optimized more, but still: better.)

I've spent most of my life in a forgiving environment, which makes it very easy to adopt the attitude that having accurate beliefs isn't particularly important. I can go through life giving up lots of opportunities, and if I just don't think too much about the improvements I'm giving up I'll still be relatively content. It's emotionally easy to discount possible future benefits.

Even if I do have transient moments of awareness of how much better it can be, I can suppress them by thinking about all the ways it can be worse and how much safer I am right where I am, as though refusing to climb somehow protected me from falling.

The thing is: when the environment is risky and most things cost me, the cost of being incorrect is loss. That is, things don't go OK, and they get worse. And I can't control the environment.

It's emotionally harder to discount possible future losses.

I was always under the impression that a sort of "work" can lead you to emotionally believe things that you already know to be true in principle. I suspect that a lot of practice in actually believing what you know will eventually cause the gap between knowing and believing to disappear. (Sort of the way that practice in reading eventually produces a person who can't look at a sentence without reading it.)

For example, I imagine that if you played some kind of betting game every day and made an effort to be realistic, you would stop expecting that wishing really hard for low-probability events could help you win. Your intuition/subconscious would eventually sync up with what you know to be true.

(nods) That's been my experience.

Similarly: acting on the basis of what I believe, even if my emotions aren't fully aligned with those beliefs (for example, doing things I believe are valuable even if they scare me, or avoiding things I believe are risky even if they feel really enticing), can often cause my emotions to change over time.

But even if my emotions don't change, my beliefs and my behavior still do, and that has effects.

This is particularly relevant for beliefs that are strongly associated with things like group memberships, such as in the atheism example you mention.

I was always under the impression that a sort of "work" can lead you to emotionally believe things that you already know to be true in principle.

I strongly associate this with Eliezer's description of the brain as a cognitive engine that needs to do a certain amount of thermodynamic work to arrive at a certainty level - and that reasoned and logical conclusions that you 'know' fail to produce belief (enough certainty to act on knowledge) because they don't make your brain do enough work.

I imagine that forcing someone to deduce bits of probability math from earlier principles and observations, then have them use it to analyze betting games until they can generalise to concepts like expected value, would be enough work to have them believe probability theory.

Should I break down that barrier? I'm not sure. I'd do it if it would allow me to make money, I think. But not if it came at the cost of some kind of screaming Cthulhu horror.

Not to other-optimise, but yes.

As far as I can tell, the chances of encountering a true idea that is also a Lovecraftian cosmic horror are below the vanishing point for human brains. (There aren't neurons small enough to accurately reflect the tiny chances, etc.)

It will also help you make money. Example: I received a promotion for demonstrating my ability to make more efficient rosters. This ability came from googling "scheduling problem" and looking at some common solutions, recognising that GRASP-type (page 7) solutions were effective and probably human-brain-computable - and then when I tried rostering, I intuitively implemented a pseudo-GRASP method.

That "intuitively implemented" bit is really important. You might not realise how much you rely on your intuition to decide for you, but it's a lot. It sounds like taking a lot of theory and jamming it into your intuition is the hard part for you.

Tangentially, how do you feel about the wisdom of age and the value of experience in making decisions?

I think wisdom and experience are pretty good things -- not sure how that relates though.

And "screaming Cthulhu horror" was just a cute phrase -- I don't literally believe in Lovecraft. I just mean "if rationality results in extreme misery, I'll take a pass."

I think wisdom and experience are pretty good things -- not sure how that relates though.

Some people I have encountered struggle with my rationality because I often privilege general laws derived from decision theory and statistics over my own personal experience - like playing tit-for-tat when my gut is screaming defection rock, or participating in mutual fantasising about lottery wins but refusing to buy 'even one' lottery ticket. I have found that certain attitudes towards experience and age-wisdom can affect a person's ability to tag ideas with 'true in the real world' - that reason and logic can only achieve 'true but not actually applicable in the real world'. It was a possibility I thought I should check.

And "screaming Cthulhu horror" was just a cute phrase -- I don't literally believe in Lovecraft.

I assumed it was a reference to concepts like Roko's idea. As for regular extreme misery, yes, there is a case for rationality being negative. You would probably need some irrational beliefs (that you refuse to rationally examine) that prevent you from taking paths where rationality produces misery. You could probably get a half-decent picture of what paths these might be from questioning LessWrong about it, but that only reduces the chance - still a consideration.

I'm also missing the ability to estimate. Draw a line on a sheet of paper; put a dot where 75% is. Then check if you got it right. I always get that sort of thing wrong. Arithmetic estimation is even harder. Deciding how to bet in a betting game? Next to impossible.

Whatever mechanism it is that matches theory to reality, mine doesn't work very well. Whatever mechanism derives expectations about the world from probability numbers, mine hardly works at all. This is why I actually can double-think. I can see an idea as logical without believing in it.

Congratulations. You're just like most humans.

Well, then why does he say self-delusion is impossible? It's not only possible, it's usual.

I am under the impression that many of Eliezer Yudkowsky's early sequence posts were written based on (a) theory and (b) experience with general-artificial-intelligence Internet posters. It's entirely possible that his is a correct deduction only on that weird WEIRD group.

I wasn't talking about that aspect (although I think he's wrong there also) but just about the aspect of not doing a good job at things like estimating or mapping probabilities to reality.

I think it's really the same thing. Mapping probabilities to reality is sort of the quantitative version of matching degree of belief to amount of evidence.

Possibly taboo self-delusion? I'm not sure that's what he means. Self-delusion in this context seems to mean something closer to deliberately modifying your confidence in a way that isn't based on evidence.

Why would you expect it to come at the cost of some kind of screaming Cthulhu horror?

I'm not sure. It's just that if it did I wouldn't go for it.

I know one person who's really well calibrated with probability, due to a lot of practice with poker and finance. When something actually is an x% probability, he actually internalizes it -- he really expects it to happen x% of the time. He's 80% likely to be right about something if he says he has an 80% confidence.

He doesn't seem too bad off. Busy and stressed, yes, but not particularly sad. Cheerful, even.

Draw a line on a sheet of paper; put a dot where 75% is. Then check if you got it right.

I tried that one and got it just about spot on. If you had asked me to estimate 67%, now that may have been tricky. Estimating half twice in your head is kind of easy.

If you had asked me to estimate 67%, now that may have been tricky.

Move your estimation point until half the big side is the same as the little side. (Although I've practiced enough to do halves, thirds, and fifths pretty well, so I might just be overgeneralizing my experience.)

Move your estimation point until half the big side is the same as the little side. (Although I've practiced enough to do halves, thirds, and fifths pretty well, so I might just be overgeneralizing my experience.)

Damn. I chose two random numbers and made a probability out of them. It seems I picked one of the easy ones too! :)

And yes, that algorithm does seem to work well for thirds. I lose a fair bit of accuracy but it isn't down to 'default human estimation mode' level.
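(For what it's worth, the algebra behind the halving trick, my own working rather than anything from the thread: if the dot sits at fraction $x$ along the line, the big side has length $x$ and the little side $1 - x$, so setting half the big side equal to the little side gives $x/2 = 1 - x$, i.e. $x = 2/3 \approx 67\%$.)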

Should I break down that barrier? I'm not sure. I'd do it if it would allow me to make money, I think. But not if it came at the cost of some kind of screaming Cthulhu horror.

This sounds like worrying about tripping over a conceptual basilisk. They really are remarkably rare unless your brain is actually dysfunctional or you've induced a susceptibility in yourself. Despite the popularity of the motif of harmful sensation in fiction, I know of pretty much no examples.

Surely, true wisdom would be second-order rationality, choosing when to be rational. ... You can't know the consequences of being biased, until you have already debiased yourself. And then it is too late for self-deception. The other alternative is to choose blindly to remain biased, without any clear idea of the consequences. This is ... willful stupidity.

This isn't quite fair. While it is true that you couldn't know the detailed consequences of being biased, you could make a rational judgment under uncertainty, given what you do know. And it should be possible for your best judgment in this situation to be that you are better off biased. Of course this mere possibility does not mean that you are in fact better off being biased.

"Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen."

Have you talked to any religious people lately? "Oh, the tornado ripped my neighbors house off the foundations, but we were spared. I guess God was looking out for us!"

Could anyone say that without willfully blinding themselves? Do they really think they are better people than their neighbors, and that God moved the tornado away from their house? Yet you hear stuff like this all the time. And I think they really believe it.

The ability to delude ourselves seems to be one of our main survival traits. Rational people would never take the stupid chances that result in progress. Evolution has favored a species that buys lottery tickets.

Forgive me, Master Eliezer, for I have sinned.

I have come to realize that inside my mind is not merely self-delusion, but a full-blown case of doublethink. There are two mutually exclusive statements that I simultaneously hold to be unquestionably true. Here they are:

1) I should not cause suffering to others.
2) Only my own happiness really matters.

I can even explain this doublethink. I am naturally selfish, but society makes me be good. I could try to believe that only I matter, and do good things only for the show, but that strategy doesn't work for most people. Being good is too complex.

This doublethink creates interesting effects. When I read about scope insensitivity, I wondered if that's really a bias, or just apathy masquerading as concern. I'd probably give the same amount to save five birds as I would to save Atlantis from sinking. Both are social acts.

I also wonder about coherent extrapolated volition. What will it find when it extrapolates us? That we all want the whole pie? That we would gladly exterminate everyone else if we could get away with it?

Eliezer, we are in essence talking about a value of info calculation. Yes, such a calculated info value rises with rare important things you might know if you had the info. But even so it is not guaranteed that info will be worth the cost. Similarly, it is not guaranteed that our choosing to avoid bias will be worth the costs.

It seems to me simpler to just say that, given our purposes, we judge overcoming our biases to in fact be cost-effective on the topics we emphasize here. The strongest argument for that seems to me that we emphasize topics where our evolved judgments about when we can safely be biased are the least likely to be reliable guides to social, as opposed to personal, value.

While it is true that you couldn't know the detailed consequences of being biased, you could make a rational judgment under uncertainty, given what you do know.

Yes, but for it to be a rational judgment under uncertainty, you would have to take into account the unknown unknowns, some of which may be Black Swans (where rare events accounts for a significant fraction of the total weight), plus such well-known biases as overconfidence and optimism. Think of all that worrying you'll have to do... maybe you should just relax...

My own life experience suggests that any black box should be assumed to contain a Black Swan. (Or to be precise, a substantial probability of such, rather than probability 1.0.)

Tiiba: "makes no sense" and "would be surprising" are very different things, and the former is excessive for the claim about depressed people. The level of confidence that's optimal for making correct predictions about the world could be much lower than the level that's optimal for living a happy life. Do you have some way of knowing that it isn't?

(Let me forestall one argument against by remarking that evolution is not in the business of maximizing our happiness.)

"How happy is the moron: / He doesn't give a damn. / I wish I were a moron. / -- My God, perhaps I am!"

Or, in other words, wanting to be stupid is itself a form of stupidity.

"believing you're happy" and "in fact happy" strike me as distinctions without distinction. How are they falsifiable?

By comparing a written self-evaluation with serotonin and dopamine levels in one's brain, perhaps?

How would you calibrate a brain scan machine to happiness except by comparing it to self-evaluated happiness? You only know that certain neural pathways correspond to happiness because people report being happy while these pathways are activated. If someone had different brain circuitry (like, say, someone born with only half a brain), you wouldn't be able to use this metric except by first seeing how their brain pattern corresponded to their self-reported happiness. It seems to me that happiness simply is the perception of happiness. There is no difference between "believing you're happy" and "being happy." You can't be secretly happy or unhappy and not know it, 'cause that wouldn't constitute happiness.

It's hard to be mistaken about how happy you are at the precise moment you're asked the question (you might have trouble reporting exactly how happy you are, but that's different). However, if you want to know how happy you've been over the past month, for example, it's possible to be wrong about that; you could be selectively remembering times you were more or less happy than average.

True. Still, the method of measuring serotonin and dopamine levels would offer no benefit over a self-evaluation, since you can't implement it retroactively.

Only retroactively. Our memories are easy to corrupt. But no, I don't think you can be happy or unhappy at any given moment and simultaneously believe the opposite is true. There's probably room for the whole "belief in belief" thing here, though. That is, you could want to believe you're happy when you're not, and could maybe even convince yourself that you had convinced yourself that you were happy, but I don't think you'd actually believe it.

You haven't given any evidence for those claims. At one time it was believed that minds were indestructible, atomic entities, but now that we know we have billions of neurons, there is plenty of scope for one neuronal cohort to believe or feel things that another does not.

Sure, that's true. I suppose you could have a split-brain person who is happy in one hemisphere and not in the other, or some such type of situation. I guess it just depends on what you're looking for when you ask "is someone happy?" If you want a subjective feeling, then self-report data will be reliable. If you're looking for specific physiological states or such, then self-report data may not be necessary, and may even contradict your findings. But it seems suspect to me that you would call it happiness if it did not correspond to a subjective feeling of happiness.

Overestimating my driving skills is obviously bad. But how about this scenario of the possibility of happiness destroyed by the truth?

Suppose, on the final day of exams, on the last exam, you think you’ve done poorly. In fact, you only got 1 in 10 questions completely right. On the other 9, you hope you’d get at least a bit of partial credit. On the other hand, all 4 of your friends (in the class of 50) think they’ve done poorly. Maybe there will be a curve? In fact, if the final exam curve is good enough, you might even get an A for the course.

The grade goes online at 6 PM. It’s already there, and it won’t change.

So what do you do? This is the last grade of the semester, and no more exams to study for. A bad grade will make you unhappy for the rest of the evening (you wanted to go to that party, right? You won’t have much fun thinking about that grade). A good grade will make you happy, but so what? Happiness comes with diminishing marginal returns (and for me it’s more like a binary value, happy or not). You have a higher expected utility for tonight if you don’t check your grade. And you’re not any worse off checking the grade tomorrow.

Should you destroy all that expected utility by the truth? (For reference, the truth is that you got a C-, which is BAD.)


My “solution” to this problem (probably irrational?) is in the spirit of “The other way is closed.” I look.

To maximize utility, I shouldn't look at the grade until tomorrow morning. Some people don't. I once didn't, and it didn't bother me too much that I hadn't. And after bad grades, the outcome was usually pretty much as expected. So I know my utility function. That's not the reason.

This is like the two-box decision of Newcomb’s problem. Rationally (according to Eliezer) you would pick one box. I’m not rational. I pick two. What’s there, is already there.

I. JUST. CAN’T. NOT. LOOK.

I would be happier knowing the grade is bad, rather than not knowing at all. Knowing leaves me free to enjoy the party, rather than worry about it and be distracted at the party.

Sometimes I come up with an awesome idea for my research, something that seems like it will totally blow open the problem I've been working on for weeks/months/years. After having such amazing moments of insight I usually take a couple of days off because the potential that the idea is right just feels so good, and because, well, in research it usually turns out that most amazing insights don't solve that problem you've been working on for years.

I know what you mean. I get that all the time, with all of the unsolved math problems I occasionally look at. And since my name isn't on wikipedia yet, I haven't solved any of them.

Although, in this case I would argue that we're better off knowing we're wrong than being happy for the wrong reasons. The happiness at an end-of-semester party comes from different sources (socializing, having fun, etc.), which are, dare I say, the "right" reasons. Destroying this happiness by the truth will not lead to the discovery of more truth, as it were (the grade is already there). Destroying the happiness over a mistake at least lets you find truth in acknowledging the mistake.

But then again, if I have a "brilliant" idea, I start working on it immediately, without giving myself much of a chance to bask in its brilliance.

So what do you do? This is the last grade of the semester, and no more exams to study for. A bad grade will make you unhappy for the rest of the evening (you wanted to go to that party, right? You won’t have much fun thinking about that grade). A good grade will make you happy, but so what? Happiness comes with diminishing marginal returns (and for me it’s more like a binary value, happy or not). You have a higher expected utility for tonight if you don’t check your grade. And you’re not any worse off checking the grade tomorrow.

Should you destroy all that expected utility by the truth? (For reference, the truth is that you got a C-, which is BAD.)

I would think that an ideal rationalist's mental state would be dependent on their prior determination of their most likely grade, and on average actually looking at it should not tend to revise that assessment upwards or downwards.

In practice, I think that all but the most optimistic humans would tend to imagine a grade worse than they probably received until shown otherwise, so looking at the grade would tend to revise your happiness state upwards.

I would think that an ideal rationalist's mental state would be dependent on their prior determination of their most likely grade, and on average actually looking at it should not tend to revise that assessment upwards or downwards.

Suppose I estimate the probability of a good curve at roughly p=5/50=10%. If there's a curve, I'll get an A (utility value 4); else C- (utility value 1.7). Suppose then I need the minimum utility of 2 to enjoy the party (utility 0.2).

My expected utility from not checking the grade is 0.1 x 4 + 0.9 x 1.7 + 0.2 = 2.13. My actual utility once I'd checked the grade is 1.7 + 0.2 = 1.9.

If this expected utility estimate is good, then I should be happy in proportion to it (although I might as well acknowledge now that I failed to account for the difference between expected utility and the utility of the expected outcome, thus assuming that I'm risk-neutral).
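For concreteness, here is the arithmetic above as a minimal Python sketch; the numbers and the (acknowledged) risk-neutral model are the commenter's, while the variable names are mine.

```python
p_curve = 5 / 50           # estimated chance of a good curve: 10%
u_a, u_c_minus = 4.0, 1.7  # utility of an A vs. a C-
u_party = 0.2              # utility of enjoying the party

# Expected utility while the grade is still unknown:
eu_unchecked = p_curve * u_a + (1 - p_curve) * u_c_minus + u_party
print(round(eu_unchecked, 2))  # 2.13

# Realized utility after checking and finding the C-:
print(round(u_c_minus + u_party, 2))  # 1.9
```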

Rather than there being a discrete point above which you will be able to enjoy the party and below which you will not, I would expect the amount you enjoy the party to vary according to the grade you got, unless the cutoff point is due to some additional consequence of scoring below that grade which will be accompanied by an additional utility hit. Your prior expected utility would incorporate the chance of taking that additional hit times the likelihood of it occurring.

Anyway, in any specific case, your utility may go up or down by checking your grade, but if you have a perfectly accurate assessment of the probability distribution for your grade, then on average your expected utility should be the same whether you check or not.

In this case, the fact that we know the actual grade stands to be misleading, since it's liable to make any probability distribution that doesn't provide an average expected grade of 1.7 look wrong, even though that might not be the average predicted by the available data.
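(In support of the "on average the same" claim: with a calibrated distribution this is just the law of total expectation, $\mathbb{E}[\,\mathbb{E}[U \mid G]\,] = \mathbb{E}[U]$, where $U$ is utility and $G$ the grade. Checking can move you up or down in any particular case, but cannot move your expectation. This formulation is mine, not the commenter's.)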

I considered your point at length. To address your comment, I could use the ignorance hypothesis on my old model, assigning equal probability values to everything between 1.7 and 4.0. Discrete if need be. I could use a binary output value as "enjoying the party," 1 or 0. I could do lots of other tweaks.

But the problem here is, everything comes down to whether this model (or any other 5-minute model) is good enough to explain my non-rationalist gut feeling, especially without an experiment. And, you know, I'm not about to fail an easy exam in a couple of days just to see what my utility function would do.
