This is (sort of) a response to Blatant lies are the best kind!, although I'd been working on this prior to that post getting published. This post explores similar issues through my own frame, which seems at least somewhat different from Benquo's.
I've noticed a tendency for people to use the word "lie", when they want to communicate that a statement is deceptive or misleading, and that this is important.
And I think this is (often) technically wrong. I'm not sure everyone defines "lie" quite the same way, but in most cases where I hear it unqualified, I usually assume it means "to deliberately speak falsehood." Not all deceptive or misleading things are lies.
But it's perhaps a failure of the English language that there isn't a word for "rationalizing" or "motivated cognition" that is as rhetorically hefty.
If you say "Carl lied!", this is a big deal. People might get defensive (because they're friends with Carl), or they might get outraged (if they believe you and feel betrayed by Carl). Either way, something happens.
Whereas if Carl is making a motivated error, and you say "Carl is making a motivated error!", then people often shrug and go "I dunno, people make motivated errors all the time?" And well, yeah. People do make motivated errors all the time. This is all doubly slippery if the other people are motivated in the same direction as Carl, which incentivizes them not to get too worked up about it.
But at least sometimes, the error is bad or important enough, or Carl has enough social influence, that it matters that he is making the error.
So it seems perhaps useful to have a word – a short, punchy word – that comes pre-cached with connotations like "Carl has a pattern of rationalizing about this topic, that pattern is important, and the fact that it has gone unchecked for a while should make you sit bolt upright in alarm and do something different from whatever you are currently doing in relation to Carl."
Or, alternately: "It's not precisely a big deal that Carl in particular is doing this. Maybe everyone's doing this, and it'd be unfair to single Carl out. But, the fact that our social fabric is systematically causing people to distort their statements the way Carl is doing is real bad, and we should prioritize fixing that."
The motivating example here was a discussion/argument I had a couple weeks ago with another rationalist. Let's call them Bob.
("Bob" can reveal themselves in the comments if they wish).
Bob was frustrated with Alice, and with many other people's response to some of Alice's statements. Bob said [paraphrased slightly] "Alice blatantly lied! And nobody is noticing or caring!"
Now, it seemed to me that Alice's statement was neither a lie, nor blatant. It was not a lie because Alice believed it. (I call this "being wrong", or "rationalizing", not "lying", and the difference is important because it says very different things about a person's character and how to most usefully respond to them.)
It didn't seem blatant because, well, at the very least it wasn't obvious to me that Alice was wrong.
I could see multiple models of the world that might inform Alice's position, and some of them seemed plausible to me. I understood why Bob disagreed, but nonetheless Alice's wrongness did not seem like an obvious fact.
[Unfortunately going into the details of the situation would be more distracting than helpful. I think what's most important to this post were the respective epistemic states of myself and Bob.
But to give some idea, let's say Alice had said something like "obviously minimum wage helps low income workers."
I think this statement is wrong, especially the "obviously" part, but it's a position one might earnestly hold depending on which papers you read in which order. I don't know if Bob would agree that this is a fair comparison, but it roughly matches my epistemic state]
So, it seemed to me that Alice was probably making some cognitive mistakes, and failing to acknowledge some facts that were relevant to her position.
It was also in my probability space that Alice had knowingly lied. (In the minimum wage example, if Alice knew full well that there were some good first principles and empirical reasons to doubt that minimum wage helped low-income workers, and ignored them because it was rhetorically convenient, I might classify that as a lie, or some other form of deception that raised serious red flags about Alice's trustworthiness).
With all this in mind, I said to Bob:
"Hey, I think this is wrong. I don't think Alice was either lying, or blatantly wrong."
Bob thought a second, and then said "Okay, yeah, fair. Sure. Alice didn't lie, but she engaged in motivated cognition. But I still think" — and then Bob started speaking quickly, moving on to why he was still frustrated with people's response to Alice, agitation in his voice.
And I said: (slightly paraphrased to fit an hour of discussion into one paragraph)
"Hey. Wait. Stop. It doesn't look like you've back-propagated the fact that Alice didn't blatantly lie through the rest of your belief network. It's understandable if you disagree with me about whether "blatantly lie" makes sense as a description of what's happening here. But if we do agree on that, I think you should actually stop and think a minute, and let that fact sink in, and shift how you feel about the people who aren't treating Alice's statement the way you want."
Bob stopped and said "Okay, yeah, you're right. Thanks." And then took a minute to do so. (This didn't radically change the argument, in part because there were a lot of other facets of the overall disagreement, but it still seemed like a good move for us to have jointly performed.)
It was during that minute, while reflecting on my own, that I thought about the opening statement of this post:
That maybe it's a failure of the English language that we don't have a way to communicate "so-and-so is rationalizing, and this pattern of rationalization is important." If you want to get people's attention and get them agitated, your rhetorical tools are limited.
[Edited addendum]
My guess is that a new word isn't actually the right solution (as Bendini notes in the comments, new jargon tends to get collapsed into whatever the most common use case is, regardless of how well the jargon term fits it).
But I think it'd be useful to at least have a shared concept-handle that we can more easily refer to. I think it'd be good to have more affordance to say: "Alice is rationalizing, and people aren't noticing, and I think we should be sitting up and paying attention to this, not just shrugging it off."
(The following was originally the second half of this post. I was worried that I didn't have time to develop it fully, and meanwhile the half-baked version of it sort of undercut the post in ways I didn't like. Putting it here for now. Hopefully I'll eventually have all of this crystallized into a post that articulates what I think people should actually do about all this.)
A problem with "sitting bolt upright in alarm" is that it's not something you can sustainably do all the time. The point of elevated attention is to pay, well, more attention to things that are locally important. If you're always paying maximal attention to one area, you're
a) going to miss other important areas
b) probably going to stress yourself out in ways that are long-term unhelpful (in particular if "elevated attention" not only uses all of your existing attention, but redirects resources you had previously been using for things other than attention).
Deliberate lying (in my circles) seems quite rare to me.
I'm less confident about non-lying patterns of deception. (Basically, everyone around me seems smart enough for "lying" to be a bad strategy. But other forms of active malicious deception might be going on and I'm not confident I'd be able to tell).
But, given my current rate of "detect deliberate lying" and "detect explicit malicious deception", it does make sense to sit bolt upright in alarm whenever someone appears to be deliberately deceiving. My detection of it, at least, is a rare event.
The next steps I follow after "stop and pay attention" are:
By contrast, a few other activities seem much more common. You can't sit bolt upright in alarm whenever these happen, because then you'd be sitting bolt upright in alarm all the time and stressing your body out.
The world is full of plenty of terrible things where the "appropriate" level of freak-out just isn't very helpful. Your grim-o-meter is for helping you buckle down and run a sprint, not for running a marathon. But I think there's something useful, occasionally, about letting yourself go into a heightened stress state, where you fully feel the importance. Especially if you think a problem might be so important that this particular marathon is the one that you're going to run.
Activities that seem common, and concerning:
I think it's quite likely that we should be coordinating on a staghunt to systematically fix the above two issues. I think such a staghunt looks very different from the ones you do to address deliberate deception.
If someone deliberately deceives me, the issue is that I can't even trust them on the meta-level.
If someone is rationalizing and believing their own marketing, I think the issue is "rationalizing and believing your own marketing is the default state, and it requires a lot of skills built on top of each other in order to stop, and there are a lot of complicated tradeoffs you're making along the way, and this will take a long time even for well-intentioned people."
And meanwhile, a large chunk of the problem is "people have very different models and ontologies that output very different beliefs and plans", so a lot of things that look like rationalization are just very different models.
I think some differences of models are due to motivated cognition, but I think many or most come down more to different problems that you're solving.
For example, I had many arguments with habryka about whether there should be norms around keeping the office clean that involved continuous effort on the part of individuals. His opinion was that you should just solve the problem with specialization and systemization. I think motivated cognition may have played a role in each of our models, but there were legitimate reasons to prefer one over the other.