This is (sort of) a response to Blatant lies are the best kind!, although I'd been working on this prior to that post getting published. This post explores similar issues through my own frame, which seems at least somewhat different from Benquo's.
I've noticed a tendency for people to use the word "lie", when they want to communicate that a statement is deceptive or misleading, and that this is important.
And I think this is (often) technically wrong. I'm not sure everyone defines "lie" quite the same way, but in most cases where I hear it unqualified, I usually assume it means "to deliberately speak falsehood." Not all deceptive or misleading things are lies.
But it's perhaps a failure of the English language that there isn't a word for "rationalizing" or "motivated cognition" that is as rhetorically hefty.
If you say "Carl lied!", this is a big deal. People might get defensive (because they're friends with Carl), or they might get outraged (if they believe you and feel betrayed by Carl). Either way, something happens.
Whereas if Carl is making a motivated error, and you say "Carl is making a motivated error!", then people often shrug and go "I dunno, people make motivated errors all the time?" And well, yeah. People do make motivated errors all the time. This is all doubly slippery if the other people are motivated in the same direction as Carl, which incentivizes them not to get too worked up about it.
But at least sometimes, the error is bad or important enough, or Carl has enough social influence, that it matters that he is making the error.
So it seems perhaps useful to have a word – a short, punchy word – that comes pre-cached with a connotation like "Carl has a pattern of rationalizing about this topic, and that pattern is important, and the fact that this has continued awhile unchecked should make you sit bolt upright in alarm and do something different from whatever you are currently doing in relation to Carl."
Or, alternately: "It's not precisely a big deal that Carl in particular is doing this. Maybe everyone's doing this, and it'd be unfair to single Carl out. But, the fact that our social fabric is systematically causing people to distort their statements the way Carl is doing is real bad, and we should prioritize fixing that."
The motivating example here was a discussion/argument I had a couple weeks ago with another rationalist. Let's call them Bob.
("Bob" can reveal themselves in the comments if they wish).
Bob was frustrated with Alice, and with many other people's response to some of Alice's statements. Bob said [paraphrased slightly] "Alice blatantly lied! And nobody is noticing or caring!"
Now, it seemed to me that Alice's statement was neither a lie, nor blatant. It was not a lie because Alice believed it. (I call this "being wrong", or "rationalizing", not "lying", and the difference is important because it says very different things about a person's character and how to most usefully respond to them)
It didn't seem blatant because, well, at the very least it wasn't obvious to me that Alice was wrong.
I could see multiple models of the world that might inform Alice's position, and some of them seemed plausible to me. I understood why Bob disagreed, but nonetheless Alice's wrongness did not seem like an obvious fact.
[Unfortunately going into the details of the situation would be more distracting than helpful. I think what's most important to this post were the respective epistemic states of myself and Bob.
But to give some idea, let's say Alice had said something like "obviously minimum wage helps low income workers."
I think this statement is wrong, especially the "obviously" part, but it's a position one might earnestly hold depending on which papers you read in which order. I don't know if Bob would agree that this is a fair comparison, but it roughly matches my epistemic state]
So, it seemed to me that Alice was probably making some cognitive mistakes, and failing to acknowledge some facts that were relevant to her position.
It was also in my probability space that Alice had knowingly lied. (In the minimum wage example, if Alice knew full well that there were some good first principles and empirical reasons to doubt that minimum wage helped low-income workers, and ignored them because it was rhetorically convenient, I might classify that as a lie, or some other form of deception that raised serious red flags about Alice's trustworthiness).
With all this in mind, I said to Bob:
"Hey, I think this is wrong. I don't think Alice was either lying, or blatantly wrong."
Bob thought a second, and then said "Okay, yeah, fair. Sure. Alice didn't lie, but she engaged in motivated cognition. But I still think" — and then Bob started speaking quickly, moving on to why he was still frustrated with people's response to Alice, agitation in his voice.
And I said: (slightly paraphrased to fit an hour of discussion into one paragraph)
"Hey. Wait. Stop. It doesn't look like you've back-propagated the fact that Alice didn't blatantly lie through the rest of your belief network. It's understandable if you disagree with me about whether 'blatantly lie' makes sense as a description of what's happening here. But if we do agree on that, I think you should actually stop and think a minute, and let that fact sink in, and shift how you feel about the people who aren't treating Alice's statement the way you want."
Bob stopped and said "Okay, yeah, you're right. Thanks." And then took a minute to actually do so. (This didn't radically change the argument, in part because there were a lot of other facets of the overall disagreement, but it still seemed like a good move for us to have jointly performed.)
It was during that minute, while I was meanwhile reflecting on my own, that I thought about the opening statement of this post:
That maybe it's a failure of the English language that we don't have a way to communicate "so-and-so is rationalizing, and this pattern of rationalization is important." If you want to get people's attention and get them agitated, your rhetorical tools are limited.
[Edited addendum]
My guess is that a new word isn't actually the right solution (as Bendini notes in the comments, new jargon tends to get collapsed into whatever the most common use case is, regardless of how well the jargon term fits it).
But I think it'd be useful to at least have a shared concept-handle that we can more easily refer to. I think it'd be good to have more affordance to say: "Alice is rationalizing, and people aren't noticing, and I think we should be sitting up and paying attention to this, not just shrugging it off."
Attempting a more concrete and principled answer about what makes sense to distinguish here
Reflecting a bit more, I think there are two important distinctions to be made:
Situation A) Alice makes a statement, which is false, and either Alice knows beforehand it's false, or Alice realizes it's false as soon as she pays any attention to it after the fact. (this is slightly different from how I'd have defined "lie" yesterday, but after 24 hours of mulling it over I think this is the correct clustering)
Situation B) Alice makes a statement which is false, which to Alice appears locally valid, but which is built upon some number of premises or arguments that are motivated.
...
[edit:]
This section ended up quite long, so a summary of my overall point:
Situation B is much more complicated than Situation A.
In Situation A, Alice only has one inferential step to make, and Alice and Bob have mutual understanding (although not common knowledge) of that one inferential step. Bob can say "Alice, you lied here" and have the conversation make sense.
In Situation B, Alice has many inferential steps to make, and if Bob says "Alice, you lied here", Alice (even if rational and honest) needs to include probability mass on "Bob is wrong, Bob is motivated, and/or Bob is a malicious actor."
These are sufficiently different epistemic states for Alice to be in that I think it makes sense to use different words for them.
...
Situation A
In situation A, if Bob says "Hey, Alice, you lied here", Alice thinks internally either "shit I got caught" or "oh shit, I *did* lie." In the first case, Alice might attempt to obfuscate further. In the second case, Alice hopefully says "oops", admits the falsehood, and the conversation moves on. In either case, the incentives are *mostly* clear and direct to Alice – try to avoid doing this again, because you will get called on it.
If Alice obfuscates, or pretends to be in Situation B, she might get away with it this time, but identifying the lie will still likely reduce her incentives to make similar statements in the future (since at the very least, she'll have to do work defending herself).
Situation B
In situation B, if you say "Hey Alice, you lied here", Alice will say "what the hell? No?".
And then a few things happen, which I consider justified on Alice's part:
From Alice's epistemic position, she just said a true thing. If Bob just claimed that true thing was a lie, Alice now has several major hypotheses to consider: that she is in fact rationalizing; that Bob is wrong; that Bob is himself motivated; that Bob is a malicious actor; or that the two of them are both somewhat confused together.
This is a much more complicated set of possibilities for Alice to evaluate. Incentives are getting applied here, but they could push her in a number of ways.
If Alice is a typical human and/or junior rationalist, she's going to be defensive, which will make it harder for her to think clearly. She will be prone to exaggerating the probability of options that aren't her fault. She may see Bob as socially threatening her – not as a truthseeking collaborator trying to help, but as a malicious actor out to harm her.
If Alice is a perfectly skilled rationalist, she'll hopefully avoid feeling defensive, and will not exaggerate the probability of any of the options for motivated reasons. But over half the options are still "this is Bob's problem, not Alice's, and/or they are both somewhat confused together".
Exactly how the probabilities fall out depends on the situation, and how much Alice trusts her own reasoning, and how much she trusts Bob's reasoning. But even perfect-rationalist Alice should have nonzero probability on "Bob is the one who is wrong, perhaps maliciously, here".
And if the answer is "Alice's belief is built on some kind of motivated reasoning", that's not something that can be easily resolved. If Alice is wrong, but luckily so – the chain of motivated beliefs only 1-2 nodes deep – she can check whether those beliefs make sense and maybe discover she is wrong. But...
And meanwhile, until Alice has verified that her reasoning was motivated, she needs to retain probability mass on Bob being the wrong one.
Takeaways
Situation B seems extremely different to me from Situation A. It makes a lot of sense to me for people to use different words or phrases for the two situations.
One confounding issue is that obfuscating liars in Situation A have an incentive to pretend to be in Situation B. But there's still a fact-of-the-matter of what mental state Alice is in, which changes what incentives Alice will and should respond to.