Sorry it's taken this long for me to reply to this.

"Appeal to consequences" is only a fallacy in reasoning about factual states of the world. In most cases, appealing to consequences is the right action.

For example, if you want to build a house on a cliff, and I say "you shouldn't do that, it might fall down", that's an appeal to consequences, but it's completely valid.

Or to give another example, suppose we are designing a programming language. You recommend, for whatever excellent logical reason, that every statement must end with a semicolon. I argue that many people will forget semicolons, and then their programs won't compile. Again, an appeal to consequences, but again it's completely valid.

I think of language, following Eliezer's definitions sequence, as being a human-made project intended to help people understand each other. It draws on the structure of reality, but has many free variables, so that the structure of reality doesn't constrain it completely. This forces us to make decisions, and since these are not about factual states of the world (e.g., what the definition of "lie" REALLY is, in God's dictionary) we have nothing to make those decisions on except consequences. If a certain definition will result in lots of people misunderstanding each other, bad people having an easier time confusing others, good communication failing to occur, or other bad things, then it's fine to decide against it on those grounds, just as you can decide against a programming language feature on the grounds that it will make programs written in the language more likely to crash, require more memory, etc.

I am not sure I get your point about the symmetry of strategic equivocation. I feel like this equivocation relies on using a definition contrary to its common connotations. So if I were allowed to redefine "murderer" to mean "someone who drinks Coke", then I could equivocate between "Alice is a murderer" (based on the definition where she drinks Coke) and "Murderers should be punished" (based on the definition where they kill people) and combine them to get "Alice should be punished". The problem isn't that you can equivocate between any two definitions; the problem arises very specifically when we use a definition counter to the way most people traditionally use it. I think (do you disagree?) that most people interpret "liar" to mean an intentional liar. As such, I'm not sure I understand the relevance of the example of Ruby's coworkers.

I think you're making too hard a divide between the "Hobbesian dystopia" where people misuse language and a hypothetical utopia of good actors. I think of misusing language as a difficult thing to avoid, something all of us (including rationalists, and even including me) will probably do by accident pretty often. As you point out regarding deception, many people who equivocate aren't doing so deliberately. Even in a great community of people who try to use language well, these problems are going to come up. And so, just as in the programming language example, I would like a language that fails gracefully when a mistake gets made, one that works with my fallibility rather than naturally leading to disaster when anyone gets something wrong.

And I think I object-level disagree with you about the psychology of deception. I'm interpreting you (maybe unfairly, but then I can't figure out what the fair interpretation is) as saying that people very rarely lie intentionally, or that this rarely matters. This seems wrong to me - for example, guilty criminals who say they're innocent seem to be lying, and there seem to be lots of these, and it's a pretty socially important thing. I try pretty hard not to intentionally lie, but I can think of one time I failed (I'm not claiming I've only ever lied once in my life, just that this time comes to mind as something I remember and am particularly ashamed about). And even if lying never happened, I still think it would be worth having the word for it, the same way we have a word for "God" that atheists don't just repurpose to mean "whoever the most powerful actor in their local environment is."

Stepping back, we have two short words ("lie" and "not a lie") to describe three states of the world (intentional deception, unintentional deception, complete honesty). I'm proposing to group these (1)(2,3) mostly on the grounds that this is how the average person uses the terms, and if we depart from how the average person uses the terms, we're inviting a lot of confusion, both in terms of honest misunderstandings and malicious deliberate equivocation. I understand Jessica wants to group them (1,2)(3), but I still don't feel like I really understand her reasoning except that she thinks unintentional deception is very bad. I agree it is very bad, but we already have the word "bias" and are so in agreement about its badness that we have a whole blog and community about overcoming it.

jessicata: I don't think it's the crux, no. I don't accept ordinary language philosophy, which canonizes popular confusions. There are some contexts where using ordinary language is important, such as when writing popular news articles, but that isn't all of the contexts.
Scott Alexander: EDIT: Want to talk to you further before I try to explain my understanding of your previous work on this; will rewrite this later. The short version is: I understand we disagree, I understand you have a sophisticated position, but I can't figure out where we start differing, and so I don't know what to do other than vomit out my entire philosophy of language and hope that you're able to point to the part you don't like. I understand that may be condescending to you and I'm sorry. I absolutely deny I am "motivatedly playing dumb", and I enter this into the record as further evidence that we shouldn't redefine language to encode a claim that we are good at ferreting out other people's secret motivations.

Zack_M_Davis: (Scott and I had a good conversation today. I think I need to write a followup post (working title: "Instrumental Categories, Wireheading, and War") explaining in more detail exactly what distinction I'm making when I say I want to consider some kinds of appeals-to-consequences invalid while still allowing, e.g., "Requiring semicolons in your programming language will have the consequence of being less convenient for users who forget them." The paragraphs in "Where to Draw the Boundaries?" starting with "There is an important difference [...]" are gesturing

[...]

Maybe Lying Doesn't Exist

by Zack_M_Davis · 7 min read · 14th Oct 2019 · 57 comments



In "Against Lie Inflation", the immortal Scott Alexander argues that the word "lie" should be reserved for knowingly-made false statements, and not used in an expanded sense that includes unconscious motivated reasoning. Alexander argues that the expanded sense draws the category boundaries of "lying" too widely in a way that would make the word less useful. The hypothesis that predicts everything predicts nothing: in order for "Kevin lied" to mean something, some possible states-of-affairs need to be identified as not lying, so that the statement "Kevin lied" can correspond to redistributing conserved probability mass away from "not lying" states-of-affairs onto "lying" states-of-affairs.

All of this is entirely correct. But Jessica Taylor (whose post "The AI Timelines Scam" inspired "Against Lie Inflation") wasn't arguing that everything is lying; she was just using a more permissive conception of lying than the one Alexander prefers, one that Alexander doubted could stably and consistently identify non-lies.

Concerning Alexander's arguments against the expanded definition, I find I have one strong objection (that appeal-to-consequences is an invalid form of reasoning for optimal-categorization questions for essentially the same reason as it is for questions of simple fact), and one more speculative objection (that our intuitive "folk theory" of lying may actually be empirically mistaken). Let me explain.

(A small clarification: for myself, I notice that I also tend to frown on the expanded sense of "lying". But the reasons for frowning matter! People who superficially agree on a conclusion but for different reasons, are not really on the same page!)

Appeals to Consequences Are Invalid

There is no method of reasoning more common, and yet none more blamable, than, in philosophical disputes, to endeavor the refutation of any hypothesis, by a pretense of its dangerous consequences[.]

David Hume

Alexander contrasts the imagined consequences of the expanded definition of "lying" becoming more widely accepted, to a world that uses the restricted definition:

[E]veryone is much angrier. In the restricted-definition world, a few people write posts suggesting that there may be biases affecting the situation. In the expanded-definition world, those same people write posts accusing the other side of being liars perpetrating a fraud. I am willing to listen to people suggesting I might be biased, but if someone calls me a liar I'm going to be pretty angry and go into defensive mode. I'll be less likely to hear them out and adjust my beliefs, and more likely to try to attack them.

But this is an appeal to consequences. Appeals to consequences are invalid because they represent a map–territory confusion, an attempt to optimize our description of reality at the expense of our ability to describe reality accurately (which we need in order to actually optimize reality).

(Again, the appeal is still invalid even if the conclusion—in this case, that unconscious rationalization shouldn't count as "lying"—might be true for other reasons.)

Some aspiring epistemic rationalists like to call this the "Litany of Tarski". If Elijah is lying (with respect to whatever the optimal category boundary for "lying" turns out to be according to our standard Bayesian philosophy of language), then I desire to believe that Elijah is lying (with respect to the optimal category boundary according to ... &c.). If Elijah is not lying (with respect to ... &c.), then I desire to believe that Elijah is not lying.

If the one comes to me and says, "Elijah is not lying; to support this claim, I offer this-and-such evidence of his sincerity," then this is right and proper, and I am eager to examine the evidence presented.

If the one comes to me and says, "You should choose to define lying such that Elijah is not lying, because if you said that he was lying, then he might feel angry and defensive," this is insane. The map is not the territory! If Elijah's behavior is, in fact, deceptive—if he says things that cause people who trust him to be worse at anticipating their experiences when he reasonably could have avoided this—I can't make his behavior not-deceptive by changing the meanings of words.

Now, I agree that it might very well empirically be the case that if I say that Elijah is lying (where Elijah can hear me), he might get angry and defensive, which could have a variety of negative social consequences. But that's not an argument for changing the definition of lying; that's an argument that I have an incentive to lie about whether I think Elijah is lying! (Though Glomarizing about whether I think he's lying might be an even better play.)

Alexander is concerned that people might strategically equivocate between different definitions of "lying" as an unjust social attack against the innocent, using the classic motte-and-bailey maneuver: first, argue that someone is "lying (expanded definition)" (the motte), then switch to treating them as if they were guilty of "lying (restricted definition)" (the bailey) and hope no one notices.

So, I agree that this is a very real problem. But it's worth noting that the problem of equivocation between different category boundaries associated with the same word applies symmetrically: if it's possible to use an expanded definition of a socially-disapproved category as the motte and a restricted definition as the bailey in an unjust attack against the innocent, then it's also possible to use an expanded definition as the bailey and a restricted definition as the motte in an unjust defense of the guilty. Alexander writes:

The whole reason that rebranding lesser sins as "lying" is tempting is because everyone knows "lying" refers to something very bad.

Right—and conversely, because everyone knows that "lying" refers to something very bad, it's tempting to rebrand lies as lesser sins. Ruby Bloom explains what this looks like in the wild:

I worked in a workplace where lying was commonplace, conscious, and system 2. Clients asking if we could do something were told "yes, we've already got that feature (we hadn't) and we already have several clients successfully using that (we hadn't)." Others were invited to be part of an "existing beta program" alongside others just like them (in fact, they would have been the very first). When I objected, I was told "no one wants to be the first, so you have to say that."

[...] I think they lie to themselves that they're not lying (so that if you search their thoughts, they never think "I'm lying")[.]

If your interest in the philosophy of language is primarily to avoid being blamed for things—perhaps because you perceive that you live in a Hobbesian dystopia where the primary function of words is to elicit actions, where the denotative structure of language was eroded by political processes long ago, and all that's left is a standardized list of approved attacks—in that case, it makes perfect sense to worry about "lie inflation" but not about "lie deflation." If describing something as "lying" is primarily a weapon, then applying extra scrutiny to uses of that weapon is a wise arms-restriction treaty.

But if your interest in the philosophy of language is to improve and refine the uniquely human power of vibratory telepathy—to construct shared maps that reflect the territory—if you're interested in revealing what kinds of deception are actually happening, and why—

(in short, if you are an aspiring epistemic rationalist)

—then the asymmetrical fear of false-positive identifications of "lying" but not false-negatives—along with the focus on "bad actors", "stigmatization", "attacks", &c.—just looks weird. What does that have to do with maximizing the probability you assign to the right answer??

The Optimal Categorization Depends on the Actual Psychology of Deception

Deception
My life seems like it's nothing but
Deception
A big charade

I never meant to lie to you
I swear it
I never meant to play those games

"Deception" by Jem and the Holograms

Even if the fear of rhetorical warfare isn't a legitimate reason to avoid calling things lies (at least privately), we're still left with the main objection that "lying" is a different thing from "rationalizing" or "being biased". Everyone is biased in some way or another, but to lie is "[t]o give false information intentionally with intent to deceive." Sometimes it might make sense to use the word "lie" in a noncentral sense, as when we speak of "lying to oneself" or say "Oops, I lied" in reaction to being corrected. But it's important that these senses be explicitly acknowledged as noncentral and not conflated with the central case of knowingly speaking falsehood with intent to deceive—as Alexander says, conflating the two can only be to the benefit of actual liars.

Why would anyone disagree with this obvious ordinary view, if they weren't trying to get away with the sneaky motte-and-bailey social attack that Alexander is so worried about?

Perhaps because the ordinary view relies on an implied theory of human psychology that we have reason to believe is false? What if conscious intent to deceive is typically absent in the most common cases of people saying things that (they would be capable of realizing upon being pressed) they know not to be true? Alexander writes—

So how will people decide where to draw the line [if egregious motivated reasoning can count as "lying"]? My guess is: in a place drawn by bias and motivated reasoning, same way they decide everything else. The outgroup will be lying liars, and the ingroup will be decent people with ordinary human failings.

But if the word "lying" is to actually mean something rather than just being a weapon, then the ingroup and the outgroup can't both be right. If symmetry considerations make us doubt that one group is really that much more honest than the other, that would seem to imply that either both groups are composed of decent people with ordinary human failings, or that both groups are composed of lying liars. The first description certainly sounds nicer, but as aspiring epistemic rationalists, we're not allowed to care about which descriptions sound nice; we're only allowed to care about which descriptions match reality.

And if all of the concepts available to us in our native language fail to match reality in different ways, then we have a tough problem that may require us to innovate.

The philosopher Roderick T. Long writes:

Suppose I were to invent a new word, "zaxlebax," and define it as "a metallic sphere, like the Washington Monument." That's the definition—"a metallic sphere, like the Washington Monument." In short, I build my ill-chosen example into the definition. Now some linguistic subgroup might start using the term "zaxlebax" as though it just meant "metallic sphere," or as though it just meant "something of the same kind as the Washington Monument." And that's fine. But my definition incorporates both, and thus conceals the false assumption that the Washington Monument is a metallic sphere; any attempt to use the term "zaxlebax," meaning what I mean by it, involves the user in this false assumption.

If self-deception is as ubiquitous in human life as authors such as Robin Hanson argue (and if you're reading this blog, this should not be a new idea to you!), then the ordinary concept of "lying" may actually be analogous to Long's "zaxlebax": the standard intensional definition ("speaking falsehood with conscious intent to deceive"/"a metallic sphere") fails to match the most common extensional examples that we want to use the word for ("people motivatedly saying convenient things without bothering to check whether they're true"/"the Washington Monument").
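
To see the intension/extension mismatch in miniature, here is a toy sketch (the predicates, objects, and attributes are invented for illustration): a definition whose flagship example fails its own predicate conceals a false assumption, exactly as Long describes.

```python
# Toy model of Long's "zaxlebax": an intensional definition (a predicate)
# bundled with an extensional one (a flagship example).
# All objects and attributes below are invented for illustration.

def zaxlebax_intension(obj):
    """The intensional half of the definition: 'a metallic sphere'."""
    return obj["material"] == "metal" and obj["shape"] == "sphere"

# The extensional half: 'like the Washington Monument'.
flagship_example = {"name": "Washington Monument",
                    "material": "stone", "shape": "obelisk"}

# The definition conceals a false assumption: its own flagship example
# fails its own predicate.
print(zaxlebax_intension(flagship_example))  # -> False

# Analogously, on the Hansonian thesis: if the central examples of what
# we call "lying" lack conscious intent, the standard intension and the
# common extension of "lie" come apart in the same way.
def lie_intension(obj):
    """Standard intensional definition: conscious intent to deceive."""
    return obj["conscious_intent_to_deceive"]

central_example = {"name": "motivated convenient claim",
                   "conscious_intent_to_deceive": False}

print(lie_intension(central_example))  # -> False: same mismatch
```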

Arguing for this empirical thesis about human psychology is beyond the scope of this post. But if we live in a sufficiently Hansonian world where the ordinary meaning of "lying" fails to carve reality at the joints, then authors are faced with a tough choice: either be involved in the false assumptions of the standard believed-to-be-central intensional definition, or be deprived of the use of common expressive vocabulary. As Ben Hoffman points out in the comments to "Against Lie Inflation", an earlier Scott Alexander didn't seem shy about calling people liars in his classic 2014 post "In Favor of Niceness, Community, and Civilization":

Politicians lie, but not too much. Take the top story on Politifact Fact Check today. Some Republican claimed his supposedly-maverick Democratic opponent actually voted with Obama's economic policies 97 percent of the time. Fact Check explains that the statistic used was actually for all votes, not just economic votes, and that members of Congress typically have to have >90% agreement with their president because of the way partisan politics work. So it's a lie, and is properly listed as one. [bolding mine —ZMD] But it's a lie based on slightly misinterpreting a real statistic. He didn't just totally make up a number. He didn't even just make up something else, like "My opponent personally helped design most of Obama's legislation".

Was the politician consciously lying? Or did he (or his staffer) arrive at the misinterpretation via unconscious motivated reasoning and then just not bother to scrupulously check whether the interpretation was true? And how could Alexander know?

Given my current beliefs about the psychology of deception, I find myself inclined to reach for words like "motivated", "misleading", "distorted", &c., and am more likely to frown at uses of "lie", "fraud", "scam", &c. where intent is hard to establish. But even while frowning internally, I want to avoid tone-policing people whose word-choice procedures are calibrated differently from mine when I think I understand the structure-in-the-world they're trying to point to. Insisting on replacing the six instances of the phrase "malicious lies" in "Niceness, Community, and Civilization" with "maliciously-motivated false belief" would just be worse writing.

And I definitely don't want to excuse motivated reasoning as a mere ordinary human failing for which someone can't be blamed! One of the key features that distinguishes motivated reasoning from simple mistakes is the way that the former responds to incentives (such as being blamed). If the elephant in your brain thinks it can get away with lying just by keeping conscious-you in the dark, it should think again!
