Note: I am a bad writer. If you want to help me by ghostwriting, reviewing or coaching, please reach out! I have a full stack of things to share, and I can pay well.

I’ve gotten quite a bit of feedback on my last post: “Lying is Cowardice, not Strategy”. Nice! I’ve been happy to see people discussing public honesty norms.

Some of the feedback was disappointing; people said things along the lines of: “Lies of omission are not lies! The fact that you have to add ‘of omission’ shows this!”

The worst that I have gotten in that vein was: “When you are on the stand, you take an oath that says that you will convey the truth, the whole truth, and nothing but the truth. The fact that you have to add ‘the whole truth’ shows that lies of omission are not lies.”

This was from a person who also straightforwardly stated, “I plan to keep continuing to not state my most relevant opinions in public”, in defense of this behavior.

But I have also received some helpful feedback that pointed out the lack of nuance in my post. Indeed, reality is complex, and I cannot address all of it in one post, so I have to make a lot of simplifications.

Furthermore, I am not (yet?) a good writer, so even within one post, I have many opportunities for improvement. While some people take advantage of this through isolated demands for rigor or empty terminological disputes, others are genuinely confused by a novel point or framing they had never considered, coming from a writer whose mind they do not have full access to.

This post is dedicated to the latter audience: people who tried to understand and/or apply my ideas, but are confused, because there is a lot that I have just not written. To help with this, in this post, I’ll go through some of the less straightforward points that I have glossed over.

Point #0: What do I mean by lies? Are lies always bad?

When I say that someone lies, I mean that they have communicated anti-information: information that predictably makes people become more wrong about the world or about what the speaker believes. This includes lies of commission, lies of omission, misdirection, and more.

Even though lying is bad, there can be good reasons to lie. The obvious example is lying to a murderer at your door who asks where your friend is. There are also situations where lying is innocuous, such as bluffing games.

Likewise, even though punching people is bad, there can be situations where it’s justified, such as self-defense. There are also situations where punching is innocuous, such as sparring at a boxing club.

Furthermore, communication is hard. That’s why you can even lie accidentally. It is not always easy to know how your statements will be taken, especially across cultures and contexts. This is very similar to insulting people accidentally.

Finally, it is extremely easy to lie without saying anything that is technically false. You can imply things, omit relevant information, make things sound worse than they are, use doublespeak and ambiguity, etc.

Point #1: Predictable patterns of lying are what makes someone a liar

I write about “liars”; this might seem like meaninglessly strong moral language.

So let’s make it clear what I mean by it. When I say “liars”, I am talking about people who have a predictable pattern of lying.

The relevant fact is not whether what a person states can be construed as technically true or technically false, but whether they predictably make people more wrong over time.

Why is it useful to have a concept of “liars”? Because it helps us build practical models of people. Everyone lies. But sometimes, people predictably lie, and you can deduce a fair amount from this.

Being a liar is not an intrinsic property of a person. The world is not divided into liars and non-liars.

Some people lie while they work at a specific company. This makes sense, as they might get fired for not lying, and expressing themselves honestly could get their colleagues, employees, or managers in trouble. Regardless, it is bad, and a strong signal that the person should leave the company.

Most people (including me) lied way too much when they were children. This is normal: at some point, we need to learn communication norms, what people expect, the real-life consequences of lies, the benefits of honesty, etc.

Many people regularly lied as teenagers. Social pressure, and everyone around you constantly exaggerating, will push you to do this.

Sometimes, it should even be ok to lie. For example, by default, it should be socially ok to lie to people about your private life. Even more specifically, the public does not deserve to know whether a given famous person is gay or not. Since it is not always possible to deflect unjustified social prying without lying, we should defend people’s right to lie in those situations.

Instead of “liar” being an intrinsic or intentional property of a person, I use “liar” to refer to someone who has a particular pattern of behavior. If you ever hear me say “X is a liar”, by default, I will be referring to a specific pattern of lying. As in, “For now, you can predict that listening to X about [Y] will lead you to form less accurate beliefs about the world and/or what X thinks”.

If I ever want to make a claim about an intrinsic property of a person as a whole, I will instead say something closer to “X doesn’t care about lying”, “X is a psychopath”, “X is a pathological liar”, or something along those lines.

Point #2: People often lie because they fail to reflect

People often lie out of a willful lack of reflection: they explicitly avoid thinking clearly about a topic, which makes them predictably unreliable. Because they are usually emotionally invested in the topic, they will also make it costly for you to call them out. This is something I am personally sensitive to, but that most people find normal.

For instance, I once worked with a person who missed self-assigned deadlines 5 times in a row. Missing self-assigned deadlines was not an isolated incident for them, but 5 times in a row was a first. And each time, the person assured me that the new deadline would definitely be met.

From my point of view, this is very bad. At some point, you are either lying to me about the deadlines (e.g., by being overly optimistic to look good) or you are lying to yourself. Most people can realize, after missing 4 deadlines in a row, that they are miscalibrated. They should just start taking it into account, and at least be slightly less confident the 5th time.

Calling that 5th over-optimistic deadline a “lie” is non-standard. A more palatable name in real life is “bullshit”, as in, “You are bullshitting me”. However, I find it practical to call these lies, as this is the easiest way to convey that the person could simply stop saying false things.

I have had many discussions about this, and upon reflection, many people ended up agreeing with this frame and started using it.

Internally, I think of these lies as irreflexive lies: they arise from a lack of reflection. You don’t need much introspection or reflection to realize that you are lying and to stop doing it.

Point #3: People can be honest and consistently lie

Sometimes, a person will show a pattern of consistent lying despite being honest to the best of their ability.

This sounds strange. How can someone consistently convey anti-information, if they are honest to the best of their ability? This seems crazy.

The key thing here is that, quite often, people act more as representatives of their social groups than as individuals with their own beliefs.

An extremely common case is when an honest person belongs to a group that is itself explicitly dishonest. Think of ideological groups, or people playing some “inside game”. Honest people in these groups will come up with reasonable arguments for why some beliefs or actions of the group are wrong. However, the group has so much experience with honest newcomers expressing reasonable concerns that it already has a large bank of cheap and efficient counter-arguments for almost all of them.

In this situation, most honest people are very bad at recognizing that these nice-sounding counter-arguments are the result of adversarial optimization: dishonest groups spend a lot of energy dealing with honest members. But most of that energy is spent before a specific honest member even joins the group, by preparing counter-arguments.

As a result, from the point of view of the newcomer, nothing is happening; the group is not being particularly adversarial. Indeed, even if you are observant, you can only see traces of it having happened in the past.

Let’s unfold this “adversarial optimization” a little:

1) Newcomers come up with many concerns. Most of these stem from ignorance; they are easily addressed misconceptions about the group.

2) Mixed with these naive misconceptions are some actually reasonable concerns. But addressing those concerns would be expensive: you would need to change things. As a result, it is often cheaper to just make some ad-hoc counter-arguments: arguments meant specifically to deal with the expressed concerns. These counter-arguments might not be great, but they just need to be good enough to justify the status quo against some newbie.

3) Over time, the group builds a big library of counter-arguments that justify the status quo. Furthermore, as time passes, the group comes up with variants of the counter-arguments, and the most effective ones are remembered.

As a result of this process, the group becomes better at dealing with surface arguments from the outside world, but through ways that are not about truth-seeking. So after joining a group, the honest newcomer will be convinced by all these optimized responses to their concerns, which all seem reasonable and well thought out.

But from the outside, when you look at this newcomer repeating the group’s counter-arguments, it is obvious that these arguments result from motivated reasoning: you can predict where they will end up based solely on the group affiliation, or on what would be convenient for the group. You can usually predict that inconvenient positions will be thoroughly taken down, obfuscated, and more.

At that point, it doesn’t matter that the group member is honest. By merely repeating the opinions that won out in their group, they are stating and perpetuating lies, as these opinions are more representative of internal group incentives than of truth-seeking.

Their behavior will conceal the weak points of the group, and they might even state plain falsehoods if the group has found convenient justifications for them.

At this point, their good intent has truly stopped mattering: they are merely PR-washing their group’s lies.

Unfortunately, they will not recognize that they have become an avatar of the group. They will believe that they thought about the topic by themselves, brought concerns to the group, and were rationally convinced by the counter-arguments.

Subpoint: Life is hard and unfair.

If your reaction to this is “But this is true of any group, not just cults and extremists!”, you’re correct and well-calibrated.

This does not invalidate my point, to be clear. Real life is just hard like that: groupthink will make even the most honest person behave like a liar.

More specifically, group epistemics (i.e., the methods for getting groups to think correctly) are hard. Strong group epistemics require punishments: if you don’t punish the wrong behaviors, then you get more of them.

This is why I am writing this in the first place: I would like more people to be aware of harmful behaviors and call them out.  

Unfortunately, punishing defection feels bad, so it’s not really popular. There are some groups that feel good about punishing defection, or rather, punishing defectors. But they are not really the kinds of groups you want to be in, and they have bad epistemics for less dignified reasons.

Point #4: Lying is just bad

Regardless of all these nuances, a piece of feedback that struck me was “But it’s normal to be dishonest in public!”.

The person who offered this feedback was honest themselves; they just found it normal that others were not. And by “normal”, I don’t mean that they merely felt it was expected; I mean that they felt you should not fault people for it.

This is what I am pushing against. To make it clear: lying publicly is a common strategy, and it works, but it should be faulted.

At the very least, I don’t want this strategy to work around me. This is why I have been very vocal about how to be more resilient against lies. This includes things like:

“Don’t steelman too much: call it out when people state seemingly contradictory things.” 

“Push for conversations to be public rather than for arguments or rumors to be spread in private.”

“If someone’s nice, keep in mind that people might pay attention to them just because they are nice.”

And crucial to being resilient against liars is the understanding that lying should not be normalized. If people lie around you, you should give them shit for it. Don’t excommunicate them, don’t insult them, but do give them shit for lying; they should pay some cost.

Your norms around lying should optimize for getting fewer lies. Not for purifying the world of liars, just for getting fewer lies.

That said, if you don’t give liars shit for lying, then they can lie for free, and by virtue of this, you yourself are making lying into a winning strategy. You make your environment, your social group, your peer group, a place where lying is a good strategy.

Around you, people who lie will have an advantage over people who don’t. Furthermore, your peers will see you not punishing liars and will expect more unpunished defection.

To state it plainly: lying is just bad. If you don’t fight it around you, you’ll get more of it. Sometimes, the situation is so bad that you can’t help but lie, or can’t help but not fight it. Sadly, regardless of the reason, as long as it’s perpetuated and not fought, lying still has its deleterious effects, and will keep spreading.

Point #5: Lies in critical situations

On a small scale, some of this stuff is extreme. Indeed, you do not need to fight every single lie; your time alone with a friend is usually better spent having fun than constantly calling them out on their bullshit. However, on the societal level, when talking about extinction risks and the future of humanity, we need to have extremely high standards against lies.

Here, I'm talking about the fact that if all of the views of the AI safety community and leading AI developers were more frankly represented publicly, we'd be in a much better position to coordinate. It would be obvious to people that the race for superintelligence is an incredibly dangerous gambit and that the people leading AI companies want to take this gambit.

The fact that this is a difficult thing to admit right now is mainly due to the fact that people’s true opinions are not common knowledge. Whenever a key researcher or developer becomes more public or more frank about their beliefs, it becomes easier for people to agree on the risks, and on the fact that racing to AGI is bad.

Point #6: You don’t need to believe someone to get lied to

When I say “fight lying”, it might sound like I am pushing for extreme things. Yet I’d say that most of it is just refusing to get lied to.  But refusing to get lied to is hard.

It starts with acknowledging the problem: that you can, in fact, get lied to. Alas, there is always a natural tendency to believe that you are above this: “Social pressure doesn’t work on me”, “Ads don’t work on me”, and “Propaganda doesn’t work on me” are very common beliefs.

Let’s start with some examples.

If you know someone who told you false things in the past, and you pay attention to what they say about things similar to what they lied about, you are getting lied to.

If you know someone who says different things to different people, and you pay attention to what they tell you, you are getting lied to.

If you know someone who conceals their beliefs publicly, and you pay attention to what they say in private, you are getting lied to.

In situation 1, it might look like you are safe: you know that the person has told you false things, so you will obviously not believe them. But you might not be on bad terms with them (or you might be, but showing it would be awkward). So you entertain them, and talk to them about their arguments.

Well, you got lied to! They successfully conveyed anti-information to you: they told you things that make it more likely for you to be wrong about the world. Whenever events happen, you will notice when they match the stories you were told, leading to illusory correlations.

More generally, you will consider their arguments, and you will see them as more reasonable than others (or at least, more normal). You are also more likely to discuss what they said with others, giving more airtime to their ideas. After all, “it doesn’t matter what they say about you as long as they spell your name right”, “First they ignore you, then they laugh at you, then they fight you, then you win”, etc. Fundamentally, arguments become more potent as they become more salient.

In situations 2 and 3, it might look like you are safe: even though that person lies to or conceals their beliefs from others, they are honest with you. That’s normal: everyone has their façade/mask/tatemae in public, and their authentic/true self/honne in private. Getting access to these private truths is actually evidence that you are close to them.

While this can be true for private matters, this is also an extremely common technique used by liars. Coming up with an explanation for why some important thing cannot be said publicly (or at least not to a wider audience) is very convenient: this is the very thing that lets liars convince each person they interact with that they are honest with them, and not with the others.

Internally, the lying party might not even feel like they are lying: they have different connections with different people, and they just put the emphasis on different parts of their beliefs.

When people adopt such behaviors, the only consistently winning move is not to play. If you give liars attention, you have already lost. 

By giving them attention, you will think about what they told you, and will discuss it. If you dedicate attention to what a liar tells you, they have already won. They don’t need you to “believe” them, they just need their message to become normalized to your ears.

If you have not trained yourself against this, if you have not adopted an adversarial mindset about it, you are most likely being taken advantage of. Let’s go over a couple of specific exploitable attitudes that I constantly see in the wild.

Subpoint: Discounting lies from nice, friendly and smart people

Discounting lies if the person is nice, a friend, or smart. This mistake is unfortunately the most common one. Very often, someone who worries about extinction risks will ask me about a conversation they had with someone from an AGI lab who said promising things.

Then, I will always point out the obvious fact that they are not committing to these things, nor saying them publicly. And finally we get to the punchline: “But they are nice/smart/my friend!” or “I had a drink/party with them and we spoke for hours!”

This is very bad. Not only does this give an easy way out for liars (they just need to be nice, signal intelligence and get close to you!), but it favors lying! Liars are much better than the baseline at being nice, signaling intelligence and getting close to you! 

Lying itself helps with being nice (faking compliments), signaling intelligence (understating your ignorance), and getting close to people (implying genuine interest)!

Liars are good at being nice and signaling smarts. If you discount lies when people are nice, you are trashing most of your ability to recognize lies.

Subpoint: Arguing with liars in private

Arguing with a liar in private. This mistake is unfortunately common with rationalists. Some rationalist will tell me, “I had a conversation with [Obvious Opponent], and they acknowledged my counter-argument! They had actually never encountered it! I think I made some progress with them. Yay to debates!”

Some time later (once, the very next day!), [Obvious Opponent] goes on to publicly ignore whatever counter-argument was made to them. What happened is that [Obvious Opponent] encountered a new counter-argument that they had never prepared for. And now, because they were given some free preparation material, they won’t be surprised the next time.

Because this was done in private, they are not suffering any public cost for not being prepared and not having considered that counter-argument. This is bad, because the public cost (such as being dunked on) is the correct update from the public about [Obvious Opponent] not having factored a relevant consideration in their public position.

Without this public ritual, [Obvious Opponent] can hold their opinion, free of charge. The next time they encounter this point publicly, they’ll already have a counter-argument ready for it. It will even look like thinking about the point helped [Obvious Opponent] form their beliefs, even though this happened completely after the fact.

When people look at [Obvious Opponent], they will think that their opinion came from careful consideration of this point, when it was just [Obvious Opponent] preparing rhetoric after getting an unexpected counter-argument in private.

To summarize, private arguments with liars just give them ammunition for free.

On the other hand, public statements, conversations and debates are great!

Subpoint: Good behaviors

There is a whole lot more that can be done, and when I have more time, I’ll write more about it. 

Specifically, instead of things to avoid, I’d also like to write more about what behaviors are good (help prevent lies), and which are risky (could go both ways).

If you are interested, here is a small, disorganized list of things I plan to write more about:

Good behavior: moving the interaction to a more public space (a group, a larger group, the public at large). This makes it much harder for liars to serve different versions to different people. This is one of the most important community norms to have: there should be strong pressure to make crucial claims, cruxes and choices public. (Likely a full post)

Good behavior: having people commit to concrete, non-fuzzy things, through public statements or writing in general (even DMs). Then you have hard proof.

Good behavior: asking them increasingly concrete and simple questions. If you don’t make things concrete, liars can just weasel their way into looking like they agree with everyone.

Good behavior: making it clear that you will discard whatever liars tell you while they continue their pattern of lying.

Risky behavior: getting information from them. Risky because they are experienced and will still tell you stuff that they want you to think about, normalize, and spread. 

Risky behavior: negotiating. For instance, exchanging information or trading favors. Risky, because liars do not make for reliable trade partners, and they can keep dangling stuff in front of you.

Conclusion

Lying is bad. If you condone it, you’ll have more of it around you. This is true regardless of your intent.

Combating repeated lies isn’t taught, and doing it feels bad. This is why it’s very likely that you are getting lied to, in one way or another.

As a rule of thumb: push for public conversations, and ask very concrete questions. It is much harder to publicly lie on straightforward matters than to privately lie on vague stuff.

More generally, deontology is a strong baseline. There are many rules, such as “Don’t misrepresent your beliefs”, that are just really hard to beat. It is much easier to feel smarter than deontology than to actually beat it. Not to say that it is impossible to beat, just that it is much easier to believe that you have beaten it than to have actually beaten it.

Accordingly: if you see yourself coming up with a good reason why, in your specific case, breaking the rule is ok, you are most likely wrong.

You could try to figure out why you are wrong; you could try to figure out how you came up with an argument for why it is actually ok to lie. But debugging yourself takes time, attention and energy, and you have limited resources. So Just Follow The Rules. Then later, if you want, you can freely introspect, reflect or meditate.

Your limited attention is better spent on building good habits, following good norms and focusing on the important stuff. Trying to make independent utilitarian calculations for each of your choices and perfectly understanding each of your individual rationalizations is not worth it.

Comments

aphyer:

[Quoting the post’s passage above about lies of omission, ending with the person who stated “I plan to keep continuing to not state my most relevant opinions in public”.]

What are your...let's say 5...most controversial political viewpoints that you would get in most trouble (with family/friend groups/employers/government/etc.) for expressing?

You might prefer not to answer this question.  Tough luck!  Lies of omission are lies too, and you shouldn't say them.

You might prefer to have controversial conversations in private with people you trust, away from Twitter frenzies and the pressure to dunk on opponents. Tough luck! You see, “the public cost (such as being dunked on) is the correct update from the public” about you holding these positions, and “Without this public ritual, [you] can hold [your] opinion, free of charge.”

This may not be what you intended to say in this piece.  But it does appear to be what you actually said.

A reply from another commenter:

Thanks, this nicely encapsulated what I was circling around as I read it. I kept reaching for much more absurd cases, like "Mr. President, you're a liar for not disclosing all the details of the US's latest military hardware when asked."

Even aside from that... I'm a human with finite time and intelligence; I don't actually have the ability to consistently avoid lies of omission even if that were my goal.

Plus, I do think it's relevant that many of our most important social institutions are adversarial. Trials, business decisions, things like that. I expect that there are non-adversarial systems that can outperform these, but today we don't have them, and you need more than a unilateral decision to change such systems. Professionally, I know a significant amount of information that doesn't belong to me, that I am not at liberty to disclose due to contracts I'm covered under, or that was otherwise told to me in confidence (or found out by me inadvertently when it's someone else's private information). This information colors many other beliefs and expectations. If you ask me a question where the answer depends in part on this non-disclosable information, do I tell you my true belief, knowing you might be able to then deduce the info I wasn't supposed to disclose? Or do I tell you what I would have believed, had I not known that info? Or some third thing? Are any of the available options honest?

Another commenter:

Preface: I'm mostly thinking out loud here due to personal interest, and I'm a bad writer myself. This may be outside the scope of what you are trying to say, or put excessive pressure on your ideas for the purpose of addressing rare edge cases with little benefit. Feel free to discard any or all of the following:

I'm curious how far you're taking the idea that truth is less harmful than lies. Infinitely? I have personally asked myself the types of questions which have driven people to suicide or existential crisis, and eventually arrived at some of the most uplifting truths I now know. On the other hand, this world is riddled with lies (and I think that some of them are elephants in the room). You can get yourself or other people killed just by stating the truth. I'm under the impression that the average person knows a few such things, but maybe that's not the case. Said axiomatically: some truth seems like it's not aligned with humanity at all, and some truth only appears to be aligned with a subset of humanity (and these subsets tend to have friction between them).

A thing which interests me personally is how some truths sound terrible only because of other lies:
"You're an egoist", "No, how dare you say such a thing!", "It's a tautology that you always do what you think you should do in that moment. I didn't say that you harmed other people for your own benefit. Actually, you're likely doing other people good for your own benefit, it's probably the case that your ego wants to help other people, making you moral rather than immoral."

But it's extremely likely that any truthful statement is taken to be an opinion or evaluation, that other people doubt the intention behind your truthfulness, and even that they claim you're lying (because they disagree with what they think you imply). An easy example is saying that (universally disliked person) is intelligent, since "intelligent" is often regarded as a compliment rather than as a description. The hidden truth here is that intelligence and morality correlate less than we want to believe, and this is because of the hidden false belief that morality must be correct or valid (that the inherent value of preferences isn't enough for comfort).

By the way, I agree that AI is dangerous (not only that, this danger is trivially true). I actually think that stopping all technological growth might be desirable soon, or at least that we should be selective about future advancements.

Finally, I have to disagree with "Push for public conversations" if that entails lowering the barrier to entry. Every topic requires a level of intelligence and knowledge before one can engage with it in a productive manner. If you talk in a way that only intelligent and knowledgeable people understand, then you automatically filter out most who are unqualified. If such people were to read this comment, or scan it for tokens which are taboo or outside of the Overton window, they'd not find much, and thus not engage with me in a hostile manner. Abstracting to a higher scope than both sides of a culture-war subject ("The majority is not necessarily correct") seems better than a specific statement which is easier to understand but more likely to look like it's taking a stance ("Bullying is not ideal", "Cancel culture is not ideal").

Another commenter:

A few confusions I had when reading the central definition of "lie" used in this post:

"When I say that someone lies, I mean that they have communicated anti-information: information that predictably makes people become more wrong about the world or about what the speaker believes. This includes lies of commission, lies of omission, misdirection, and more."

Predictable by whom, under what circumstances? This makes quite a large difference to the meaning.

Certainly not by the speaker at the time, or it would be impossible to lie inadvertently (which is also a highly non-central use of the word "lie", just in case you weren't aware of that).

Certainly not by the listeners, because if they could predict it then they would be able to discount the communication and therefore not become more wrong.

Is it some hypothetical person who knows the true state of the world? I guess that would fit, but it can't be applied in practice, and it would be very strange to say that something is "predictable" when nobody in the world could predict it.

Maybe just the speaker, but after receiving additional information? Then it becomes conditional on what information they receive. Maybe just the fact that there exists information that they could receive, that would allow them to predict it? But that's even worse, because it may depend upon information private to the listener or possibly not known to anyone.

Maybe it's predictable to the speaker in the presence of information that they already know, but don't necessarily realize that they know? Or maybe a "jury of their peers" in the sense that the additional information required is generally known or expected to be known? That makes it rather subjective, though, which isn't ideal.

So no, I'm still not really clear on exactly what this definition means in its important highlighted term, due to a lack of referent.
