Lying and censorship are both adversarial games: each is applied by some to gain an advantage over others.

All else being equal, one is harmful because it spreads disinformation; the other is harmful because it suppresses the spread of accurate information. When a person or group justifies using either, it is normally in the context of achieving a greater benefit, or of responding in kind to someone else's use. This mirrors war: war is inherently destructive too, yet it can sometimes be used to accomplish good things and to punish or deter bad ones.

Treating lying and censorship as war, though, highlights not just that civil discourse is in a constant state of war, but that it is in a constant state of war crime. Lying to the masses is a form of indiscriminate attack, while censorship operates like a blockade, locking away potential benefits from everyone. Lies and censorship have collateral damage, and should only be considered legitimate when proportionality is weighed and collateral damage is mitigated. It should not be tolerated when friends and political allies lie to you about political opponents to boost support, because they are treating you as an adversary or a useful idiot. The more acceptable it becomes to lie and censor for effect, divorced from any consideration of proportionality or collateral damage, the more societal trust is destroyed and the harder it becomes to initiate mutually beneficial cooperation. When insincere and unserious discourse and analysis is accepted (from one's own side), b.s. proliferates, and it simultaneously provides stronger justification for the powerful to censor arbitrarily and in a biased manner that favors themselves. To get to a state with less lying and censorship, I wonder whether the history of the decline of war offers any lessons on a path to “epistemic peace.”

You can't easily get to a cooperative equilibrium when there are many actors that can independently choose to cheat and defect for their own advantage. To negotiate with trust that something good will happen, you need fewer actors. While it was comparatively easy for societies and states to monopolize the use of force, prohibit murder, and set rules of war with each other, it is much more arbitrary to prohibit “lying” or to get large groups to agree on what is unacceptable beyond extremely legible and circumscribed instances such as fraud. In broader epistemic conflict over what is true, you need large coalitions that vet their own members and throw out their “epistemic war criminals” rather than promoting liars and censors. In interpersonal conflict, you need communities that vet people and eject or punish the dishonest and the censorious. Basically, you need a set of norms and rules for legitimate means of punishing lying and censorship that scales into an equilibrium where lying and censorship are extremely rare.
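(As an illustrative aside, here is a toy version of the underlying coordination problem: in a standard N-player public goods game where contributions are multiplied by a factor r < N and shared equally, each actor does better individually by withholding, even though universal defection leaves everyone worse off. The numbers below are arbitrary and only meant to make the incentive gap concrete.)

```python
# Toy N-player public goods game (illustrative; parameters are arbitrary).
# Each player contributes 1 or 0; contributions are multiplied by r and
# split equally among all N players.

N, r = 10, 3.0   # r < N, so free-riding pays at the individual level

def payoff(my_contribution, others_contributing):
    pot = r * (my_contribution + others_contributing)
    return pot / N - my_contribution

# Suppose 9 others contribute. Compare contributing vs. defecting:
print(payoff(1, 9))   # cooperate: 3.0 * 10 / 10 - 1 = 2.0
print(payoff(0, 9))   # defect:    3.0 *  9 / 10     = 2.7  (defection pays)

# Yet if everyone reasons this way, everybody gets 0 instead of 2.0:
print(payoff(0, 0))   # universal defection: 0.0
```

Norms that reliably punish defectors are the standard way such games get pushed back toward cooperation, which is roughly the role the "epistemic war criminal" norms above would play.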

There will still be asymmetries in the ability to lie and censor that can be abused, but with norms around lying and censorship analogous to rules of war, otherwise honest people can cooperate to lie to abusers and wannabe authoritarians. Such lies, directed at much narrower targets who intend harm, are far more proportional than the largely indiscriminate methods that get spammed to larger audiences. They can nevertheless still become indiscriminate in their consequences (e.g., lying about quotas to appease an authoritarian and causing economy-wide shortages).

It seems really critical to solve these problems quickly. Deepfakes, along with other AI-optimized deception, will only make them worse. With such high-fidelity deception, even people who actually try to figure out what is true will increasingly be unable to find evidence they can trust. Dark times are ahead; it is time to start building collaborative truth-seeking systems that can scale.


You might enjoy the literature on public goods games examining questions like what causes metastability amongst cooperation and defection cluster formations.

https://arxiv.org/pdf/1705.07161.pdf

https://royalsocietypublishing.org/doi/pdf/10.1098/rsif.2012.0997

https://apps.dtic.mil/dtic/tr/fulltext/u2/a565967.pdf
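For readers who haven't seen these models, here is a minimal sketch (my own toy version, not taken from the papers above) of a spatial public goods game with Fermi imitation on a lattice; the parameters L, r, K, and the number of sweeps are arbitrary. Near the critical synergy factor, cooperators persist only by forming clusters, which is the sort of metastable pattern this literature studies.

```python
# Minimal spatial public goods game on an L x L torus (toy sketch).
import numpy as np

rng = np.random.default_rng(0)

L = 30        # lattice side length (L x L players)
r = 3.8       # synergy factor applied to pooled contributions
K = 0.5       # noise in the imitation (Fermi) rule
sweeps = 100  # Monte Carlo sweeps

# 1 = cooperator, 0 = defector, assigned uniformly at random
strategy = rng.integers(0, 2, size=(L, L))

def group(x, y):
    """The group centred on (x, y): the site plus its four lattice neighbours."""
    return [(x, y),
            ((x + 1) % L, y), ((x - 1) % L, y),
            (x, (y + 1) % L), (x, (y - 1) % L)]

def payoff(x, y):
    """Total payoff of the player at (x, y), summed over the five groups it belongs to."""
    total = 0.0
    for gx, gy in group(x, y):                 # groups centred on self and on each neighbour
        members = group(gx, gy)
        n_coop = sum(int(strategy[m]) for m in members)
        total += r * n_coop / len(members)     # equal share of the multiplied pot
        total -= strategy[x, y]                # cooperators pay 1 into each group
    return total

for t in range(sweeps):
    for _ in range(L * L):                     # one sweep = L*L imitation events
        x, y = rng.integers(0, L, size=2)
        nx, ny = group(x, y)[rng.integers(1, 5)]   # pick a random neighbour
        if strategy[x, y] != strategy[nx, ny]:
            # copy the neighbour with Fermi probability (higher payoff -> more likely copied)
            p = 1.0 / (1.0 + np.exp((payoff(x, y) - payoff(nx, ny)) / K))
            if rng.random() < p:
                strategy[x, y] = strategy[nx, ny]
    if t % 10 == 0:
        print(f"sweep {t:3d}: cooperator fraction = {strategy.mean():.3f}")
```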

I share your overall concern, but have a few disagreements with this post:

--I don't think deepfakes are as big a deal as lying and censorship, and I think those aren't as big a deal as bias+filterbubbles.

--I think lying and censorship should be compared more to the threat of violence than to violence itself, since both are general-purpose tools for shaping people's behavior. (Violence shapes people's behavior too, but it isn't really a tool for doing so, because you can't fine-tune the way you do the violence to get the target to behave in specific ways; insofar as you can, it's because you are threatening them with more violence unless they behave the way you want.) On this view, the situation is more grim. War may have ended between great powers, but the threat of violence hasn't. And all the people in the world live under constant threat of violence from whatever government controls where they live; this is what laws are. So, this predicts that as ideological conflict becomes more polarized and congealed around a small number of mega-ideologies, most people in the world will live in the "territory" of one such mega-ideology, and within each territory there will be pervasive lying and censorship.

That said, I like the idea of comparing lying and censorship to acts of violence with collateral damage. They do have collateral damage, after all. (Unlike, perhaps, the mere threat of violence?) I also like the idea of using large institutions to self-police, e.g. two mega-ideologies could agree to not censor each other or something. For some reason this seems unlikely to work to me, but I'm not sure why. Maybe because it feels analogous to two great powers agreeing to have open borders with each other. You are giving up your main defensive advantage!

I also like the idea of using large institutions to self-police, e.g. two mega-ideologies could agree to not censor each other or something. 

This seems to me confused. What do you mean by mega-ideologies that are large institutions? As I understand the term ideology, ideologies aren't institutions.

When it comes to real-world large institutions, the majority of censorship is internal, not external.

Yeah, most ideologies are not institutions, good point. (For our purposes, what matters is whether they have enough agency to make deals with other ideologies. A Supreme Leader or other central power structure would do the trick, but it's not the only way.) So then I should rephrase: I'm intrigued by the idea of getting ideologies to have more agency so they can make win-win deals with each other. Just as warring tribes of humans benefit from having national governments that can coordinate when to fight and when to negotiate, so too might culture-warring ideological tribes of humans benefit from... etc.

Having to speak according to a party line that was decided in some deal seems to me like being censored. The act of making a deal that involves not saying certain things, like "People who say X are awful people who should be shunned," inherently involves censorship.

A tribe of humans who hold an ideology is not the same as the ideology itself. In a tribe of humans there are always divergent opinions. The more people are pressured to say the same thing, the more censorship there is, under most definitions of censorship.

In the US, legally prohibiting lying would be unconstitutional (US v. Alvarez), and for good reason: I certainly don't trust our political leaders to adjudicate what is a lie and what is not.

It's illegal to prohibit all lying, but that doesn't mean there aren't a variety of areas where lying can be forbidden.

US v. Alvarez just said that you can't make content-based rules that forbid lies that produce no harm. 

Part of what the Long-Term Stock Exchange is about, for example, is preventing companies from telling certain lies by turning those forbidden lies into securities violations.

There are other legal mechanisms such as affidavits that could be scaled up to bring more interactions into a space where lies are legally punishable. 

US v. Alvarez doesn't make any distinction between prohibitions on lying that are content-based and prohibitions on lying that are content-neutral. And I don't think you can make such a distinction: any prohibition on lying necessarily permits a person to assert the negation of whatever is prohibited, and would therefore necessarily be content-based.

It is certainly true that US v. Alvarez allows a lot of specific prohibitions on lying in contexts where there is concrete harm. I just took the post to be arguing for a broader prohibition on lying, a prohibition on all lying, which I think would be clearly unconstitutional under US v. Alvarez. Could we expand the contexts in which a legal prohibition applies? Possibly, to some degree, but I don't think the very abstract, metaphorical war that the post talks about would be a harm that any court would recognize, and I'm not sure the kinds of narrow prohibitions available would address the post's concerns.

US v. Alvarez speaks about content-based restrictions a lot:

Respondent challenges the statute as a content-based suppression of pure speech, speech not falling within any of the few categories of expression where content-based regulation is permissible. 

The summaries you find online at https://www.oyez.org/cases/2011/11-210 and https://supreme.justia.com/cases/federal/us/567/709/#tab-opinion-1970529 also speak about it being about content-based restrictions.

The prohibition against lying under oath is content-neutral, and nothing in US v. Alvarez seems to have a problem with it, regardless of whether the lie causes harm. Lying to police officers is also criminalized whether or not the lie actually produces harm.

A podcaster or journalist could make legally binding commitments that he isn't lying by making affidavits. If enough people back up their statements with legally binding commitments to tell the truth, statements by people who aren't willing to do so become suspect.

The Long-Term Stock Exchange gets companies to make a bunch of legally binding commitments that reduce their ability to lie, by turning violations of those commitments into securities violations.

You've misunderstood my claim. But since you want to go into the legal technicalities, let's go there. There actually was no majority opinion in US v. Alvarez. There was an opinion by Justice Kennedy for himself and three other justices, which talks a lot about content-based discrimination. The idea here is that prohibitions on lies are a subcategory of content-based regulation. Suppose there is a statute prohibiting me from lying about how many chairs there are in this room, and I assert that there are three chairs in this room, when there are in fact only two. I have violated the statute. But had I made a different claim on the same topic, had I asserted that there are only two chairs in this room, I would not have violated the statute. That makes the statute content-based.

The controlling opinion in US v. Alvarez is actually the opinion by Justice Breyer, not Justice Kennedy, and Justice Breyer more or less skips over the whole issue of whether it is content-based, but ends up applying heightened scrutiny anyway. According to Justice Breyer's controlling opinion, regulations of false speech in areas that “would present a grave and unacceptable danger of suppressing truthful speech”, such as “philosophy, religion, history, the social sciences, the arts, and the like”, get strict scrutiny. Regulations of “false statements about easily verifiable facts that do not concern such subject matter” get intermediate scrutiny, which means they still might not be constitutional.

Both opinions recognize that there are a lot of specific categories of lies, such as perjury, which you mention, that are generally thought to be proscribable, and which US v. Alvarez does not touch. Neither opinion suggests that these categories of lies are somehow content-neutral. Even for content-based regulations, courts have to ask whether the government has a compelling interest in prohibiting the speech, and whether the prohibition is narrowly tailored to that compelling interest, before declaring a prohibition on speech unconstitutional. There are a variety of other exceptions to free speech that the Supreme Court has recognized over the years (defamation, true threats, incitement of imminent lawless action, etc.).

The idea with many of the categories of presumably proscribable lies mentioned in US v. Alvarez is that these categories of lies are proscribable because they generally cause significant harms, even though they are content-based. This is how Justice Breyer puts it: "I also must concede that many statutes and common-law doctrines make the utterance of certain kinds of false statements unlawful. Those prohibitions, however, tend to be narrower than the statute before us, in that they limit the scope of their application, sometimes by requiring proof of specific harm to identifiable victims; sometimes by specifying that the lies be made in contexts in which a tangible harm to others is especially likely to occur; and sometimes by limiting the prohibited lies to those that are particularly likely to produce harm."

Since you mentioned perjury and lying to cops specifically, here is what Justice Breyer has to say about that: "Perjury statutes prohibit a particular set of false statements—those made under oath—while requiring a showing of materiality. See, e.g., 18 U. S. C. §1621. Statutes forbidding lying to a government official (not under oath) are typically limited to circumstances where a lie is likely to work particular and specific harm by interfering with the functioning of a government department, and those statutes also require a showing of materiality. See, e.g., §1001."

The point: US v. Alvarez actually is a serious impediment to any prohibition on lying aimed at improving the general epistemic environment of public debate, and for good reason: any such prohibition has to be enforced by the government, and allowing the government to decide what counts as a lie is a recipe for censorship. People like Donald Trump sometimes win elections; do you want him deciding what counts as a lie and is therefore prohibited?

This argument about affidavits seems wrong to me too. I've never heard of an affidavit being used in a context where there wasn't the idea of the document being used in a court proceeding, and I'm not sure such a thing would be allowed. Can you please cite a particular statute that you think would allow a podcaster to legally bind himself with an affidavit? And if such a thing did become common, do you think courts would be willing to be the arbiters of which statements were true in podcasts, or do you think they would be unwilling to enforce what they would (rightly in my view) see as a misuse of a tool intended to protect only their own integrity? The latter seems much more likely to me.

I did a bit of research, and it seems I projected too much from non-US contexts onto the US context. While you can make affidavits outside of court if you get a notary to witness them, it doesn't seem to be punishable if the affidavit never goes to court.

But even without that particular mechanism you likely can set up contracts that allow an organization to punish you when you lie. 

It seems like Eric Ries's claim that securities fraud is the only real crime in the US that's on the books is more true than I initially assumed when I heard it. Lying to your investors is an enforceable crime.

Practically, you likely wouldn't want to go to US courts anyway. You could have a website like Patreon that has as one of its rules that it punishes its members for lying, and then withholds revenue from them if they lie. Contractual freedom is quite broad in the US.

Contract law could be much more workable, yes, especially if the contract specifies some private entity, not a judge, to be the arbiter of what is a lie.

I'm unclear about what you mean with censorship here. If my friend tell me a secret about how he's unhappy at work, is his demand for me to keep that secret an act of war according to your standards?

While it can depend on the specifics, in general censorship is coercive and one-sided. Just asking someone not to share something isn't censorship; things are more censorial if there is a threat attached.

I don't think it is bad to only agree to share a secret with someone if they agree to keep the secret. The info wouldn't have been shared in the first place otherwise. If a friend gives you something in confidence, and you go public with the info, you are treating that friend as an adversary at least to some degree, so being more demanding in response to a threat is proportional.

While there's no explicit threat attached to most requests to keep information secret, rejecting those demands can still incur costs.

In a work environment, if you violate people's demands to keep information secret, you might lose chances of being promoted, even if no one ever explicitly threatened that.

Most censorship in China operates not through explicit threats but through expectations that one will not publish certain information.

Censorship can only be done by the powerful, who often lie at the same time. So I don't think it's wise to think of it as an antidote to lying.

It's not an antidote, just like a blockade isn't an antidote to war. Blockades might happen to prevent a war or be engineered for good effects, but by default they are distortionary in a negative direction, have collateral damage, and can only be pulled off by the powerful.

Interesting post.

You or other readers might also find the idea of epistemic security interesting, as discussed in the report "Tackling threats to informed decisionmaking in democratic societies: Promoting epistemic security in a technologically-advanced world". The report is by researchers at CSER and some other institutions. I've only read the executive summary myself. 

There's also a BBC Futures article on the topic by some of the same authors.

Dark times are ahead.

When it comes to epistemic warfare, they are already here, confidence 85%.  (People on both sides of a given political divide would agree that they are already here, of course for different reasons.  Pro-choice and pro-life; Brexit and Remain; Republican and Democrat.)

when considerations of proportionality and mitigating collateral damage are applied

Do you have a more concrete model for when x units of censorship/lying are appropriate for y utils/hedons/whatever?  Not a trick question, although I doubt any two people could agree on such a model unless highly motivated ("you can't come out of the jury room until you agree").  The question may be important when it comes time to teach an AI how to model our utility function.  

My intuitive model would be "no censorship or lying is ever appropriate for less than n utils, and p units of censorship or lying are never appropriate for any number of utils short of a guaranteed FAI".  And then...a vast grayness in the middle.  n is fairly large; I can't think of any remotely feasible political goals in the U.S. that I'd endorse my representatives lying and censoring in order to accomplish.  

I'd endorse widespread lying and censorship to prevent/avoid/solve a handful of seemingly intractable and also highly dangerous Nash equilibria with irreversible results, like climate change.  We'd need to come up with some Schelling fences first, since you wouldn't want to just trust my judgment (I don't).

When it comes to epistemic warfare, they are already here, confidence 85%.

The term "dark times" is relative. Seeing darkness in present times doesn't mean that the future won't be darker.

I think you need legible rules for norms to scale in an adversarial game, so they can't be rules based directly on utility thresholds.

Proportionality is harder to make legible, but when lies are directed at political allies, that's clearly friendly fire or betrayal. Lying to the general public also shouldn't fly; that's indiscriminate.

I really don't think lying and censorship are going to help with climate change. We already have publication bias and hype on one side, and corporate lobbying plus other lies on the other. You probably have to take another approach to gain trust/credibility when joining the fray so late. If there had been greater honesty and accuracy, we'd have invested more in nuclear power a long time ago, but now that other renewable tech has descended the learning curve faster, different options make sense going forward. In the Cold War, anti-nuclear movements generally got a bit hijacked by communists trying to make the U.S. weaker and to shift focus from mutual to unilateral action... there's a lot of bad stuff influenced by lies in the distant past that constrains options in the future. I guess it would be interesting to see which deception campaigns in history are the most widely considered good and successful after the fact. I assume most are related to war, such as the Allied deception about the D-Day landings.

Fair points.  Upon reflection, I would probably want to know in advance that the Dark Arts intervention was going to work before authorizing it, and we're not going to get that level of certainty short of an FAI anyway, so maybe it's a moot point.