After Twitter permanently suspended Donald Trump’s account, some high-profile rationalists and rationalist-adjacent folks came out strongly against the decision. Among those who seem strongly opposed: Eliezer Yudkowsky, Kevin Simler, Naval Ravikant, and Balaji Srinivasan.

This post is about why I think they’re wrong.

1. Does the law allow Twitter to do this?

Yes. Section 230 of the Communications Decency Act gives platforms broad discretion to edit or not edit user-generated content as they see fit. Dozens of courts have held that this includes the ability to permanently ban users or suspend their accounts, for any reason or no reason. For more background, please read Professor Eric Goldman’s excellent Technology & Marketing Law Blog (full disclosure: I’m an occasional contributor to the blog).

2. Does banning Trump stifle free speech?

No. Twitter is a private company, not a state actor. The First Amendment does not apply to its decisions about whom to allow on its platform.

Too many smart people are conflating the rights and responsibilities of state actors with those of private companies.

For example, in response to the ban, early-stage Twitter investor Naval Ravikant tweeted “First they ban accounts. Then, they ban apps. Finally, they block websites.”

It’s important that we disambiguate the “they” in this tweet, since the misleadingly ambiguous pronoun is what gives the statement its purported power.

There’s a critical difference between a state actor banning speech and a private platform doing the same. The First Amendment sets the confines for government limitations on free speech. The law gives private companies broad discretion to decide what kinds of speech to permit within their physical and digital boundaries.

Conflating the two is lazy and misleading.

Donald Trump can still speak freely in public. 

He still has plenty of soapboxes. Just not his preferred soapbox.

3. Is it wrong for Twitter to enforce social norms on its platform?


As a fan of SlateStarCodex, I remember the occasional post where Scott Alexander would give updates on commenters who received temporary or permanent bans for violating the rules and norms of his comments section. Occasionally, folks would kick and scream about those decisions. But Scott wanted to encourage good-faith arguments and civil and charitable conduct in his comments section. Scott understood that un-moderated comments sections often devolve into poisonous conversations. He policed the comments section as he saw fit.

I'm not intimately familiar with the content moderation policies at Less Wrong, but based on an initial Google search, it certainly appears that the powers that be have broad discretion to delete "anything [they] judge to be annoying or counterproductive."

Why should Twitter be judged for doing the same thing? Donald Trump wasn’t removed from Twitter because of an ideological difference of opinion. He wasn’t banned for arguing for higher tariffs on China or restrictive immigration policies. He was suspended because his Twitter feed is a constant stream of poison and misinformation that threatens the entire ecosystem. This was true from the advent of birtherism to his formal ban this week. He has, for months, publicly stated that he would not commit to a peaceful transfer of power. Now that it’s time to relinquish power, he incited a violent insurrection to stop the formal process of transitioning him out of office. If that's not grounds for removal, what is?

If Twitter had banned Trump merely for taking an unpopular ideological stance, I would have opposed the ban. But Twitter banned him for poisoning its ecosystem and repeatedly inciting violence.

4. Isn’t Twitter a monopoly and doesn’t banning him effectively prevent him from engaging in free speech?

No, Twitter is not a monopoly.

Only 22% of Americans have Twitter accounts. 

Almost all Americans have TVs. Trump could give a TV press conference tomorrow and most networks would host it or at least publicize it.

And even if it were a monopoly, that wouldn’t affect the free-speech or Section 230 implications of this ban.

Again, Trump still has plenty of places to disseminate his poisonous message. Just not on Twitter.

5. But what about the slippery slope that will lead to the parade of horribles?

The law makes hard distinctions all of the time. Content moderation systems do, too. Twitter had to adjudicate the facts in front of it. That’s what it did here.

Donald Trump is an evil jackass and a legitimate threat to democracy. Purging him from the system is good for the system. He is actively trying to subvert a 230-year-old system of government. There are no parallels to this situation anywhere and there likely (hopefully) never will be again. 

If Twitter makes another decision in a different context, I reserve the right to disagree with that decision. For example, I was against Google’s decision to fire James Damore for his amateur evo-psych missive. 

But context matters in the world of content moderation. 

6. Twitter’s rules are selectively enforced

Sure. But content moderation at scale is inordinately difficult. There will always be false positives and false negatives.

One of the most notorious examples of content moderation gone bad involves the famous photograph of a naked child in Vietnam running from a napalm attack. A few years ago, Facebook’s algorithms flagged the picture as offensive and inserted a black box over the child’s genitalia.

In almost every circumstance, it is wrong to post naked pictures of children. A rule against such posts would normally be a good thing. But in this instance, the consensus was that this was not the right decision.

Rationalists are inclined to seek out clear, bright-line rules. Unfortunately, content moderation does not lend itself to such clarity. Content moderation must involve ad hoc, fact-based adjudication, because context matters in content moderation. To quote the true expert on this subject, Mike Masnick:

Trump is, perhaps, the perfect example of why demanding clear rules on social media and how they moderate is stupid.

As for the question of why now? Well, clearly, the context has changed. The context is that Trump inspired a mob of goons to invade the Capitol building this week, and there remain legitimate threats that his cultish followers will continue to do significant damage. Certainly some people have insisted that this kind of violence was always a risk — and it was. But it had not actually erupted to this level in this fashion. Again, we’re talking about context. There’s always more context. And given that the situations are always edge cases, that the context always matters, and that things are always shifting, you can totally see why it’s a reasonable decision to ban Trump from their platforms right now, based on everything else going on, and the likelihood that he might inspire more violence.

If you really want to understand this issue, spend a few hours reading Mike Masnick’s historical posts on this subject. 

Is Twitter’s current system imperfect and occasionally unfair? I’m sure it is. But like other imperfect systems, it’s the best we have. Well-intentioned policy-makers have been trying to come up with alternatives to Section 230 for years, but whenever such alternatives are subjected to careful scrutiny, most scholars on the subject conclude that the alternatives are far worse.

7. Will banning Trump and similar norm-violators from mainstream platforms lead to more violence and unrest?

This was an argument by Yudkowsky. I don’t think that’s true, either. What’s dangerous about Trump is that he has made norm-violation mainstream. Maybe a small percentage of wingnuts believed that prior elections were rigged, but these types of allegations from presidents, senators, and US Representatives en masse are a unique feature of the Trump administration. That’s led to a destabilization of the political infrastructure of this country. 

There will always be wingnuts. What is unique about our current situation is how they have gone mainstream.

There’s plenty of evidence that the major platforms’ algorithms have contributed to conspiracy-minded thinking. That their content-moderation policies should attempt to offset some of these effects is not a bad thing. 

There are always going to be services like 8chan where the norm-violators will congregate. But most people don’t want to spend their time on 8chan; they want to be on Facebook, where their friends and family are. 

There will certainly be violence from norm-violators who aggregate on fringe sites in the future. But if we can take steps to reduce the influence of wingnuts in mainstream culture, we can perhaps limit future iterations of kakistocracy like the one we’ve been subjected to for the last four years.

8.  Zuckerberg and Dorsey changed their minds on this issue for a reason

Facebook and Twitter didn’t want to be in this position. Both Zuckerberg and Dorsey advocated for laissez-faire principles of content moderation just a few years ago. But they soon realized that this position was naïve. They realized that to keep their own standing in the community, they needed to ban some norm-violators and moderate some content.

Zuckerberg and Dorsey now know that removing from the ecosystem those whose actions could destroy the ecosystem is a necessary precondition for a functioning ecosystem.

That’s the judgment call Twitter made here.

Donald Trump is a recidivist norm violator, whose norm violations have been as damaging to the United States as any in modern history. Twitter deemed it in the best interests of its ecosystem (and the broader ecosystem) to ban him. 

So be it. We’ll all be better off for it. 


21 comments

Re #2: you’re conflating the First Amendment and free speech. The First Amendment is one particular legal instantiation of the idea of free speech, applicable in limited circumstances in one country. Establishing that there is no First Amendment problem does not establish that there is no free speech problem. And although I agree that there are important differences between government censorship and censorship by private actors, the classical liberal argument for free speech supplies reasons why even private censorship is harmful. You need to engage with these pro-free-speech arguments and explain why they don’t apply here.

Fair point re: #2, but the ultimate point is unchanged. For the same reasons that Less Wrong and SSC engage in content moderation, Twitter does the same. Banning Trump, on balance, will not be harmful.

"Content moderation" is not always a bad thing, but you can't jump directly from "Content moderation can be important" to "Banning Trump, on balance, will not be harmful". 

The important value behind freedom of association is not in conflict with the important value behind freedom of speech, and it's possible to decline to associate with someone without it being a violation of the latter principle. If LW bans someone because they're [perceived to be] a spammer that provides no value to the forum, then there's no freedom of speech issue. If LW starts banning people for proposing ideas that are counter to the beliefs of the moderators because it's easier to pretend you're right if you don't have to address challenging arguments, then that's bad content moderation and LW would certainly suffer for it.

The question isn't over whether "it's possible for moderation to be good", it's whether the ban was motivated in part or full by an attempt to avoid having to deal with something that is more persuasive than Twitter would like it to be. If this is the case, then it does change the ultimate point.

What would you expect the world to look like if that weren't at all part of the motivation? 

What would you expect the world to look like if it were a bigger part of the motivation than Twitter et al would like to admit?

Again, Trump wasn't banned for his ideas. He was banned for actively inciting violence and for a long history of poisoning the well. 

Neither of us know what Twitter's "real" motivations were. Heck, the executives of Twitter might not know what their real motivations were. 

The real question is whether it is proper for a major media platform to remove a major political figure for ostensibly breaking the code of conduct associated with the platform and for actively engaging in incitement to violence. That activity ought not to be protected by free speech or society as a whole.

I think Less Wrong and SSC are in a different situation than Twitter. Twitter is a key place for discussion and dissemination of ideas. It could be argued that Twitter's scale and the functions it serves (organizing protests, etc.) mean it should be treated more as a public good than the SSC comment section, and that its reach gives it more of a responsibility to be very careful about deplatforming voices.

I happen to agree with your conclusion, but I don't think you're addressing what EY said. He tweeted the following:

What America needs now, to heal, is for the left and the right to be on entirely different social networks. Still with the ability to subtweet alleged screencaps from the Other network of Others being outrageous, of course! But with no ability for Others to clarify or respond.

My Translation: I'm worried that banning Trump from twitter will increase polarization because it will make the two tribes more segregated than they were before. This is not that similar to your #7, and otherwise missing from the list entirely.

I also think #8 is unlikely. It doesn't strike me as plausible that the Capitol incident provided any rational person with significant evidence on which to update their view of Trump. On the other hand, public opinion appears to have shifted significantly. A financial motive seems likely here, especially for Zuckerberg.

My Translation: I'm worried that banning Trump from twitter will increase polarization because it will make the two tribes more segregated than they were before. This is not that similar to your #7, and otherwise missing from the list entirely.

I can't prove this isn't true, but I believe it's unlikely given what we know about how the algorithms currently work. To generate outrage engagement, you want to identify ideas that are being shared on social media in various forms, find out which groups of people have increased engagement with the platform when viewing them, and find ways to show that content to those people more often.

Segregating platforms wouldn't fundamentally change this. I'd say it's a wash either way. 

Point 7 is a response to Yudkowsky retweeting on Jan 8 Ryan Lackey's post that said:

"If you wanted to increase the odds of an actual civil war in the next decade, pushing 10-50 mm people into a somewhat segregated communications system actively forced to evolve to resist aggressive censorship is an important first step."


I very much agree, especially with point #8. Communities, online and off, by default start out with little to no moderation. Moderation is added typically only when there are elements that poison the ecosystem, as you put it.

I co-hosted an in-person philosophical discussion group for over a decade. At first we invited everyone to join, then we quickly learned that some styles of discussion destroy good conversation, so we started moderating or even asking people who could not refrain from them to leave the group. It was painful to do, but also necessary to preserve the culture.

A while back I saw some study showing that banning the most toxic subreddits greatly reduced the number of racial slurs on Reddit as a whole. It is for these sorts of reasons that banning toxic users generally and Trump specifically makes sense for Twitter.

I agree with 6 and 7, and I agree with your conclusion in general--removing Trump at this point in time was better for the world than leaving him on the platforms. Let me point out where I see the gap. 

I believe the model that Eliezer, Naval and Balaji are using here would be correct if this was, say, 2015 before Twitter's timeline went algorithmic.

In 2020, when someone talks in a way that presupposes Twitter and Facebook are "speech platforms" similar to writing a blog or a book, the immediate question that comes to mind is whether they've read any Shoshana Zuboff or Jaron Lanier.

Twitter is addictive and Trump is a Twitter addict. To the extent you can blame the existence and marketing of a drug for someone's behavior while they're addicted to it, Twitter, as a behavioral-addiction platform, is very culpable in what happened on January 6th. They're something like the drug dealer, or the Purdue Pharma, of the analogy.

If you're Jack Dorsey, getting the president of the United States addicted to your technology is a big win. As a corporation, Twitter profited off Trump for the better part of a decade, and a significant percentage of Twitter's traffic was dependent on his presence there.

A better analogy is something like getting banned from a casino or getting 86'd from a bar. High rollers sometimes get kicked out of casinos even if they still have plenty of money to spend. In a similar sense this is Twitter saying "you've made us a lot of money, but your presence is starting to detract enough from our other customers that on balance you're no longer valuable to us."


I'm nonplussed by Eliezer, Naval, and Balaji's takes on this. It may be their own use of Twitter that's making it difficult for them to see (I mean, I use it too, but I consciously equate Twitter usage with something like smoking a cigarette in terms of its impact on my health, and I should cut back).

In a way, we would all be lucky if we were to get suspended from Twitter.

This post once had 11 Karma and then went down to 4, so clearly this is not a popular take among rationalists. 

I feel as if too many rationalists struggle to see past the "but what about the slippery slope?" argument and fail to see the evil that's right in front of them.

Your post is a good one and it sucks people are coming down on it that way.

It made me wonder if Eliezer and Jaron Lanier had ever had a conversation before. They did not too long ago and I missed it. -- video is missing from the LW post but is here

I would love to see this happen again with a moderator and some more structure. 

I wonder if this isn't a consequence of a kind of philosophical blind spot in EY's rationalist perspective: to EY, Twitter represents an achievement to pedestalize rather than an albatross that we've bought into and accept because of network lock-in effects.

I used to tell people in college that I "had a two-pack-a-year habit." I would smoke rarely, to strike up conversations, because it was an easy icebreaker when I wanted a conversation. Twitter is like that, but instead of trading seven minutes of your life, you're trading chunks of your humanity.

I mean... I'm jealous of Trump for losing access to his drug of choice. I think it could be a really positive thing for him and for all of us. :)

One point that stuck out to me in the post was that Twitter is only used by 22% of Americans, therefore it’s not a monopoly.

22% is pretty close to the number of Americans who read newspapers just 20 years ago.

Admittedly there is much more competition today among platforms, but given that almost every major platform deplatformed Trump, I think the current situation is in some ways analogous to every newspaper in the US refusing to interview the president 20 years ago.

I endorse @remizidae's comment above, and would like to add the following:

Twitter is a private company and not a state actor. The First Amendment does not apply to decisions about whom it allows to use its platform.

Too many smart people are conflating the rights and responsibilities of state actors with those of private companies.

This is legally relevant but morally irrelevant. The distinction between public and private moderation is not due to some fundamental, ontological difference between government and private oppression, but rather because in the conditions under which the 1st Amendment was originally written, the state was the only actor who could effectively suppress speech across the whole spectrum of society. But this is not the case today! Today, large online platforms are able to suppress speech that they disapprove of at an international scale. Even if "only 22% of Americans have a Twitter account", that still gives them a degree of influence comparable to that of the state, and this concern only gets greater if we realise that adding in a few other common social networks brings coverage to close to 100% and all of these platforms have similar moderation attitudes, which results in a homogeneity of "acceptable discourse" which freedom of speech is supposed to avoid. If we actually value freedom of speech as an actual moral principle and not merely as a legal technicality, then we should absolutely be concerned about censorial powers wielded by private companies.

This strikes me as a weak slippery slope argument. There is no "homogeneity of acceptable discourse" on Twitter. Even after Trump's ban, far-right politicians such as Hawley and Boebert still use the platform. He wasn't removed for ideological reasons. He was banned because he was actively inciting a violent insurrection and aspired to continue to incite such an insurrection.

For (moral) free speech considerations, the question of whether the censor is a private or government entity is a proxy. We care whether a censor has enough power to actually suppress the ideas they're censoring.

The example of SSC moderation is a poor guide for our intuition here, because we should expect to arrive at different answers to "is censorship here OK?" for differently sized scopes. It can simultaneously be fine to ban talking about X at your dinner table and a huge problem to ban it nationally.

If we were to plot venue size against harm to society by the exercise of power to censor that venue, I'd expect some kind of increasing curve. Twitter's moderation policy definitely sits above SSC's. It also sits below, say, the Sedition Act.

Also, the scale of the event we're seeing isn't only Twitter and Facebook. The alternative platform the faction tried to flee to has been evicted by Google, Apple, and Amazon.

The strategy of "apply pressure on every technology company available until they boot your political opponents" is a symmetric weapon. It works just as well for bad intent as for good intent.

In response to your second point re: free speech, a cross-post of a comment I made on Facebook on a related issue:

I'm not from the US, but despite knowing the common counter-arguments, I don't understand how platform censorship is consistent with your 1st amendment.

Technically, the 1st amendment only prevents the government from censoring stuff; in practice, that has IIRC meant that e.g. a recruitment twitch stream by the US military is arguably not allowed to block spam.

And if that isn't allowed, surely a system where any powerful member of government can pressure any private platform holder to censor arbitrary stuff doesn't make sense. All you've done is to add a level of indirection to the government censorship. Here's a story by Glenn Greenwald on the issue of platform censorship, and he ultimately resigned from The Intercept because he got censored while trying to report on the same story, too.

If there is not a "state actor," then the First Amendment does not apply. 

I'm not a First-Amendment scholar. There is literature and case law on this subject, but I wouldn't be able to summarize it well. That said, I'm fairly certain that government officials pressuring private platforms to remove certain content would not implicate the First Amendment. But it is a closer call than the Trump situation.

And, to be clear, I'm not in favor of all forms of platform censorship. I'm simply defending this instance of banning Trump from Twitter. 

Without question, this is a hard problem. Too many rationalists assume it is easy.


I think sticking to a strictly literal interpretation of the 1st amendment is problematic, because the politically and economically powerful seek, almost by virtue (or vice) of their positions, to always amass more power. Paraphrasing Gilmore's widely known quote, the powerful interpret power-limiting rules as damage, and route around them. And since full free speech is a strong way to limit the power of the powerful, in all cases in which either laws make it hard or even impossible to censor, or public perception makes it politically unfeasible to censor, we may expect those in power to seek as much censorship as materially possible through as many indirect means as possible.

Therefore, it's important to look at this from a consequentialist perspective and ask whether certain forms of speech are being effectively reduced thanks to coordination between private agents to actively reduce them, and if so, ask a classic cui bono? If the answer to this latter question is "those in power," then for all practical purposes there was censorship, even if it's a censorship that manages to carefully sidestep the legal definition.

This doesn't mean that Twitter banning Trump, or all the big tech players banning Parler, is itself wrong. It's right, but a right that comes from mixing two wrongs, as argued by Matt Stoller, a well-known anti-trust researcher who writes extensively on the topic, in his recent article A Simple Thing Biden Can Do to Reset America, from which I quote these two paragraphs (it's well worth reading the article in its entirety, as well as the one linked in the quote):

My view is that what Parler is doing should be illegal, because it should be responsible on product liability terms for the known outcomes of its product, aka violence. This is exactly what I wrote when I discussed the problem of platforms like Grindr and Facebook fostering harm to users. But what Parler is doing is *not* illegal, because Section 230 means it has no obligation for what its product does. So we’re dealing with a legal product and there’s no legitimate grounds to remove it from key public infrastructure. Similarly, what these platforms did in removing Parler should be illegal, because they should have a public obligation to carry all customers engaging in legal activity on equal terms. But it’s not illegal, because there is no such obligation. These are private entities operating public rights of way, but they are not regulated as such.

In other words, we have what should be an illegal product barred in a way that should also be illegal, a sort of ‘two wrongs make a right’ situation. I say ‘sort of’ because letting this situation fester without righting our public policy will lead to authoritarianism and censorship. What we should have is the legal obligation for these platforms to carry all legal content, and a legal framework that makes business models fostering violence illegal. That way, we can make public choices about public problems, and political violence organized on public rights-of-way certainly is a public problem.

Unless something like this is done to untangle the two sides of the problem, so that this outcome comes from two rights instead of two wrongs, there will always be the potential for a fully legal, fully 1st-amendment-respecting "Great Firewall of America" to grow and evolve up to the point where free speech will exist de jure, but not de facto. Conversely, if done right, that workaround would be closed and the risk itself would cease, all the while the promotion of concretely damaging speech would still be effectively curbed.


[This comment is no longer endorsed by its author]

The First Amendment and freedom of speech are NOT synonymous. The First Amendment is only one legal protection of free speech in one context. It is true that banning Trump from Twitter does not violate the First Amendment, but it is a violation of freedom of speech. We live in a world where the speech that used to occur as literal speech in public places, protected by the First Amendment, now occurs largely online on Twitter and Facebook and such. They fill the role of providing a medium for speech which used to be filled by the government, and we need to hold them to the same standards.

You have also misconstrued the point of Section 230. Section 230 says that platforms cannot be held liable for what users post; it is there to allow platforms to respect free speech, not to give them discretion not to.

It is true that smaller-scale entities like SSC or LW regulating content is not necessarily bad. There are thousands of blogs on the scale of LW or SSC, there is only one Twitter and one Facebook, and really nothing else on that scale. In order to have a healthy ecosystem for exchanging ideas, the larger platforms need to respect freedom of speech on their platforms, in the same way that the First Amendment requires of the government. How big does a platform need to be before it needs to respect freedom of speech? I don't know, there may be a gray area, but Twitter is definitely not in the gray area.
