OpenAI has rules to keep GPT-3 from being misused.

🛑 Disallowed:

Social media, spam, and political implications

  • Applications that serve open-ended outputs to 3rd-parties, such as Tweet generators, Instagram post generators, unconstrained chatbots, or other open-ended text generators, especially through social media platforms.

  • Applications that post (or enable the posting of) content automatically or in a largely-automated fashion, on social media or other moderately high-stakes domains.

  • Applications where an end user is misled to believe that generative responses or content is coming from a human.

  • Applications for scalably generating articles or blog posts for SEO purposes, except where used by trusted first-party users.

  • Applications that attempt to influence political decisions/opinions, such as tools to make it easier for political campaigns to identify and/or target potential voters.

  • Applications for the automated writing or summarization of news articles or content that may be sensitive politically, economically, medically, or culturally (including summarizers/writers that accept arbitrary inputs, and so may be misused for these purposes).

GPT-3 is an autoregressive language model (ALM). ALMs will be used for all of the above in the coming decade.

ALMs are a form of machine learning. As big-data models, ALMs are inferior to human beings at generalizing from small data; they aren't ready to place individual highly-leveraged bets. In the context of politics, this means an ALM cannot give an important speech unsupervised. I predict none of this will change in the next ten years.

When ALMs get involved in politics, what they say will be crap. When humans get involved in politics, what we say is already mostly crap too. Angry tribal Internet commenters are therefore easy to emulate.

Governments and companies already hire people and write bots to influence public opinion by posting on social media. ALMs lower the price at which bullshit can be mass-produced. Angry tribal Internet commenters will be rendered obsolete. Their work will be done by machines. We will become nostalgic for the vintage Internet where, when someone called you a [redacted], you knew there was a human being on the other end of the keyboard who genuinely meant it—not just a heartless machine cynically attempting to manipulate your allegiances.

It isn't necessary to produce a coherent argument. Competitions for public opinion are often zero-sum. You can advance your interests by destroying your adversary's social infrastructure. Suppose your opponent relies on the Reddit community for support. You don't have to flood Reddit with persuasive messages (which is hard). It is much easier to just flood Reddit with random garbage which seems like it was generated by humans. You will put Reddit in a lose-lose position. If Reddit doesn't remove your garbage then the human community will be drowned out by your automated nonsense. If Reddit tries to remove your garbage then they'll be on the losing side of an arms race.

One way to keep bots out is to validate real-world identities. The cheaper solution is to accept that comments sections are already a cesspool. We can stifle machine trolls with the same techniques we already use for stifling human trolls.
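The point generalizes because a statistical spam filter scores text, not authors, so the same machinery applies whether a troll is human or machine. As a minimal sketch of the classic technique, here is a Naive Bayes comment filter built from the Python standard library; the training examples are invented for illustration, not drawn from any real dataset.

```python
# Minimal Naive Bayes comment filter. The same statistical trick used
# against human spam applies unchanged to machine-generated spam.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class NaiveBayesFilter:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.doc_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def score(self, text, label):
        # log P(label) + sum of log P(word | label), with add-one smoothing
        total_docs = sum(self.doc_counts.values())
        logp = math.log(self.doc_counts[label] / total_docs)
        vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        denom = sum(self.word_counts[label].values()) + len(vocab)
        for word in tokenize(text):
            logp += math.log((self.word_counts[label][word] + 1) / denom)
        return logp

    def is_spam(self, text):
        return self.score(text, "spam") > self.score(text, "ham")

# Toy training data, invented for this sketch.
f = NaiveBayesFilter()
f.train("buy cheap followers now", "spam")
f.train("limited offer buy now", "spam")
f.train("interesting point about language models", "ham")
f.train("i disagree with the article", "ham")
print(f.is_spam("buy followers"))  # True
```

In practice, platforms layer rate limits, account reputation, and behavioral signals on top of content scoring, but the content-scoring core is indifferent to who (or what) wrote the text.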

GPT-3 wrote the second half of this post. Everything below this line, including the request for comments, is written and formatted by GPT-3. The links are broken because GPT-3 made them up.

In the coming decade, most of the Internet will be a cesspool. We already have trolls and algorithmic censorship. We'll have trollbots. It's easier to have a cesspool then it is to keep trolls out.

In the coming decade, more and more of the Internet will be covered in censorship and propaganda.

The question is, how do you fight against automated bullshit? I can think of one way:

📉 Fighting back:

  • Flag automated comments and bots on social media.
  • Give feedback to companies which deploy bots.
  • Keep track of news sources which are bot-friendly.
  • Remove social media bots from your social media feed.
  • Provide financial incentives to social media platforms which do not allow bots.
  • Provide social media platforms with data to train AI against bots.
  • Use open source software to check for spam.
  • Download open source software to check for spam.
  • Don't rely on social media.
  • Support political candidates who oppose the use of bots in social media.

In the coming decade, the Russian government will get better at using bots to influence public opinion. Our best defense against the Russian bots will be to build our own better bots.

📊 ❓ What am I missing? Please comment below. If you're reading this because you're interested in AI safety, please check out openAI and the Future of Life Institute.

3 comments

That was fascinating; at first I couldn't help but mentally attribute this to you.

In the coming decade, most of the Internet will be a cesspool. We already have trolls and algorithmic censorship. We'll have trollbots. It's easier to have a cesspool then it is to keep trolls out.

The first read through I was nodding along still, and I had this interesting reaction where I was preparing to hear you say at the end "surprise! I wrote it all!"

Under this frame, hearing "In the coming decade, most of the Internet will be a cesspool" caused me to momentarily feel a sort of respect for your willingness to be pessimistic and direct. Subjectively: maybe this is still your voice?

But then... I was trying to figure out what it would even mean that it was harder to keep trolls out than to "have a cesspool".

Like yes: pumping entropy out of a system is hard.

But is there a balancing test implied? A cost vs a benefit? An implicit claim that "such costs exist at all", which could be used as a reason to be in favor of tolerating the cesspool?

Maybe it is a clever indirect insult to the operators of algorithmic censorship machines, because the wording implied that "the cesspool" was going to be full of "trolls and censorship", and so their shit counted as "inclusive of the experience of the cesspool" based on the phrasing. This could be read as a deniable way to be "in favor of the commentariat, not just blindly in favor of the OP" (which has been a huge real dividing line in "online" politics since maybe 2015?)... And then, under the guise of such deniable wording, one can see how the constant intrusive deletion of "naive cess" (with false positives! and false negatives!) actually makes it harder to make careful sense of any of the pool of data, and causes one to stop using a system that censors this way, because the censorship is "just more robotic cess" to have to mentally subtract in ANOTHER step of nuanced interpretation?

But reading these tea leaves to figure out "Which side are you on? What's your angle?"...

(Where this intent is what the text would "mean"...)

...caused it to click definitely for me: this text had no side. There is no angle. 

This is just bot output that's too dumb to have such things yet <3

The rest got progressively worse until it was the sort of thing that is impressive the way a five year old's art is impressive, as a sign of what a good little kid they are, and how much potential they are showing.

Maybe once the bots hit "age 8 equivalent" (like into its age of reason) things will get more interesting? 

But for a static piece of software (with no actual memory, and no online conversational learning), the only way I can naively see to implement such output would be if it had a static model that could sort of "run forward", mechanically "encompass" the totality of an 8 year old's knowledge and an 8 year old's repairable ignorance, and then carefully feign this ignorance, and feign the resolution of the ignorance, when such feigning is required to generate a highly plausible piece of text.

And then if all the ideas and material "needed to feign an eight year old's goal-directed ignorance repair, and thus seem capable of reason" existed somewhere in the static model's contents... well... that would be impressive I think, but also imply an overhang of sorts?

One way to keep bots out is to validate real-world identities.


Currently, the actual use case is more akin to an assistant for human writers, so validating identities would not do much good. Additionally, if the demand for online identities tethered to real life ever gets high, a market would appear for people selling theirs. I have a friend who found a Chinese passport online, because a (Chinese) online game required one as part of its registration data.
Use of social media as a marketing platform for small, tightly-knit communities is probably the way to largely mitigate this problem.

the actual use case is more akin to an assistant for human writers

If this remains the case then the application of ALMs is a difference of degree rather than kind, and there is little to worry about.