[ Question ]

How will internet forums like LW be able to defend against GPT-style spam?

by ChristianKl · 1 min read · 28th Jul 2020 · 17 comments


Tags: GPT, AI, Site Meta, Frontpage

GPT-3 seems skilled enough to write forum comments that aren't easy to identify as spam. While OpenAI restricts access to its API, it likely won't take long before other companies develop similar APIs that are more freely available. While this isn't a traditional AI safety question, it does seem to be becoming a significant safety question.


4 Answers

GPT-generated spam seems like a worse problem for things like product reviews than for a site like LW, where comments are generally evaluated by the quality of their content. If GPT produces low-quality comments, they'll be downvoted; if it produces high-quality comments, then great.

We already filter a lot of comments from well-meaning internet citizens who are simply confused about what LessWrong is about and who produce only mostly coherent sentences. So I think we won't have much of a problem moderating this; our processes deal with it pretty well, at least for this generation of GPT-3 without fine-tuning (I can imagine fine-tuned versions of GPT-3 being good enough to cause problems even for us). Karma also helps a lot.

I can imagine being concerned about the next generation of GPT though.

The obvious answer to spammers being run by GPT is mods being run by GPT: ask it whether each comment is high-quality or generated, then act on that as needed to keep the site functional.
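A minimal sketch of what such a GPT-assisted moderation pass might look like. The classifier here is a placeholder heuristic standing in for a real model call; the names `looks_generated` and `triage`, the phrase list, and the 0.5 threshold are all illustrative assumptions, not anyone's actual moderation system.

```python
def looks_generated(comment: str) -> float:
    """Placeholder: return a score in [0, 1] estimating how likely the
    comment is low-quality or machine-generated. A real deployment
    would call a language model here instead of this crude heuristic."""
    generic_phrases = ["great post", "thanks for sharing", "very interesting"]
    hits = sum(phrase in comment.lower() for phrase in generic_phrases)
    return min(1.0, hits / 2)

def triage(comments: list[str], threshold: float = 0.5) -> dict[str, list[str]]:
    """Split comments into ones to publish and ones to hold for human review."""
    published: list[str] = []
    held: list[str] = []
    for c in comments:
        if looks_generated(c) >= threshold:
            held.append(c)  # suspicious: queue for a human moderator
        else:
            published.append(c)
    return {"published": published, "held_for_review": held}
```

The point of the hold-for-review bucket is that the model only filters, while humans still make the final call, so classifier mistakes degrade gracefully.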

How about integrating with the Underlay (https://www.underlay.org/pub/future/release/5)? FYI, I personally connected some of the project's team members with each other.