GPT-3 seems to be skilled enough to write forum comments that aren't easy to identify as spam. While OpenAI restricts access to its API, it likely won't take long until other companies develop similar APIs that are more freely available. This isn't the traditional AI safety question, but it does seem to be becoming a significant safety question.
We already filter a lot of comments by well-meaning internet citizens who are simply confused about what LessWrong is about and write only mostly coherent sentences. So I think we won't have much of a problem moderating this, and our processes deal with it pretty well, at least for this generation of GPT-3 without fine-tuning (I can imagine fine-tuned versions of GPT-3 being good enough to cause problems even for us). Karma also helps a lot.
I can imagine being concerned about the next generation of GPT, though.
The obvious answer to spammers being run by GPT is mods being run by GPT. Ask it whether each comment is high-quality or likely generated, then act on that as needed to keep the site functional.
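A minimal sketch of what that moderation loop might look like. The classifier here is a stub with a toy heuristic standing in for the model's judgment; a real version would prompt a language-model API with something like "Is this comment generated spam?" and parse the answer. The function names and thresholds are all hypothetical, not any site's actual pipeline:

```python
# Hypothetical GPT-assisted moderation: score each new comment for
# spam probability and route it to an action based on thresholds.

FLAG_THRESHOLD = 0.5    # uncertain: send to human review
REMOVE_THRESHOLD = 0.9  # confident enough to auto-remove

def classify_spam_probability(comment: str) -> float:
    """Stub classifier. A real implementation would call a
    language-model API and return its estimated probability that
    the comment is generated spam."""
    # Toy heuristic standing in for the model's judgment:
    return 0.95 if "buy cheap" in comment.lower() else 0.1

def moderate(comment: str) -> str:
    """Map a comment to a moderation action."""
    p = classify_spam_probability(comment)
    if p >= REMOVE_THRESHOLD:
        return "remove"
    if p >= FLAG_THRESHOLD:
        return "flag_for_human_review"
    return "approve"
```

The key design point is the middle band: anything the model isn't confident about still goes to a human, so the model only automates the easy calls rather than replacing the mods outright.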