Would you actually prefer that all the jesters left (except the last one)?
I believe you when you say that interacting with the jesters is annoying in the moment. I trust that you do indeed anticipate having to trudge through many misconceptions of your writing when your mouse hovers over "publish". If you'll indulge an extended metaphor: it seems as though you're expressing displeasure at engaging in sorties to keep the farmland from burning even though it's the fortress you actually care about. People would question the legitimacy of the fortress if the surrounding farmland were left to burn, after all, so you feel forced to fight on unfavorable terrain for lands you barely care about. Would you find posting more satisfying if no enemies showed up at all?
Suppose that the jesters' comments, along with the discussion spawned from them, were deleted from existence, replaced by nothing. You never read them, any jester-ish thoughts are set aside after reading the post (although the person keeps their niggling thought that something is wrong with the post), and they cannot influence the culture of lesswrong as a whole. What does the comments section of your posts actually look like?
You leave unsaid what would remain, but I expect you implicitly envision a meaty and genuine discussion. I'm not so sure that's what would actually happen. Many of the fruitful discussions here are born of initially minor disagreements (indeed, caring about burdensome details is a longstanding lesswrong tradition!). If you pulled all the weeds, would a vibrant garden remain, or a barren wasteland?
You are one of the most popular writers on lesswrong, so perhaps it is difficult for you to imagine, but if I wrote something substantial and effortful, I would worry far more that it would simply be ignored than about criticism that fails to get at the heart of what I wrote.
Is amelia currently able to respond to your comment, or does posting this prevent her from responding to comments on her own post? If the latter, that seems like a rather large flaw in the system. I realize you're working on a solution tailored to this, but perhaps a less clunky system could be used, such as a limit of 7 comments per week?
Yeah I agree, I think your post points at something distinct from Eternal September, but what Raemon was talking about seemed very similar.
One of my friends studied humor for a bit during his PhD, and my goodness is it difficult to get the average person to be funny with just "hey, tell me a joke" type prompts. Even when you hold their hand and give them lots of potentially humorous pieces to work with (à la Cards Against Humanity), they really struggle. So, I'm honestly reasonably impressed with GPT-4's ability to occasionally tell a funny joke.
By the way, I disagree with the assumption that Aumann's theorem vindicates any such "standpoint epistemology".
That also stood out to me as a bit of a leap. It seems to me that for Aumann's theorem to apply to standpoint epistemology, everyone would have to share all of their experiences and believe everyone else's reports of their own experiences.
Counterpoint while working within the metaphor: early speedruns usually look like exceptional runs of the game played casually, with a few impressive/technical/insane moves thrown in.