see also my eaforum at https://forum.effectivealtruism.org/users/dirk and my tumblr at https://d-i-r-k-s-t-r-i-d-e-r.tumblr.com/ .
One possible contributor: posttraining involves chat transcripts in the desired style (often, nowadays, generated by an older LLM), and I suspect that in learning to imitate the format, models also learn to imitate the tone (and to overfit on it, at that; perhaps because there are only a few such examples relative to the size of the corpus, but this is merely idle speculation). (The consensus on twitter seemed to be that "delve" in particular was a consequence of human writing; it's used far more commonly in African English than in American English, and OpenAI outsourced data labeling to save on costs.) I haven't noticed nearly as consistent a flavor in my limited experimentation with base models, so I think posttraining must make it worse even if it's not the cause.
Making status calculations at all times is a choice you have the right to make, but in my opinion it's a bad one.
What's your motivation to spend a lot of effort writing up your arguments? If you're right, both the post and your efforts to debunk it are quickly forgotten; but if you're wrong, the post remains standing/popular/upvoted and your embarrassing comment is left for everyone to see.
If you're right, the author and those who read the comments gain a better understanding; if you're wrong, you do. I think framing criticism as a status contest hurts your motivation to comment more than it helps, here.
I suspect the models' output tokens become input tokens when the conversation proceeds to the next turn; certainly my API statistics show several times as many input tokens as output despite the fact that my responses are invariably shorter than the models'.
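A toy sketch of the mechanism I'm suspecting (the token counts are made up for illustration): if each API call resends the full conversation history as input, then every prior reply gets re-billed as input tokens on each subsequent turn, so input totals outpace output totals even when the user's messages are short.

```python
def simulate_usage(turns):
    """turns: list of (user_tokens, assistant_tokens) per conversation turn.

    Returns (total_input_tokens, total_output_tokens) billed across all calls,
    under the assumption that each call resends the whole history as input.
    """
    history = 0              # tokens accumulated in the conversation so far
    total_in = total_out = 0
    for user_toks, assistant_toks in turns:
        total_in += history + user_toks   # prior turns are resent as input
        total_out += assistant_toks       # only the new reply counts as output
        history += user_toks + assistant_toks
    return total_in, total_out

# Short user messages, long model replies, as in my usage pattern.
total_in, total_out = simulate_usage([(20, 200)] * 8)
print(total_in, total_out)  # 6320 1600 -- roughly 4x as much input as output
```

With more turns the ratio grows further, since each turn's reply is re-sent on every later turn.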
I just saw "How to use hypnagogic hallucinations as biofeedback to relieve insomnia" in the feed the other day, and it seems like quite a convenient option if it works; could be worth a try, though I haven't tested it myself.
No, actually; the mindset implied by repeating that text as a meme is quite different from the mindset implied by unironically generating it.
The bio is an edited meme, not an original; it mostly communicates that they're a heavy user of the internet. Example from a year ago.
I vaguely remember looking at one of those studies and finding that the amount of alcohol used was substantially less than a standard drink, though I don't have a link now.