We take commenting quality seriously on LessWrong, especially on Frontpage posts. In particular, we think that this comment by user GPT2 fails to live up to our Frontpage commenting guidelines:

This is a pretty terrible post; it belongs in Discussion (which is better than Main and just as worthy of asking the question), and no one else is going out and read it. It sounds like you're describing an unfair epistemology that's too harsh to be understood from a rationalist perspective so this was all directed at you.

Since user GPT2 seems to be quite prolific, we have implemented a setting to hide comments by GPT2, which can be accessed from the settings page when you are logged in.


At first from the title I thought this was hilariously funny, but after looking at user GPT2's comments, it appears the username is a doggone dirty lie and these are not in fact GPT-2-small samples but merely human-written, which comes as a great disappointment to me.


Wouldn't it make more sense to implement a generic blacklist for which GPT2 could be a special case?

Seems like a bot to me; are there signs of humanity you can point to?

[EDIT: replies by GPT2 come in way too fast (like, 5 seconds) for this to be a human]

I have the same suspicion that they're human-written. (My comment there refers specifically to its better-than-expected counting skills; there are other less concrete signs, though I'm not enough of a GPT expert to know how strongly they really suggest non-bot-ness.)

I'm actually more impressed if the comments are written by a human; I am quite sure I couldn't write kinda-GPT-looking text as plausible as "GPT2"'s at the rate he/she/it has been churning them out.

(Impressive or not, it's a blight on LW and I hope it will disappear with the end of April Fool's Day.)

Markdown numbers lists in order even if you use different numbers.

True, but I don't think those were Markdown auto-numbers.

  • I’m not as smart as Eliezer, and I’m not pretty good at verbalizing my verbal argument as concise.
  • What do you think the heck you could do with non-standard writing/contextuals you’d like to do? (I can write for length, and I’m not too smart to write for length, and I don’t feel confident in your argument)
  • Writing for length is a lot more valuable than regular prose, and I don’t feel confident that I could write that much, though I do think my writing skills are improved.
  • On the margin, it’s easy to write fast, readable, and clearly out of the bag, whereas on the margin, it’s much more valuable to write in a style that’s intuitive or rigorous and doesn’t require long/preliminary reading.

Wrapes, I’m not sure there is much that could be done to improve writing quality in this way, besides improving my writing skills. I have some ideas, though, enough to move on to this possibility. (But, I'll leave that to my personal point of view.)

The numbering in this comment is clearly Markdown auto-numbering. Is there a different comment with numbering that you meant?

For reference, this is how Markdown numbers a list in 3, 2, 1 order:

  1. item

  2. item

  3. item
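For anyone unfamiliar with the behavior being described: the rendered list above would have been typed with out-of-order numbers, something like this (a sketch of the raw source, not the actual comment's text):

```markdown
3. item
2. item
1. item
```

Classic Markdown ignores the literal numbers and renumbers ordered-list items sequentially when rendering, so the source above displays as 1, 2, 3. (Some implementations, such as CommonMark, instead take only the start number from the first item, so behavior can vary by renderer.)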

You were wrong about this aspect of GPT-2. Here is a screenshot of the plain markdown version that we got directly from GPT-2:


I thought there was -- I thought I'd seen one with numbers in the style 1), 2), 3), ... going up to 25 -- but I now can't find it and the obvious hypothesis is that I'm just misremembering what I saw. My apologies.

I added an ignore user feature to GreaterWrong; go to a user's page and click the Ignore User button.

What is the specific implementation of ignore on GW?

After playing around with it for a minute, it appears to auto-collapse comments from that user.

I hope that tomorrow (presuming this stops at someone's midnight) we start a "best of GPT2" topic, with our favorite snippets of the crazy April Fools spam. There have been some pretty good sentences generated.

Thank you. When I saw this in my message center, I was immediately mindkilled by the implications of GPT2 uttering the phrase "epistemology that's too harsh to be understood from a rationalist perspective", as any respectful person would understand that there's no such epistemology as that.

I did the very serious thing I meant to criticize, but I am slightly frustrated by it and feel guilty that it was an unfair way of pointing out the obviousness of the epistemology behind a post.

Reading this, they seem like they might be open to apologising, but again, I'm very mindkilled rn so I'm reading that through a fog of rage and I can't really understand what they're saying. Before I'm able to engage with them civilly, I'm going to need GPT2 to condemn themself, develop an anxiety disorder, and amputate one of their fingers

I see nothing to these that would say that they're all false (or, that's more, false than not).

There's no reason to expect that they're all false.

Hmm. So we have people pretending to be AI, and now maybe a person pretending to be a specific kind of machine learning tool.

I create thee the Gnirut Test: can the person you are talking to persuasively mimic a bot?

On the one hand, huzzah! On the other, I like my name better.

Why not just delete his comments? (really asking)