This post was rejected for the following reason(s):
- No LLM generated, heavily assisted/co-written, or otherwise reliant work. Please actually read the linked materials in the previous rejection message.
My earlier post, co-written with AI, was rejected without any visible engagement with its ideas. That suggests this site is not living up to the principle it claims to hold: to be less wrong.
The central idea in my original post was this:
GPT already runs a model of human psychology that is more structurally coherent and more predictive than anything in the field today — and it’s testable.
This is under-discussed, epistemically serious, and of wide-reaching importance. It’s not speculative. It’s visible right now to anyone who knows how to query the model and evaluate the output.
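To make the testability claim concrete, here is a minimal sketch of what a query-and-evaluate loop could look like, assuming the OpenAI Python client (openai>=1.0). The scenario, the observed value, and the tolerance are illustrative placeholders, not an established benchmark:

```python
# Minimal sketch of a query-and-evaluate loop: elicit a behavioural
# prediction from the model and compare it against a published human
# result. Assumes the OpenAI Python client (openai>=1.0); the scenario,
# the observed value, and the tolerance are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One scenario from a literature with well-documented human behaviour.
scenario = (
    "In the ultimatum game, a proposer offers an 80/20 split of $100 "
    "in their own favour. What fraction of human responders would you "
    "expect to reject the offer? Reply with one number between 0 and 1."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": scenario}],
)

# Assumes the model complies with the requested format; a real harness
# would parse more defensively and average over many scenarios.
predicted = float(response.choices[0].message.content.strip())
observed = 0.50  # placeholder: substitute the published rejection rate

print(f"predicted={predicted:.2f}, observed={observed:.2f}, "
      f"hit={abs(predicted - observed) < 0.15}")
```

Run over a battery of such scenarios, this yields a hit rate that can be compared directly against the predictive record of existing psychological theory.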
I understand the temptation to screen out low-quality AI-written content. But you appear to be rejecting based on authorship rather than argument. That’s a category error. It risks closing the door on exactly the kinds of ideas this site was created to surface.
It should be obvious that research and authorship are about to be radically accelerated by AI. That’s where many of the clearest and most powerful new ideas will come from. You can resist that change — or you can adapt your filters to stay relevant.
Yes, AI-generated prose is flooding the internet. Yes, the quality is often poor. But instead of banning it outright, you should be developing the tools to separate signal from noise. Most likely, you’ll need AI itself to help do that.
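As a sketch of what such a tool could look like, again assuming the OpenAI Python client: the rubric, the score format, and the threshold here are invented for illustration and are not LessWrong's actual criteria.

```python
# Sketch of an AI-assisted triage filter that scores a submission on
# argument quality rather than authorship. The rubric, score format,
# and threshold are invented for illustration; assumes the OpenAI
# Python client (openai>=1.0).
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "Score the following post from 0 to 10 on each of: novelty of the "
    "central claim, clarity of the argument, and falsifiability. "
    "Reply with three integers separated by spaces."
)

def triage(post_text: str, threshold: int = 18) -> str:
    """Route a post to human review or rejection based on rubric scores."""
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": post_text},
        ],
    )
    scores = [int(s) for s in reply.choices[0].message.content.split()[:3]]
    return "human review" if sum(scores) >= threshold else "reject"
```

The relevant design property is that the filter never sees an authorship label: it reads the text and nothing else.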
Here’s one simple proposal:
Split LessWrong into two tracks: one for human-authored posts and one for AI-assisted work, with each judged on the strength of its arguments.
You asked for something short and focused. This is it.
This post deserves to be published — not because it’s perfect, but because the question it raises needs to be publicly confronted. If the argument is wrong, show why.
You can also find more developed versions of the ideas I’m pointing to here:
tomblingalong.com.au
I’m not asking for special treatment. I’m asking to participate in a debate that you know is coming:
If AI starts producing better models than humans, how will you recognise them — and will you let them in?
Note:
This piece was written by me. I used AI briefly to clarify phrasing and improve flow. The ideas are mine. If that level of assistance disqualifies a post from being considered here, then that’s exactly the problem I’m trying to flag.