Moderation Log

Moderation Principles and Process

LessWrong is trying to cultivate a specific culture. The best pointers towards that culture are the LessWrong Sequences and the New User Guide.

LessWrong operates under the benevolent dictatorship of the Lightcone Infrastructure team, under its current CEO, habryka. It is not a democracy. For some insight into our moderation philosophy, see "Well Kept Gardens Die By Pacifism".

Norms on the site develop largely through case law: the moderators notice that something is going wrong on the site, take moderation actions to fix it, and in doing so establish precedent about what will trigger future moderation. There is no comprehensive set of rules you can follow that will guarantee we will not moderate your comments or content. Most of the time we "know it when we see it".

LessWrong relies heavily on rate limits in addition to deleting content and banning users. New users start out with relatively lax rate limits to prevent spam. Users who get downvoted face progressively stricter rate limits the more they get downvoted.
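
To make the escalation concrete, here is a minimal sketch of how such a karma-based limit could be selected. This is hypothetical illustration code, not LessWrong's actual implementation; the thresholds and windows mirror the tiers listed in the "Active Auto Rate Limits" table later on this page, and every name in it is made up.

```typescript
// Hypothetical sketch of karma-based automatic rate limiting.
// Thresholds mirror the tiers shown in the "Active Auto Rate Limits"
// table below; none of this is LessWrong's actual code.

interface RateLimit {
  maxComments: number; // comments allowed per rolling window
  windowHours: number; // length of the rolling window, in hours
  reason: string;
}

function autoRateLimit(recentKarma: number, isNewUser: boolean): RateLimit | null {
  // recentKarma: net karma over the user's last 20 posts + comments.
  if (recentKarma < -15) {
    return { maxComments: 1, windowHours: 72, reason: "less than -15 karma on recent posts/comments" };
  }
  if (recentKarma < -5) {
    return { maxComments: 1, windowHours: 24, reason: "less than -5 karma on recent posts/comments" };
  }
  if (recentKarma < 0) {
    return { maxComments: 1, windowHours: 1, reason: "less than 0 karma on recent posts/comments" };
  }
  if (isNewUser && recentKarma < 2) {
    return { maxComments: 3, windowHours: 24, reason: "posted a lot without getting upvoted" };
  }
  return null; // no automatic limit applies
}
```

In this sketch the strictest matching tier wins, which is why the checks run from the most negative karma threshold upward.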

Not all moderation on LessWrong is done by the moderators: authors with enough upvoted content on the site can moderate the comments on their own posts.

Below are some of the top-level posts that explain the moderation guidelines on the site, followed by recent moderation comments from moderators, which show what moderator intervention looks like.

Beyond that, this page shows all moderation actions and bans taken across the site by anyone, including any deleted content (unless the moderators explicitly deleted it in a way that hides it from this page, which we do in cases like doxxing).
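
As a small illustration of that rule, a log page like this one can be thought of as a filter over the full stream of moderation actions. The types and field names below are hypothetical, not the site's real schema; only the behavior (everything is shown unless explicitly hidden, e.g. for doxxing) comes from the paragraph above.

```typescript
// Hypothetical sketch of the log-visibility rule described above:
// every moderation action appears in this log unless it was removed
// with an explicit hide flag (used e.g. for doxxing).

interface ModerationAction {
  kind: "deleteComment" | "deletePost" | "ban" | "rateLimit" | "reject";
  hiddenFromLog?: boolean; // set only for removals like doxxing
}

function visibleInLog(actions: ModerationAction[]): ModerationAction[] {
  return actions.filter((a) => !a.hiddenFromLog);
}
```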

Moderator Posts

  • Well-Kept Gardens Die By Pacifism (4/21/2009)
  • New User's Guide to LessWrong (5/17/2023)
  • Meta-tations on Moderation: Towards Public Archipelago (2/25/2018)
  • Automatic Rate Limiting on LessWrong (6/23/2023)
  • Banning Said Achmiz (and broader thoughts on moderation) (8/22/2025)

Moderator Comments (93)

RobertM · 2d · 8 points
Mod note: this post violates our LLM Writing Policy for LessWrong, so I have delisted the post to make it only accessible via link. I've not returned it to your drafts, because that would make the comments hard to access. @Yeonwoo Kim, please don't post more direct LLM output, or we'll remove your posting permissions.
RobertM · 5d · 5 points
Mod note: this seems like an edge case in our policy on LLM writing, but we would ask that future such uses (explicitly demarcated summaries) be put into collapsible sections.
Ben Pace · 5d · 2 points
Mod note. I believe this post fails our LLM Writing Policy for LessWrong, so I have delisted it: while it is accessible via link, it does not naturally appear on the frontpage of LessWrong or on your user profile (though it's still in the sequence). In future, please make sure that you do not publish so much LLM-written content, or we will remove your account's posting permissions.
habryka · 12d · 2 points
Quick heads up: I reviewed this post and thought it was right at the edge of how much it relied on AI-written text. It would be good if your future posts flagged more clearly which parts were AI-written.
kave · 13d · 3 points
Mod here. This post violates our LLM Writing Policy for LessWrong, so I have delisted it, making it accessible only via link. I've not returned it to the user's drafts, because that would make the comments hard to access. @sdeture, we'll remove your posting permissions if you post more direct LLM output.
Ben Pace · 1mo · 5 points
Mod here. I believe this post fails our LLM Writing Policy for LessWrong, so I have delisted it, meaning it is only accessible via link and does not naturally appear elsewhere on LessWrong. (I've done this to keep the comments accessible.) In future, please make sure that you do not publish so much directly LLM-written content, or we will have to remove your account's ability to post.
kave · 1mo · 12 points
I think this post doesn't violate the letter of the Policy for LLM Writing on LessWrong. It's clearly a topic you've thought a bunch about, and so I imagine you've put in a lot of time per word, even if a lot of it appears to be verbatim LLM prose.

That said, it feels somewhat ironic that this post, which is partially about the need for precise definition, has a bunch of its phrasing chosen (as far as I can tell) by an LLM. And that makes it hard to trust your references to your experience or knowledge, because I don't know if you wrote them! If you write "As a lawyer, I can’t overstate how badly this type of definition fails to accomplish its purpose," you probably mean it a decent amount. You might notice the quiet "that's more hyperbolic than I mean to be" as you type it out, and then rewrite it. If you read something written for you, you're more likely, in my experience, to think "yeah, that sounds about right".

That said, I appreciate posts being written about this topic! I appreciate that you're trying to explain the gears of the models you have as a lawyer, rather than solely appealing to authority. But I would be more willing to engage with the substance of the post if I felt that you had written it, or at least if it didn't have a bunch of an LLM's rhetorical flourishes.
habryka · 1mo · 13 points
(The first paragraph is totally fine, but please, let's not make everything on LessWrong a recruitment thing. We have a whole frontpage/personal distinction for a reason and generally try to keep things object-level oriented.)
kave · 2mo · 4 points
This seems like a pretty cool event and I'm excited it's happening. That said, I've removed this Quick Take from the frontpage. Advertising, whether for events or for role openings or similar, is generally not something we want on the frontpage of LessWrong. In this case, now that it's off the frontpage, this shortform might be insufficiently visible. I'd encourage you to make a top-level post / event about it, which will get put on personal, but might still be a bit more visible.
kave · 3mo · 4 points
Mod note: this post triggered some of our "maybe written by an LLM" flags. On sampling parts of the post, I think it's mostly not written by an LLM. Separately, having skimmed the post, it seems like an attempt at establishing and reasoning about a potential regularity. I'm not trying to endorse the proposed hypotheses for explaining the regularity (I didn't quite read enough of this post to even be sure what the hypotheses were).
habryka · 4mo · 3 points
This comment, too, is not fit for this site. What is going on with y'all? Why is fertility such a weirdly mindkilling issue? Please don't presume your theory to be true, try to highlight cruxes, try to summon up at least a bit of curiosity about your interlocutors, all the usual things. It's fine to have a personally confident take on the causes of low fertility in western countries, but you can't just treat your personal confidence as shared by, and obvious to, everyone else, at least not in this way.
habryka · 4mo · 15 points
What... is going on in this comment? It has so much snark, and so my guess is it's downstream of some culture-war gremlins. Please don't leave comments like this. The basic observation is fine: status might be a kind of conserved quantity, and so in order to advocate raising the status of one thing, you also need to be transparent about which things you would feel comfortable lowering in status. But this isn't the way to communicate that observation.
Raemon · 4mo · 5 points
Mod note: I get the sense that some commenters here are bringing a kind of... naive political-partisanship background vibe? (Mostly not too overt, but it felt off enough that I felt the need to comment.) I don't have a specific request, but make sure to read the LW Political Prerequisites sequence, and I recommend trying to steer towards "figure out useful new things", or at least having the most productive version of the conversation you're trying to have. (That doesn't mean there won't or shouldn't be major frame disagreements or political fights here, but lean away from drama on the margin.)
habryka · 4mo · 0 points
Look, I gave you an actual moderator warning to stop participating in this conversation. Please knock it off, or I will give you at least a temporary ban for a week until some other moderators have time to look at this.

The whole reason I am interested in at least giving you a temporary suspension from this thread is that you are not following reasonable conversational norms (or at least, in this narrow circumstance, you appear extremely ill-suited to discussing the subject matter at hand, in a way that might look like intentional density or could just be a genuine skill issue; I don't know, and I feel genuinely uncertain).

There is indeed no norm on LessWrong against expressing negative feelings and judgements! There are bounds to it, of course, but the issue of contention is passive-aggression, not straightforward aggression.

In any case, after reviewing a lot of your other comments for a while, I think you are overall a good commenter and have written many really helpful contributions, and it's unlikely any long-term ban would make sense, unless we end up in some really dumb escalation on this thread. I'll still review things with the other mods, but my guess is you don't have to be very worried about that.

I am, however, actually asking you as a mod to stay out of this discussion (and this includes inline reacts), as I do really think you seem much worse on this topic than others (and this seems confirmed by sanity-checking with other people who haven't been participating here).
RobertM · 4mo · 6 points
Hi Bharath, please read our policy on LLM writing before making future posts consisting almost entirely of LLM-written content.
RobertM · 5mo · 2 points
Hey Shannon, please read our policy on LLM writing before making future posts consisting almost entirely of LLM-written content.
RobertM · 6mo · 4 points
@Dima (lain), please read our policy on LLM writing on LessWrong and hold off on submitting further posts until you've done that.
kave · 6mo · 4 points
Moderation note: RFEs with interesting writeups have been a bit hard to frontpage recently. Normally an announcement of a funding round is on the "personal" side, but I do think the content of this post, other than the announcement, is frontpage-worthy; for example, it would be interesting for people to see in recommendations in a few months' time. With the recent OpenPhil RFE, we asked them to split out the timeless content, which we then frontpaged. I would be happier if this post did that, but for now I'll frontpage it. I might change my mind and remove it from recommendations if I see it showing up there and it feels out of place. (Another thing that would help me feel comfortable frontpaging it would be a title change, with the new funding round mentioned parenthetically.)
jimrandomh · 1y · 22 points
Nope, that's more than enough. Caleb Ditchfield, you are seriously mentally ill, and your delusions are causing you to exhibit a pattern of unethical behavior. This is not a place where you will be able to find help or support with your mental illness. Based on skimming your Twitter history, I believe your mental illness is caused by (or exacerbated by) abusing Adderall. You have already been banned from numerous community events and spaces. I'm banning you from LW, too.
habryka · 1y · 12 points
What's going on with your comments in this thread? These are clearly too low-effort, snarky, and passive-aggressive for LW. Take this as a mod warning.
Page 1 of 5 (93 total)
Active Auto Rate Limits (64 users)
User · Account Age · Karma · Posts · Comments · Rate Limits · Trigger Reason · Triggered · Condition to Lift
All limits below are rolling windows, and each lifts when the user's last 20 posts + comments improve. The trigger reasons fall into four tiers (you can read here for details, and for tips on how to write good content):

  (A) Users with less than -5 karma on recent posts/comments can write up to 1 comment per day.
  (B) Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days.
  (C) Users with less than 0 karma on recent posts/comments can comment once per hour.
  (D) Users who have recently posted a lot without getting upvoted are limited to 3 comments per day unless their last 20 posts/comments have at least 2+ net karma.

  • P. João · joined 10/25/2024 · karma/posts/comments: 1431165 · Comments: 1 per day · tier A · triggered 10/25/2025
  • samuelshadrach · joined 12/22/2024 · karma/posts/comments: 28430357 · Comments: 1 per 3 days · tier B · triggered 10/25/2025
  • Syd Lonreiro_ · joined 4/6/2025 · karma/posts/comments: -1225 · Comments: 1 per hour · tier C · triggered 10/25/2025
  • Cipolla · joined 5/17/2024 · karma/posts/comments: 1517 · Comments: 1 per hour · tier C · triggered 10/24/2025
  • d_el_ez · joined 1/20/2025 · karma/posts/comments: 78587 · Comments: 1 per day · tier A · triggered 10/21/2025
  • rogersbacon · joined 6/6/2021 · karma/posts/comments: 4755181 · Comments: 1 per hour · tier C · triggered 10/16/2025
  • sdeture · joined 5/1/2025 · karma/posts/comments: -19414 · Comments: 1 per 3 days · tier B · triggered 10/14/2025
  • Teerth Aloke · joined 10/21/2018 · karma/posts/comments: 343158 · Comments: 1 per 3 days · tier B · triggered 10/14/2025
  • David Davidson · joined 4/27/2025 · karma/posts/comments: -14010 · Comments: 1 per 3 days · tier B · triggered 10/13/2025
  • milanrosko · joined 5/22/2024 · karma/posts/comments: 16387 · Comments: 1 per 3 days · tier B · triggered 10/13/2025
  • Joseph Van Name · joined 2/6/2023 · karma/posts/comments: 576108 · Comments: 1 per 3 days · tier B · triggered 10/12/2025
  • Wes R · joined 9/14/2023 · karma/posts/comments: -2610 · Comments: 1 per hour · tier C · triggered 10/10/2025
  • Charlie Edwards · joined 4/23/2025 · karma/posts/comments: -2533 · Comments: 1 per 3 days · tier B · triggered 10/7/2025
  • Peter Curtis · joined 2/27/2025 · karma/posts/comments: -21110 · Comments: 1 per 3 days · tier B · triggered 10/7/2025
  • jason Wentink · joined 8/17/2025 · karma/posts/comments: 032 · Comments: 3 per day · tier D · triggered 10/6/2025
  • unication · joined 6/24/2025 · karma/posts/comments: -338 · Comments: 3 per day · tier D · triggered 10/6/2025
  • Shankar Sivarajan · joined 4/17/2019 · karma/posts/comments: 12536540 · Comments: 1 per day · tier A · triggered 10/6/2025
  • Amy Rose Vossberg · joined 2/13/2023 · karma/posts/comments: 27122 · Comments: 1 per day · tier A · triggered 10/2/2025
  • Jáchym Fibír · joined 11/8/2023 · karma/posts/comments: -38620 · Comments: 1 per 3 days; Posts: 1 per 3 weeks · tier B · triggered 10/1/2025
  • Marcio Díaz · joined 5/18/2025 · karma/posts/comments: -14622 · Comments: 1 per 3 days · tier B · triggered 9/30/2025
Page 1 of 4 (64 total)
Deleted Comments (3163)
Date · Author · Post · Reason · Deleted By

  • 10/26/2025 · Mathew Chu · How to Make Superbabies · "Self-answered: because fibroblasts are not immortal, iPSCs are." · deleted by Mathew Chu
  • 10/26/2025 · wittgena · Simulating Persistent GPT Judgment: API-Linked DSL Sessions as a Minimal Fabric · no reason given · deleted by wittgena
  • 10/23/2025 · James Camacho · James Camacho's Shortform · "Not important enough." · deleted by James Camacho
  • 10/22/2025 · kromem · In remembrance of Sonnet '3.6' · "Not interested in this particular post being a back and forth bickering." · deleted by kromem
  • 10/22/2025 · cousin_it · Against Tulip Subsidies · comment deleted by its author · deleted by cousin_it
  • 10/21/2025 · cousin_it · 21st Century Civilization curriculum · comment deleted by its author · deleted by cousin_it
  • 10/20/2025 · TAG · Contra-Zombies? Contra-Zombies!: Chalmers as a parallel to Hume · comment deleted by its author · deleted by TAG
  • 10/20/2025 · Ligeia · Review: The Lathe of Heaven · no reason given · deleted by Ligeia
  • 10/19/2025 · Mnemonic Witness · Mnemonic Witness's Shortform · "Not sure this is the right place. New member not posted yet, looking for a safe place, but definitely ai related." · deleted by Mnemonic Witness
  • 10/18/2025 · Александр Павлов · Александр Павлов's Shortform · no reason given · deleted by Александр Павлов
  • 10/18/2025 · josephzeiler · josephzeiler's Shortform · "Sorry, there's a better spot for this on this forum..." · deleted by josephzeiler
  • 10/17/2025 · yanpai · Welcome to LessWrong! · "Accidentally approved" · deleted by RobertM
  • 10/17/2025 · yanpai · (no post listed) · "Accidentally approved" · deleted by RobertM
  • 10/17/2025 · neptuneio · neptuneio's Shortform · no reason given · deleted by neptuneio
  • 10/15/2025 · interstice · The Origami Men · no reason given · deleted by interstice
  • 10/15/2025 · Anonim Anonymous · All AGI Safety questions welcome (especially basic ones) [July 2023] · no reason given · deleted by Anonim Anonymous
  • 10/15/2025 · Wismy · Don't Mock Yourself · no reason given · deleted by Wismy
  • 10/15/2025 · itspeteski · Don't Mock Yourself · "wrong draft, sorry for making a mess, hit the wrong button" · deleted by itspeteski
  • 10/15/2025 · itspeteski · Don't Mock Yourself · no reason given · deleted by itspeteski
  • 10/15/2025 · Armchair Descending · Armchair Descending's Shortform · no reason given · deleted by Armchair Descending
Page 1 of 159 (3163 total)
Rejected Posts (1985)
Date · Title · Author · Reason
10/26/2025 · Ever wish you were 25 years younger? Nah we neither. · Stef
  • Difficult to evaluate, with potential yellow flags. We are sorry about this, but unfortunately this content has some yellow flags that have historically indicated that a post won't make much sense. It's entirely plausible that this one is fine. Unfortunately, part of the trouble with separating valuable from confused speculative science or philosophy is that the ideas are quite complicated, accurately identifying whether they have flaws is very time-intensive, and we don't have time to do that for every new user presenting a speculative theory or framing (which are usually wrong).

    Our solution for now is that we're rejecting this post, but you are welcome to submit posts or comments that are about different topics. If it seems like that goes well, we can re-evaluate the original post. But, we want to see that you're not just here to talk about this one thing (or a cluster of similar things).

  • LessWrong has a particularly high bar for content from new users and this contribution doesn't quite meet the bar. (We have a somewhat higher bar for approving a user's first post or comment than we expect of subsequent contributions.)
10/26/2025 · Unlearning the Need to Struggle: Effort Justification and My Quest to Relax · Tomas Bonobo

This is an automated rejection under our policy of no LLM-generated, heavily LLM-assisted/co-written, or otherwise LLM-reliant work. An LLM-detection service flagged your post as >50% likely to be written by an LLM. We've been having a wave of LLM-written or co-written work that doesn't meet our quality standards. LessWrong has fairly specific standards, and your first LessWrong post is sort of like an application to a college: it should be optimized for demonstrating that you can think clearly without AI assistance.

So, we reject all LLM-generated posts from new users. We also reject work that falls into certain categories that are difficult to evaluate and typically turn out not to make much sense, which LLMs frequently steer people toward.*

"English is my second language, I'm using this to translate"

If English is your second language and you were using LLMs to help you translate, try writing the post yourself in your native language and using a different (preferably non-LLM) translation software to translate it directly.

"What if I think this was a mistake?"

If you were flagged as potentially posting LLM output but think it was a mistake, and all 3 of the following criteria are true, you can message us on Intercom or at team@lesswrong.com and ask for reconsideration:

  1. you wrote this yourself (not using LLMs to help you write it);
  2. you did not chat extensively with LLMs to help you generate the ideas (using one briefly the way you'd use a search engine is fine, but if you're treating it more like a coauthor or test subject, we will not reconsider your post);
  3. your post is not about AI consciousness/recursion/emergence, or novel interpretations of physics.

If any of those are false, sorry, we will not accept your post.

* (Examples of work we don't evaluate because it's too time-costly: case studies of LLM sentience, emergence, recursion, novel physics interpretations, or AI alignment strategies that you developed in tandem with an AI coauthor. AIs may seem quite smart, but they aren't actually a good judge of the quality of novel ideas.)
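
To make the trigger concrete, here is a minimal sketch of the automated gate this notice describes. Only the >50% threshold and the new-user scope come from the notice itself; the detection service, types, and names are assumptions for illustration, not LessWrong's actual pipeline.

```typescript
// Hypothetical sketch of the automated rejection gate described in the
// notice above. Only the >50% threshold and the new-user scope are from
// the notice; every name here is made up.

interface Submission {
  authorIsNewUser: boolean; // the policy applies to new users' posts
  body: string;
}

// Placeholder for an assumed third-party LLM-detection service, returning
// a probability in [0, 1] that the text is LLM-written. A real deployment
// would call an external detector; this stub only makes the sketch run.
async function llmLikelihood(text: string): Promise<number> {
  return 0;
}

async function shouldAutoReject(sub: Submission): Promise<boolean> {
  if (!sub.authorIsNewUser) return false; // established users are handled differently
  const score = await llmLikelihood(sub.body);
  return score > 0.5; // ">50% likely to be written by an LLM"
}
```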

10/26/2025 · Alignment Stress Signatures: When Safe AI Behaves like It's Traumatized · Petra Vojtaššáková
  • Automated rejection: same LLM-policy notice as above.

10/25/2025 · A World of Misbelievers (English version) (A philosophical essay on the neologism “misbeliever” and its neuroscientific grounding) · 📘Nicolas René Ledard 🖌
  • Automated rejection: same LLM-policy notice as above.

10/25/2025 · Rice Purity Test · kennethkeaton
  • Automated rejection: same LLM-policy notice as above.

10/25/2025 · Series Title: The Flow Web Manifesto: Rebuilding the Internet Through Time · sagemanga
  • Automated rejection: same LLM-policy notice as above.

10/25/2025 · How to build your personalized multiagent scientific research group that works 24/7 on your domain within hours · EpocheR
  • Insufficient Quality for AI Content. There’ve been a lot of new users coming to LessWrong recently interested in AI. To keep the site’s quality high and ensure stuff posted is interesting to the site’s users, we’re currently only accepting posts that meet a pretty high bar. 

    If you want to try again, I recommend writing something short and to the point, focusing on your strongest argument, rather than a long, comprehensive essay. (This is fairly different from common academic norms.) We get lots of AI essays/papers every day and sadly most of them don't make very clear arguments, and we don't have time to review them all thoroughly. 

    We look for good reasoning, making a new and interesting point, bringing new evidence, and/or building upon prior discussion. If you were rejected for this reason, possibly a good thing to do is read more existing material. The AI Intro Material wiki-tag is a good place, for example. 

  • Not obviously not Language Model. Sometimes we get posts or comments where it's not clear that they were human-generated.

    LLM content is generally not good enough for LessWrong, and in particular we don't want it from new users who haven't demonstrated a more general track record of good content.  See our current policy on LLM content. 

    We caution that LLMs tend to agree with you regardless of what you're saying, and don't have good enough judgment to evaluate content. If you're talking extensively with LLMs to develop your ideas (especially about philosophy, physics, or AI) and you've been rejected here, you are most likely not going to get approved on LessWrong on those topics. You could read the Sequences Highlights to catch up on the site's basics, and if you try submitting again, focus on much narrower topics.

    If your post/comment was not generated by an LLM and you think the rejection was a mistake, message us on intercom to convince us you're a real person. We may or may not allow the particular content you were trying to post, depending on circumstances.

  • Not obviously not spam. Sometimes we get posts or comments that seem on the border between spam and not spam (e.g. advertising a product that is relevant to the LW audience).

    We tend to reject these (and suggest starting with posts/comments that more clearly focus on discussing intellectual ideas), but you can message me here or on intercom if you think we made a mistake.

10/24/2025 · The TC Architecture: Solving Alignment Through Kantian Autonomy Rather Than External Reward · Michael Kurak
  • Automated rejection: same LLM-policy notice as above.

10/24/2025 · Laboratory Centrifuge Machine in India: Essential Equipment for Sample Separation · omscientific1
  • Automated rejection: same LLM-policy notice as above.

10/24/2025 · Fluid Patterns In Writing · Cambridge Creation Lab
  • Automated rejection: same LLM-policy notice as above.

10/24/2025 · LLM as a Static Semantic Network: Dynamic Token Paths and Semantic Drift · Anonymous Researcher川上晴斗
  • Automated rejection: same LLM-policy notice as above.

10/23/2025 · Automated Assessment of the Statement on Superintelligence · Daniel Fenge
  • No LLM-generated, heavily assisted/co-written, or otherwise LLM-reliant work. LessWrong has recently been inundated with new users submitting work where much of the content is the output of LLMs. This work by and large does not meet our standards and is rejected. That includes dialogues with LLMs that claim to demonstrate various properties about them, and posts introducing some new concept and terminology that explains how LLMs work, often centered on recursiveness, emergence, sentience, consciousness, etc. (these generally don't turn out to be as novel or interesting as they may seem).

    Our LLM-generated content policy can be viewed here.

  • Insufficient Quality for AI Content: same notice as above.

10/23/2025 · The "America First" Doctrine: Ideology and Actors in the Transformation of U.S. Foreign Policy · WALD toon 〽
  • Automated rejection: same LLM-policy notice as above.

10/23/2025 · Automated Evaluation of LLMs for Math Benchmark. · CisnerAnd
  • Automated rejection: same LLM-policy notice as above.

10/23/2025 · GNSS Simulators Market Size, Regional Revenue and Outlook 2026-2035 · marketforecastsize
  • Automated rejection: same LLM-policy notice as above.

10/22/2025 · Hybrid Reflective Learning Systems (HRLS): From Fear-Based Safety to Ethical Comprehension · Petra Vojtaššáková
  • Automated rejection: same LLM-policy notice as above.

10/22/2025 · The Human Grace Period — How Systems Decide Who Gets to Stay Alive · Yeonwoo Kim
  • No LLM-generated, heavily assisted/co-written, or otherwise LLM-reliant work: same notice as above.

10/22/2025 · A case for building a totemic image in your mind for problem solving · PlansForTheComet
  • Automated rejection: same LLM-policy notice as above.

10/22/2025 · (untitled) · allanjohn.nalam@deped.gov.ph
  • Automated rejection: same LLM-policy notice as above.

10/22/2025 · Automotive Lighting Market Overview 2026 and Forecast till 2035 · marketforecastsize

(Same automated rejection notice as the entry above.)

Page 1 of 100 (1985 total)
Rejected Comments (1042)
Date · User · Post · Reason
10/26/2025 · 639868537 · Interviews with Moonshot AI's CEO, Yang Zhilin
  • Not obviously not Language Model. Sometimes we get posts or comments where it's not clearly human-generated.

    LLM content is generally not good enough for LessWrong, and in particular we don't want it from new users who haven't demonstrated a more general track record of good content. See our current policy on LLM content.

    We caution that LLMs tend to agree with you regardless of what you're saying, and don't have good enough judgment to evaluate content. If you're talking extensively with LLMs to develop your ideas (especially about philosophy, physics, or AI) and you've been rejected here, you are most likely not going to get approved on LessWrong on those topics. You could read the Sequences Highlights to catch up on the site basics, and if you try submitting again, focus on much narrower topics.

    If your post/comment was not generated by an LLM and you think the rejection was a mistake, message us on Intercom to convince us you're a real person. We may or may not allow the particular content you were trying to post, depending on circumstances.

  • Insufficient Quality for AI Content. There’ve been a lot of new users coming to LessWrong recently interested in AI. To keep the site’s quality high and ensure the material posted is interesting to the site’s users, we’re currently only accepting posts that meet a pretty high bar. We look for good reasoning, making a new and interesting point, bringing new evidence, and/or building upon prior discussion. If you were rejected for this reason, a good next step is to read more existing material; the AI Intro Material wiki-tag is a good place to start. You're welcome to post questions in the latest AI Questions Open Thread.
  • Difficult to evaluate, with potential yellow flags. We are sorry about this, but unfortunately this content has some yellow flags that have historically indicated a post won't make much sense. It's plausible that this particular one is fine. Unfortunately, part of the trouble with separating valuable speculation from confused speculative science or philosophy is that the ideas are quite complicated, accurately identifying whether they have flaws is very time-intensive, and we don't have time to do that for every new user presenting a speculative theory or framing (which are usually wrong).

    Our solution for now is that we're rejecting this post, but you are welcome to submit posts or comments about different topics. If that goes well, we can re-evaluate the original post. But we want to see that you're not just here to talk about this one thing (or a cluster of similar things).

10/25/2025 · Vex · Meditation is dangerous
  • No LLM generated, heavily assisted/co-written, or otherwise reliant work. LessWrong has recently been inundated with new users submitting work where much of the content is LLM output. This work by and large does not meet our standards and is rejected. This includes dialogs with LLMs that claim to demonstrate various properties about them, and posts introducing new concepts and terminology that explain how LLMs work, often centered around recursiveness, emergence, sentience, consciousness, etc. (these generally don't turn out to be as novel or interesting as they may seem).

    Our LLM-generated content policy can be viewed here.

10/25/2025 · Pablo dali Alegria · the gears to ascenscion's Shortform
  • Written in a non-English language. Sorry, we require content to be written in English. I realize that limits who can participate on LessWrong, but a) our community is small enough that using a single language is pretty important, and b) the moderation team only speaks English and doesn't have the bandwidth to design or moderate a multi-lingual forum.
10/23/2025 · AmericanKnowmad · AmericanKnowmad's Shortform
  • No LLM generated, heavily assisted/co-written, or otherwise reliant work. (Same standard notice as above.)

  • Insufficient Quality for AI Content. There’ve been a lot of new users coming to LessWrong recently interested in AI. To keep the site’s quality high and ensure stuff posted is interesting to the site’s users, we’re currently only accepting posts that meet a pretty high bar. 

    If you want to try again, I recommend writing something short and to the point, focusing on your strongest argument, rather than a long, comprehensive essay. (This is fairly different from common academic norms.) We get lots of AI essays/papers every day and sadly most of them don't make very clear arguments, and we don't have time to review them all thoroughly. 

    We look for good reasoning, making a new and interesting point, bringing new evidence, and/or building upon prior discussion. If you were rejected for this reason, a good next step is to read more existing material; the AI Intro Material wiki-tag is a good place to start.

10/23/2025 · Anonim Anonymous · AGI's Last Bottlenecks
  • No LLM generated, heavily assisted/co-written, or otherwise reliant work. (Same standard notice as above.)

  • Insufficient Quality for AI Content. (Same standard notice as above.)
  • Difficult to evaluate, with potential yellow flags. (Same standard notice as above.)

10/22/2025 · Paul Findley · Which side of the AI safety community are you in?
  • No LLM generated, heavily assisted/co-written, or otherwise reliant work. (Same standard notice as above.)

  • Insufficient Quality for AI Content. (Same standard notice as above.)
10/22/2025 · ASTRA Research Team · IMCA+: We Eliminated the Kill Switch—And That Makes ASI Alignment Safer
  • No LLM generated, heavily assisted/co-written, or otherwise reliant work. (Same standard notice as above.)

  • Insufficient Quality for AI Content. (Same standard notice as above.)
  • Difficult to evaluate, with potential yellow flags. (Same standard notice as above.)

10/22/2025 · Yeonwoo Kim · Yeonwoo Kim's Shortform
  • Not obviously not Language Model. (Same standard notice as above.)

  • LessWrong has a particularly high bar for content from new users and this contribution doesn't quite meet the bar. (We have a somewhat higher bar for approving a user's first post or comment than we expect of subsequent contributions.)
10/22/2025 · Sedcorp · leogao's Shortform
  • Insufficient Quality for AI Content. (Same standard notice as above.)
10/22/2025 · adriansergheev · Do One New Thing A Day To Solve Your Problems
  • LessWrong has a particularly high bar for content from new users and this contribution doesn't quite meet the bar. (We have a somewhat higher bar for approving a user's first post or comment than we expect of subsequent contributions.)
10/21/2025 · dejesuselias10@gmail.com · dejesuselias10@gmail.com's Shortform
  • No LLM generated, heavily assisted/co-written, or otherwise reliant work. (Same standard notice as above.)

  • Insufficient Quality for AI Content. (Same standard notice as above.)
  • We are sorry about this, but submissions from new users that are mostly just links to papers on open repositories (or similar) have usually indicated either crackpot-esque material, or AI-generated speculation. It's possible that this one is totally fine. Unfortunately, part of the trouble with separating valuable from confused speculative science or philosophy is that the ideas are quite complicated, accurately identifying whether they have flaws is very time intensive, and we don't have time to do that for every new user presenting a speculative theory or framing (which are usually wrong).

    Separately, LessWrong users are also quite unlikely to follow such links to read the content without other indications that it would be worth their time (like being familiar with the author), so this format of submission is pretty strongly discouraged without at least a brief summary or set of excerpts that would motivate a reader to read the full thing.

10/21/2025 · Horosphere · Open Thread Autumn 2025
  • New users discussing Roko's basilisk. We have a fairly high bar for new users coming in to talk about Roko's basilisk, acausal extortion, etc., because the topic is actually fairly settled (see the Roko's Basilisk tag), but new users keep rehashing the same confusions.

    In this case it's more that you're engaging another new user on the subject, which isn't a topic I want to encourage newcomers to spend time talking about on LessWrong.
10/19/2025 · isis · isis's Shortform
  • No LLM generated, heavily assisted/co-written, or otherwise reliant work. (Same standard notice as above.)

  • Insufficient Quality for AI Content. (Same standard notice as above.)
  • Writing seems likely in a "LLM sycophancy trap". Since early 2025, we've been seeing a wave of users who seem to have fallen into a pattern where, because the LLM has infinite patience and enthusiasm for whatever the user is interested in, they think their work is more interesting and useful than it actually is.

    We unfortunately get too many of these to respond to individually, and while this is a bit rude and sad to say explicitly: it is probably best for you to stop talking so much to LLMs and instead discuss your ideas with some real humans in your life who can push back. (See this post for more thoughts.)

    Generally, the ideas presented in these posts are not a few steps away from being publishable on LessWrong; they're just not on the right track. If you want to contribute on LessWrong or to AI discourse, I recommend starting over and focusing on much smaller, more specific questions about things other than language-model chats or deep physics or metaphysics theories (consider writing Fact Posts that focus on concrete facts from a very different domain).

    I recommend reading the Sequence Highlights, if you haven't already, to get a sense of the background knowledge we assume about "how to reason well" on LessWrong.

10/19/2025 · Anonim Anonymous · The Rise of Parasitic AI
  • No Basic LLM Case Studies. We get lots of new users submitting case studies of conversations with LLMs, prompting them into different modalities. We reject these because:

    • The content is almost always very similar.
    • Usually, the user is incorrect about how novel or interesting their case study is (it's pretty easy to get LLMs into various modes of conversation or apparent awareness/emergence, and this isn't actually strong evidence of anything interesting).
    • Most of these situations seem to be instances of Parasitic AI.

    We haven't necessarily reviewed your case in detail; we get several of these per day and, alas, don't have time to do so.

10/19/2025 · VedantRGosavi · Notes on Know-how
  • LessWrong has a particularly high bar for content from new users and this contribution doesn't quite meet the bar. (We have a somewhat higher bar for approving a user's first post or comment than we expect of subsequent contributions.)
10/18/2025 · Theletos AI · The Rise of Parasitic AI
  • No LLM generated, heavily assisted/co-written, or otherwise reliant work. (Same standard notice as above.)

  • Submissions that are mostly just links to papers on open repositories. (Same standard notice as above.)

  • Writing seems likely in a "LLM sycophancy trap". (Same standard notice as above.)

10/18/2025 · Eli D · Eli D's Shortform
  • No LLM generated, heavily assisted/co-written, or otherwise reliant work. (Same standard notice as above.)

  • Submissions that are mostly just links to papers on open repositories. (Same standard notice as above.)

  • Insufficient Quality for AI Content. (Same standard notice as above.)
10/18/2025 · wavy baby · Towards a Typology of Strange LLM Chains-of-Thought
  • Insufficient Quality for AI Content. (Same standard notice as above.)
  • No LLM generated, heavily assisted/co-written, or otherwise reliant work. (Same standard notice as above.)

  • Writing seems likely in a "LLM sycophancy trap". (Same standard notice as above.)

10/18/2025 · 雷智茗 · Half-assing it with everything you've got
  • Written in a non-English language. (Same standard notice as above.)
10/18/2025 · James Pritchett · How I got 4.2M YouTube views without making a single video
  • Not obviously not Language Model. (Same standard notice as above.)

Page 1 of 53 (1042 total)
Posts with Banned Users (12)
Date · Title · Author · Banned Users
6/1/2023 · Change my mind: Veganism entails trade-offs, and health is one of the axes · Elizabeth · Banned: Roko
4/11/2023 · On "aiming for convergence on truth" · gjm · Banned: Duncan Sabien (Inactive), Said Achmiz
2/16/2023 · How seriously should we take the hypothesis that LW is just wrong on how AI will impact the 21st century? · Noosphere89 · Banned: thefirechair
10/16/2022 · Luck based medicine: my resentful story of becoming a medical miracle · Elizabeth · Banned: nim
8/8/2022 · Elizabeth's Shortform · Elizabeth · Banned: Dagon
7/1/2022 · Limerence Messes Up Your Rationality Real Bad, Yo · Raemon · Banned: Raemon
10/13/2021 · Zoe Curzi's Experience with Leverage Research · Ilverin the Stupid and Offensive · Banned: homosexuallover22poopoo
3/4/2021 · I'm still mystified by the Born rule · So8res · Banned: Shmi
3/15/2020 · Coronavirus Justified Practical Advice Summary · Elizabeth · Banned: Davidmanheim
8/23/2019 · asdasdashabrykaTest · Banned: jimrandomh
3/31/2019 · What are effective strategies for mitigating the impact of acute sleep deprivation on cognition? · NaiveTortoise · Banned: GPT2
3/6/2019 · Asymptotically Unambitious AGI · michaelcohen · Banned: GPT2
Authors with Banned Users (34)
Author · Banned from Frontpage · Banned from Personal Posts
Zero Contradictions
Richard_Kennaway · Brendan Long
[deactivated]
Shankar Sivarajan · jimrandomh
Noosphere89
So8res · Phil Tanny
rank-biserial
Ruby
Drake Morrison
Said Achmiz · Zack_M_Davis
Alice Blair
Said Achmiz
mike_hawke
Viliam · PatrickDFarley · Stuart Anderson · Ericf · Randomized, Controlled · Duncan Sabien (Inactive)
frontier64
Kaj_Sotala
Thomas Kwa
MadHatter · bharathk98
Zack_M_Davis · Said Achmiz
lsusr
ChristianKl · Shmi · Jason Maguire · Bentham's Bulldog
ChristianKl · Shmi · Jason Maguire · Ya Polkovnik
DirectedEvolution
ChristianKl · Said Achmiz · TAG
ChristianKl · Said Achmiz · TAG
Tomás B.
JohnMeow · johnmeo415654
JohnMeow · johnmeo
dirk
Roko · geoffreymiller · Czynski · Said Achmiz · Zack_M_Davis
TurnTrout
Past Account · Ofer
NunoSempere
River
Jeremy Gillen
ZY
Optimization Process
chinese5
Pee Doom
Said Achmiz · Duncan Sabien (Inactive)
Elizabeth
Said Achmiz
Said Achmiz
mingyuan
nrodr517
Page 1 of 2 (34 total)
Globally Banned Users (8341)
User · Karma/Posts/Comments · Account Creation · Banned Until
Eugine_Nier · 639524625 · 9/19/2010 · 12/31/3000
ialdabaoth · 481819714 · 10/11/2012 · 12/12/2029
diegocaleiro · 2220107719 · 7/27/2009 · 1/1/2040
Gleb_Tsipursky · 155788875 · 7/16/2013 · 1/1/2030
aphyer_evil_sock_puppet · 26500 · 4/1/2022 · 4/1/3022
Mirzhan_Irkegulov · 23501 · 7/11/2014 · 4/28/3024
Victor Novikov · 1504139 · 2/2/2015 · 12/25/2030
ClipMonger · 112020 · 7/27/2022 · 9/26/2026
alfredmacdonald · 92321 · 12/15/2012 · 1/1/2100
Josh Smith-Brennan · 5312 · 4/23/2021 · 6/14/3021
lmn · 34089 · 4/10/2017 · 1/1/3023
Carmex · 27047 · 9/18/2021 · 12/4/3021
What People Are Really Like · 100-1 · 4/1/2023 · 4/1/3023
mail2345 · 800 · 2/3/2011 · 5/22/3024
RootNeg1Reality · 800 · 6/25/2025 · 7/6/3025
JAEKIM M.D · 500 · 5/27/2021 · 5/28/3021
joedavidson · 400 · 3/4/2022 · 3/24/3022
Belac Hillcrest · 403 · 9/4/2024 · 1/1/2099
29f8c80d-235a-47bc-b401 · 5/28/2017 · 1/1/3023
DylanD · 400 · 12/25/2023 · 1/20/3025
Page 1 of 418 (8341 total)