LessWrong is trying to cultivate a specific culture. The best pointers towards that culture are the LessWrong Sequences and the New User Guide.
LessWrong operates as a benevolent dictatorship run by the Lightcone Infrastructure team, under its current CEO, habryka. It is not a democracy. For some insight into our moderation philosophy, see "Well Kept Gardens Die By Pacifism".
Norms on the site develop largely by case law: the moderators notice that something is going wrong, take moderation actions to fix it, and in doing so establish precedent about what will trigger future moderation. There is no comprehensive set of rules you can follow that will guarantee we will not moderate your comments or content. Most of the time we "know it when we see it".
LessWrong relies heavily on rate limits, in addition to deleting content and banning users. New users start out with relatively mild rate limits intended to prevent spam. Users who get downvoted acquire stricter and stricter rate limits the more they are downvoted.
Not all moderation on LessWrong is done by the moderators. Authors with enough upvoted content on the site can moderate their own posts.
Below are some of the top-level posts that explain the moderation guidelines on the site, alongside recent moderation comments by moderators, showing examples of what moderator intervention looks like.
Beyond that, this page will show you all moderation actions and bans taken across the site by anyone, including any deleted content (unless the moderators explicitly deleted it in a way that would hide it from this page, which we do in cases like doxxing).
| User | Account Age | Karma | Posts | Comments | Rate Limits | Trigger Reason | Triggered | Condition to Lift |
|---|---|---|---|---|---|---|---|---|
| P. João | 10/25/2024 | 143 | 11 | 65 | Comments: 1 per day (rolling) | Users with less than -5 karma on recent posts/comments can write up to 1 comment per day. You can read here for details, and for tips on how to write good content. | 10/25/2025 | Until last 20 posts + comments improve |
| samuelshadrach | 12/22/2024 | 284 | 30 | 357 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. You can read here for details, and for tips on how to write good content. | 10/25/2025 | Until last 20 posts + comments improve |
| Syd Lonreiro_ | 4/6/2025 | -12 | 2 | 5 | Comments: 1 per hour (rolling) | Users with less than 0 karma on recent posts/comments can comment once per hour. You can read here for details, and for tips on how to write good content. | 10/25/2025 | Until last 20 posts + comments improve |
| Cipolla | 5/17/2024 | 1 | 5 | 17 | Comments: 1 per hour (rolling) | Users with less than 0 karma on recent posts/comments can comment once per hour. You can read here for details, and for tips on how to write good content. | 10/24/2025 | Until last 20 posts + comments improve |
| d_el_ez | 1/20/2025 | 78 | 5 | 87 | Comments: 1 per day (rolling) | Users with less than -5 karma on recent posts/comments can write up to 1 comment per day. You can read here for details, and for tips on how to write good content. | 10/21/2025 | Until last 20 posts + comments improve |
| rogersbacon | 6/6/2021 | 475 | 51 | 81 | Comments: 1 per hour (rolling) | Users with less than 0 karma on recent posts/comments can comment once per hour. You can read here for details, and for tips on how to write good content. | 10/16/2025 | Until last 20 posts + comments improve |
| sdeture | 5/1/2025 | -19 | 4 | 14 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. You can read here for details, and for tips on how to write good content. | 10/14/2025 | Until last 20 posts + comments improve |
| Teerth Aloke | 10/21/2018 | 34 | 3 | 158 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. You can read here for details, and for tips on how to write good content. | 10/14/2025 | Until last 20 posts + comments improve |
| David Davidson | 4/27/2025 | -14 | 0 | 10 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. You can read here for details, and for tips on how to write good content. | 10/13/2025 | Until last 20 posts + comments improve |
| milanrosko | 5/22/2024 | 16 | 3 | 87 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. You can read here for details, and for tips on how to write good content. | 10/13/2025 | Until last 20 posts + comments improve |
| Joseph Van Name | 2/6/2023 | 57 | 6 | 108 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. You can read here for details, and for tips on how to write good content. | 10/12/2025 | Until last 20 posts + comments improve |
| Wes R | 9/14/2023 | -2 | 6 | 10 | Comments: 1 per hour (rolling) | Users with less than 0 karma on recent posts/comments can comment once per hour. You can read here for details, and for tips on how to write good content. | 10/10/2025 | Until last 20 posts + comments improve |
| Charlie Edwards | 4/23/2025 | -25 | 3 | 3 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. You can read here for details, and for tips on how to write good content. | 10/7/2025 | Until last 20 posts + comments improve |
| Peter Curtis | 2/27/2025 | -21 | 1 | 10 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. You can read here for details, and for tips on how to write good content. | 10/7/2025 | Until last 20 posts + comments improve |
| jason Wentink | 8/17/2025 | 0 | 3 | 2 | Comments: 3 per day (rolling) | You've recently posted a lot without getting upvoted. Users are limited to 3 comments/day unless their last 20 posts/comments have at least 2+ net-karma. You can read here for details, and for tips on how to write good content. | 10/6/2025 | Until last 20 posts + comments improve |
| unication | 6/24/2025 | -3 | 3 | 8 | Comments: 3 per day (rolling) | You've recently posted a lot without getting upvoted. Users are limited to 3 comments/day unless their last 20 posts/comments have at least 2+ net-karma. You can read here for details, and for tips on how to write good content. | 10/6/2025 | Until last 20 posts + comments improve |
| Shankar Sivarajan | 4/17/2019 | 1253 | 6 | 540 | Comments: 1 per day (rolling) | Users with less than -5 karma on recent posts/comments can write up to 1 comment per day. You can read here for details, and for tips on how to write good content. | 10/6/2025 | Until last 20 posts + comments improve |
| Amy Rose Vossberg | 2/13/2023 | 27 | 1 | 22 | Comments: 1 per day (rolling) | Users with less than -5 karma on recent posts/comments can write up to 1 comment per day. You can read here for details, and for tips on how to write good content. | 10/2/2025 | Until last 20 posts + comments improve |
| Jáchym Fibír | 11/8/2023 | -38 | 6 | 20 | Comments: 1 per 3 days (rolling); Posts: 1 per 3 weeks (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. You can read here for details, and for tips on how to write good content. | 10/1/2025 | Until last 20 posts + comments improve |
| Marcio Díaz | 5/18/2025 | -14 | 6 | 22 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. You can read here for details, and for tips on how to write good content. | 9/30/2025 | Until last 20 posts + comments improve |
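The karma thresholds implied by the trigger reasons in the table above can be sketched roughly as follows. This is a hypothetical reconstruction from the visible rules, not LessWrong's actual implementation; the function name, signature, and exact threshold boundaries are assumptions.

```python
def comment_rate_limit(recent_net_karma: int):
    """Hypothetical sketch: map the net karma of a user's last ~20
    posts/comments to a (max_comments, window_in_hours) rate limit,
    mirroring the trigger reasons shown in the moderation table."""
    if recent_net_karma < -15:
        return (1, 72)    # 1 comment per 3 days (rolling)
    if recent_net_karma < -5:
        return (1, 24)    # 1 comment per day (rolling)
    if recent_net_karma < 0:
        return (1, 1)     # 1 comment per hour (rolling)
    if recent_net_karma < 2:
        return (3, 24)    # 3 comments per day until recent items reach 2+ net karma
    return (None, None)   # no automatic limit


# Example: a user at -20 recent karma gets the strictest comment limit.
print(comment_rate_limit(-20))
```

Note the stricter tiers subsume the looser ones: a user at -20 karma also satisfies the "less than 0" condition, so the checks must run from most negative to least negative.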
| Date | Title | Author | Reason |
|---|---|---|---|
| 10/26/2025 | Ever wish you were 25 years younger? Nah we neither. | Stef | |
| 10/26/2025 | Unlearning the Need to Struggle: Effort Justification and My Quest to Relax | Tomas Bonobo | This is an automated rejection. No LLM generated, heavily assisted/co-written, or otherwise reliant work. An LLM-detection service flagged your post as >50% likely to be written by an LLM. We've been having a wave of LLM written or co-written work that doesn't meet our quality standards. LessWrong has fairly specific standards, and your first LessWrong post is sort of like the application to a college. It should be optimized for demonstrating that you can think clearly without AI assistance. So, we reject all LLM generated posts from new users. We also reject work that falls into some categories that are difficult to evaluate and that typically turn out to not make much sense, which LLMs frequently steer people toward.* "English is my second language, I'm using this to translate": if English is your second language and you were using LLMs to help you translate, try writing the post yourself in your native language and using a different (preferably non-LLM) translation software to translate it directly. "What if I think this was a mistake?": for users who get flagged as potentially LLM but think it was a mistake, if all 3 of the following criteria are true, you can message us on Intercom or at team@lesswrong.com and ask for reconsideration. If any of those are false, sorry, we will not accept your post. * (Examples of work we don't evaluate because it's too time costly: case studies of LLM sentience, emergence, recursion, novel physics interpretations, or AI alignment strategies that you developed in tandem with an AI coauthor. AIs may seem quite smart, but they aren't actually a good judge of the quality of novel ideas.) |
| 10/26/2025 | Alignment Stress Signatures: When Safe AI Behaves like It's Traumatized | Petra Vojtaššáková | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same standard notice as the first rejection above). |
| 10/25/2025 | A World of Misbelievers (English version) (A philosophical essay on the neologism “misbeliever” and its neuroscientific grounding) | 📘Nicolas René Ledard 🖌 | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same standard notice as the first rejection above). |
| 10/25/2025 | Rice Purity Test | kennethkeaton | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same standard notice as the first rejection above). |
| 10/25/2025 | Series Title: The Flow Web Manifesto: Rebuilding the Internet Through Time | sagemanga | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same standard notice as the first rejection above). |
| 10/25/2025 | How to build your personalized multiagent scientific research group that works 24/7 on your domain within hours | EpocheR | |
| 10/24/2025 | The TC Architecture: Solving Alignment Through Kantian Autonomy Rather Than External Reward | Michael Kurak | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same standard notice as the first rejection above). |
| 10/24/2025 | Laboratory Centrifuge Machine in India: Essential Equipment for Sample Separation | omscientific1 | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same standard notice as the first rejection above). |
| 10/24/2025 | Fluid Patterns In Writing | Cambridge Creation Lab | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same standard notice as the first rejection above). |
| 10/24/2025 | LLM as a Static Semantic Network: Dynamic Token Paths and Semantic Drift Anonymous Researcher | 川上晴斗 | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same standard notice as the first rejection above). |
| 10/23/2025 | Automated Assessment of the Statement on Superintelligence | Daniel Fenge | |
| 10/23/2025 | The "America First" Doctrine: Ideology and Actors in the Transformation of U.S. Foreign Policy | WALD toon 〽 | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same standard notice as the first rejection above). |
| 10/23/2025 | Automated Evaluation of LLMs for Math Benchmark. | CisnerAnd | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same standard notice as the first rejection above). |
| 10/23/2025 | GNSS Simulators Market Size, Regional Revenue and Outlook 2026-2035 | marketforecastsize | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same standard notice as the first rejection above). |
| 10/22/2025 | Hybrid Reflective Learning Systems (HRLS): From Fear-Based Safety to Ethical Comprehension | Petra Vojtaššáková | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same standard notice as the first rejection above). |
| 10/22/2025 | The Human Grace Period — How Systems Decide Who Gets to Stay Alive | Yeonwoo Kim | |
| 10/22/2025 | A case for building a totemic image in your mind for problem solving | PlansForTheComet | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same standard notice as the first rejection above). |
| 10/22/2025 | | allanjohn.nalam@deped.gov.ph | Automated rejection: an LLM-detection service flagged the post as >50% likely to be written by an LLM (same standard notice as the first entry above). |
| 10/22/2025 | Automotive Lighting Market Overview 2026 and Forecast till 2035 | marketforecastsize | Automated rejection: an LLM-detection service flagged the post as >50% likely to be written by an LLM (same standard notice as the first entry above). |
| Date | User | Post | Reason |
|---|---|---|---|
| 10/26/2025 | 639868537 | Interviews with Moonshot AI's CEO, Yang Zhilin | |
| 10/25/2025 | Vex | Meditation is dangerous | |
| 10/25/2025 | Pablo dali Alegria | the gears to ascenscion's Shortform | |
| 10/23/2025 | AmericanKnowmad | AmericanKnowmad's Shortform | |
| 10/23/2025 | Anonim Anonymous | AGI's Last Bottlenecks | |
| 10/22/2025 | Paul Findley | Which side of the AI safety community are you in? | |
| 10/22/2025 | ASTRA Research Team | IMCA+: We Eliminated the Kill Switch—And That Makes ASI Alignment Safer | |
| 10/22/2025 | Yeonwoo Kim | Yeonwoo Kim's Shortform | |
| 10/22/2025 | Sedcorp | leogao's Shortform | |
| 10/22/2025 | adriansergheev | Do One New Thing A Day To Solve Your Problems | |
| 10/21/2025 | dejesuselias10@gmail.com | dejesuselias10@gmail.com's Shortform | |
| 10/21/2025 | Horosphere | Open Thread Autumn 2025 | |
| 10/19/2025 | isis | isis's Shortform | |
| 10/19/2025 | Anonim Anonymous | The Rise of Parasitic AI | |
| 10/19/2025 | VedantRGosavi | Notes on Know-how | |
| 10/18/2025 | Theletos AI | The Rise of Parasitic AI | |
| 10/18/2025 | Eli D | Eli D's Shortform | |
| 10/18/2025 | wavy baby | Towards a Typology of Strange LLM Chains-of-Thought | |
| 10/18/2025 | 雷智茗 | Half-assing it with everything you've got | |
| 10/18/2025 | James Pritchett | How I got 4.2M YouTube views without making a single video | |
|
| User | Karma | Posts | Comments | Account Creation | Banned Until |
|---|---|---|---|---|---|
| Eugine_Nier | 6395 | 2 | 4625 | 9/19/2010 | 12/31/3000 |
| ialdabaoth | 4818 | 19 | 714 | 10/11/2012 | 12/12/2029 |
| diegocaleiro | 2220 | 107 | 719 | 7/27/2009 | 1/1/2040 |
| Gleb_Tsipursky | 1557 | 88 | 875 | 7/16/2013 | 1/1/2030 |
| aphyer_evil_sock_puppet | 265 | 0 | 0 | 4/1/2022 | 4/1/3022 |
| Mirzhan_Irkegulov | 235 | 0 | 1 | 7/11/2014 | 4/28/3024 |
| Victor Novikov | 150 | 4 | 139 | 2/2/2015 | 12/25/2030 |
| ClipMonger | 112 | 0 | 20 | 7/27/2022 | 9/26/2026 |
| alfredmacdonald | 92 | 3 | 21 | 12/15/2012 | 1/1/2100 |
| Josh Smith-Brennan | 53 | 1 | 2 | 4/23/2021 | 6/14/3021 |
| lmn | 34 | 0 | 89 | 4/10/2017 | 1/1/3023 |
| Carmex | 27 | 0 | 47 | 9/18/2021 | 12/4/3021 |
| What People Are Really Like | 10 | 0 | -1 | 4/1/2023 | 4/1/3023 |
| mail2345 | 8 | 0 | 0 | 2/3/2011 | 5/22/3024 |
| RootNeg1Reality | 8 | 0 | 0 | 6/25/2025 | 7/6/3025 |
| JAEKIM M.D | 5 | 0 | 0 | 5/27/2021 | 5/28/3021 |
| joedavidson | 4 | 0 | 0 | 3/4/2022 | 3/24/3022 |
| Belac Hillcrest | 4 | 0 | 3 | 9/4/2024 | 1/1/2099 |
| 29f8c80d-235a-47bc-b | 4 | 0 | 1 | 5/28/2017 | 1/1/3023 |
| DylanD | 4 | 0 | 0 | 12/25/2023 | 1/20/3025 |