yue's Shortform
yue3mo10

Thank you for the thoughtful reply! I found the link the moderator sent me; here it is:

https://www.lesswrong.com/posts/nA58rarA7FPYR8cov/allamericanbreakfast-s-shortform?commentId=kpacwcjddWmSGEAwD

yue's Shortform
yue3mo*-1-4

I’ve noticed that some of the responses focused on my English fluency. I appreciate the feedback, and I do welcome suggestions for clearer phrasing.

But my concern here isn’t really about my own writing—it’s about something larger:

I come from a background where what you’re allowed to say is often vague and implicitly policed. Not by specific rules, but by the constant fear of crossing a line you didn’t know existed.

In such an environment, people tend to stay silent, because you never know when something might be misinterpreted or penalized. And I’ve found that a similar kind of uncertainty can arise here, around the LLM writing rules.

I carefully read the rules regarding LLM-generated content. Perhaps because the development of LLMs has been so rapid and recent, there are still many grey areas in these rules. It takes constant experimentation to explore where the actual boundaries lie, and that’s why I wanted to raise this question.

I now have to spend extra time and energy revising the “writing style” of my posts without even knowing whether the changes are actually correct, and sometimes I even have to add some “non-native mistakes” to avoid being misjudged. This already feels like a situation where you never know when you’re going to cross a red line.

To help LessWrong genuinely benefit from diverse, cross-cultural, and high-quality thinking, I believe the following suggestions could help reduce the current uncertainty around LLM-related content:
1. Allow users to voluntarily disclose how they used LLMs—for instance, “grammar check only,” “minor phrasing edits,” or “co-written.”
2. Foster a community-based language support system—something like peer review—where contributors can openly assist each other in refining language without fear of stigma.
3. Use AI-detection tools as soft signals or flags for moderator review, rather than as automatic deletion triggers (see the rough sketch after this list).
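
To make the third suggestion a bit more concrete, here is a minimal sketch of what “soft signal, human review, never automatic deletion” could look like. Everything in it is hypothetical: the detector score, the threshold, and the names are made up for illustration and have nothing to do with LessWrong’s actual moderation tooling.

```python
# Hypothetical sketch of suggestion 3: an AI-detection score is only a soft
# signal that queues a post for moderator review; it never deletes anything.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Action(Enum):
    PUBLISH = "publish"                  # post goes live as usual
    FLAG_FOR_REVIEW = "flag_for_review"  # a human moderator looks first
    # deliberately no AUTO_DELETE member


@dataclass
class Submission:
    author: str
    text: str
    disclosed_llm_use: Optional[str]  # e.g. "grammar check only" (suggestion 1)


def route_submission(post: Submission, detector_score: float,
                     review_threshold: float = 0.8) -> Action:
    """Route a post using a made-up AI-detection score in [0, 1].

    A high score adds the post to a review queue together with the author's
    own disclosure; the decision to remove stays with a human moderator.
    """
    if detector_score >= review_threshold:
        print(f"flagged {post.author!r} for review "
              f"(score={detector_score:.2f}, disclosure={post.disclosed_llm_use!r})")
        return Action.FLAG_FOR_REVIEW
    return Action.PUBLISH


# Example: a borderline post gets a second pair of (human) eyes, not the bin.
post = Submission(author="someone", text="...", disclosed_llm_use="grammar check only")
assert route_submission(post, detector_score=0.85) is Action.FLAG_FOR_REVIEW
```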

These are just starting ideas—but I hope they point toward a more transparent and inclusive approach.

I personally believe (and I assume this is a widely shared view here) that AI should empower individuals, giving a voice to those who might otherwise struggle to be heard, and helping communities grow through the inclusion of diverse perspectives. It should not become a new form of constraint.

When it comes to how society should understand and regulate LLM-generated content, many countries and regions still lack clear legal frameworks. We’re in a gray area, where the boundaries are uncertain and constantly shifting.

That’s exactly why communities like LessWrong, where technical knowledge meets thoughtful discourse, are uniquely positioned to explore the ethical boundaries of LLM use. By fostering open discussion and experimentation, we can help shape responsible norms not only for ourselves but for broader society.

Edit note: Some readers may have interpreted this post as taking a confrontational stance, but that wasn’t my intent. I was trying to highlight an uncertainty many non-native speakers may quietly face when navigating new moderation rules. I care about this community and believe honest feedback can help make the system more robust for everyone. I’m open to revising my assumptions if better alternatives are proposed.

yue's Shortform
yue3mo10

See, this is exactly why the bar for me to express myself is so high. It’s like thousands of TOEFL examiners are reading my words, silently grading me in their heads. The tension is real, and if I make a grammar mistake or say something that gets misinterpreted or pushed back on, not because my idea was bad but because the English didn’t land right, it feels even worse than losing points on an actual exam essay.

I’m not just speaking for myself here.
Yes, the process is exhausting for me: writing a draft, running it through an LLM for a grammar and clarity check, then going back and deliberately editing out anything that sounds “too smooth” or “too LLM-like,” sometimes even reintroducing my own non-native quirks just to avoid being flagged (which is so weird). But I’m planning to study in an English-speaking country and pursue a PhD, so I can treat this as language training anyway.
What worries me more is that there are other non-native users here who are definitely smarter and more thoughtful than me, and their valuable insights are being filtered out simply because of language.
If that’s what LLMs have brought us, then what exactly have we gained from the development of LLMs here?

yue's Shortform
yue3mo81

Thank you for your kind tone and for noticing the effort I’ve put into improving my English. I genuinely appreciate that. Also, since this site values really precise language, the bar for non-native speakers gets really high. Unless you speak more than one language fluently, it’s hard to understand how tough that can be. It takes way more courage and patience: we constantly have to double-check whether our logic makes sense, whether the wording is clear enough, and whether we’ve 100% understood what others meant in the first place. I believe your comment points to a deeper issue that deserves serious attention. (Actually, I’m worried this sentence looks too “LLM-generated,” but I don’t know any other way to explain my feelings clearly and accurately enough.)

Let me first refer to the official policy itself:

“You can use AI as a writing or research assistant when writing content for LessWrong, but you must have added significant value beyond what the AI produced, the result must meet a high quality standard, and you must vouch for everything in the result.”

I completely agree with the intention behind this: to avoid AI replacing human thinking and to maintain the intellectual standard of the platform.

However, another line in the same guideline says:

“Prompting a language model to write an essay and copy-pasting the result will not typically meet LessWrong’s standards. Please do not submit unedited or lightly-edited LLM content.”

As for this part, I believe it reflects a native-speaker perspective, something like:

“You prompt the AI for ideas or phrasing, then rephrase and reframe everything in your own writing, and then it’s your own work (to some extent).”

But for many non-native speakers like me, the process actually goes more like:

“We come up with the ideas, write the draft ourselves, then use an LLM to check the grammar and phrasing to make sure the language is clear and not awkward.”

The goal is not to replace our thinking, but to make it readable in a high-standard English forum like LessWrong.

I fully support filtering out low-effort, AI-prompted fluff. But removing high-quality, idea-driven posts by non-native speakers simply because the writing “sounds like an LLM”—even though the thinking behind it is entirely original—defeats the very purpose of the rule.

Yesterday, I commented on a Quick Takes post about “why people idealize foreign cultures.” I offered a perspective grounded in psychology and my own cross-cultural experience (which means if you really read it through, you would know it’s definitely not LLM-generated), then asked an LLM to review my grammar and phrasing, and the post was removed as “LLM-generated.”

This kind of outcome creates a painful contradiction:

A native speaker can submit a low-quality post, but it’s allowed because it sounds “human.”
A non-native speaker submits a thoughtful, valuable post (I’m not talking about myself; I know there must be other, smarter non-native speakers here facing the same trouble), but because the English is too clean or “LLM-like,” it gets rejected.

I don’t believe this is the intent of the policy. But the way it’s currently applied functions as a linguistic and cultural filter, shutting out good content from voices outside the English-speaking world. And that undermines the spirit of rationalism this community is built on.

yue's Shortform
yue3mo*41

Are the LLM-writing rules here fair to non-native speakers?

For non-native English speakers who speak English fairly well, like me (scored over 90 on the TOEFL, have English-speaking friends, can explain my field clearly in English, but don’t currently live in an English-speaking environment), reading and understanding English is OK. The hard part is recognizing the difference between “LLM-style writing” and “perfect human writing.”

When I give my writing to an LLM for checking and it changes some sentences, I tend to trust it. If the meaning looks accurate, I’d just assume: “My original writing wasn’t native enough, and an LLM would never make a grammar mistake. So I must be wrong, and it must be right.”

Now, just to avoid looking like I used an LLM, I’m forced to write entirely on my own, so I have to apologize in advance for the ridiculous grammar mistakes you may see in this post.

Policy for LLM Writing on LessWrong
yue3mo10

I read through the replies and noticed that most people are discussing the value of human thinking versus AI thinking and other big, abstract questions. But I just wanna ask one simple question:

Has anyone ever thought about how non-native English speakers feel?

This community asks for high-quality, clearly written posts, but at the same time says, “don’t write like an AI.” For non-native speakers, it’s sooooo hard to meet that standard.

I scored over 90 on the TOEFL, and I can speak English fluently and even explain academic material in my field clearly. But to make sure I don’t make grammar mistakes and that I’m using the right technical terms, I have to use LLMs to help check my writing.
 

The ideas are 100% my own and I include personal experience. The writing is definitely high-effort and original. But I can’t always guarantee it “doesn’t look like AI’s work.”

If this policy doesn’t make space for non-native speakers, then it’s just using language as a filter to block high-quality ideas from other cultures. That goes against the principles of rationalism.
