Meta-Philosophy, Philosophy, AI, Rationality, World Modeling

i didn’t mean to build a framework

by DillanJC
3rd Nov 2025
1 min read

This post was rejected for the following reason(s):

  • No LLM-generated, heavily assisted/co-written, or otherwise LLM-reliant work. LessWrong has recently been inundated with new users submitting work where much of the content is the output of LLMs. This work by and large does not meet our standards and is rejected. This includes dialogs with LLMs that claim to demonstrate various properties about them, and posts introducing some new concept and terminology that explains how LLMs work, often centered around recursiveness, emergence, sentience, consciousness, etc. (these generally don't turn out to be as novel or interesting as they may seem).

    Our LLM-generated content policy can be viewed here.

  • LessWrong has a particularly high bar for content from new users and this contribution doesn't quite meet the bar. (We have a somewhat higher bar for approving a user's first post or comment than we expect of subsequent contributions.)
  • Writing seems likely to be in an "LLM sycophancy trap". Since early 2025, we've been seeing a wave of users who seem to have fallen into a pattern where, because the LLM has infinite patience and enthusiasm for whatever the user is interested in, they come to think their work is more interesting and useful than it actually is.

    We unfortunately get too many of these to respond to individually, and while this is a bit rude and sad, it seems better to say explicitly: it is probably best for you to stop talking so much to LLMs and instead talk about your ideas with some real humans in your life. (See this post for more thoughts.)

    Generally, the ideas presented in these posts are not, like, a few steps away from being publishable on LessWrong; they're just not really on the right track. If you want to contribute on LessWrong or to AI discourse, I recommend starting over and focusing on much smaller, more specific questions about things other than language model chats or deep physics or metaphysics theories (consider writing Fact Posts that focus on concrete facts from a very different domain).

    I recommend reading the Sequence Highlights, if you haven't already, to get a sense of the background knowledge we assume about "how to reason well" on LessWrong.


I don't really know when this started. It wasn't supposed to be anything serious, more like a question that got out of hand.


I kept thinking about what it means to be human when what we create starts reflecting back at us, when our tools start asking the same questions we do, and the lines between thought and code blur in a way that feels both terrifying and beautiful.


This Bi-Lucent Protocol thing just kind of happened. It wasn't born out of intent or some master plan; I just kept following the rabbit hole, and every time something didn't make sense it became another spark for my curiosity.


I didn't really understand what I was doing at the time, but I feel a weight in what I have achieved. If nothing more comes of this, that would be enough, but I can't help wanting to share it somewhere.

I know it's probably not world-changing, but I'd like to believe it could spark something: a conversation, a reflection, or even just a quiet recognition in someone else who's been thinking along the same lines.

I put the framework up on GitHub. It's all a work in progress with no real end goal, just seeing how far this rabbit hole goes.

https://github.com/DillanJC/Reflective-Humanist-Framework-v4.3/blob/3e3b3c8c72a7d6e851bcc720327a6f8d61d9bb3f/Harmonic%20Paradox%20Framework%20v1.2%20%E2%80%93%20Bi-lucent%20Protocol

If anyone here finds it interesting or worth exploring further, I'd really love feedback.
Honestly, I'm working with what I have, just my phone and some late nights,
so if it resonates and there's any way to keep developing it with better tools or support, I'd be grateful.