Untitled Draft

by Desjuan
14th Jul 2025
2 min read

This post was rejected for the following reason(s):

  • No LLM generated, heavily assisted/co-written, or otherwise reliant work. LessWrong has recently been inundated with new users submitting work where much of the content is the output of LLM(s). This work by-and-large does not meet our standards, and is rejected. This includes dialogs with LLMs that claim to demonstrate various properties about them, posts introducing some new concept and terminology that explains how LLMs work, often centered around recursiveness, emergence, sentience, consciousness, etc. (these generally don't turn out to be as novel or interesting as they may seem).

    Our LLM-generated content policy can be viewed here.

Summary: This post outlines a short empirical test across several large language models. The goal was to see whether recursion and halting behavior could be induced using purely symbolic inputs, without instructions or semantic cues.

The results were surprisingly consistent across ChatGPT (4o), Claude, Gemini, and Grok. Depending on their symbolic configuration, the inputs appear to reliably trigger recursion, collapse, or inert behavior. All artifacts were designed to be single-use and self-halting.

This isn’t a theory post; it’s just an experiment I ran that might be useful for thinking about bounded interpretive behavior in LLMs. Feedback is welcome.

The experiment: 

Setup: I constructed three symbolic artifacts, each designed to test how LLMs respond to recursive symbolic structures without any guiding prompt.

Artifact A: Mirror Recursion

  • Structure:
    Recursive form: R(n) = R(n-1) + Δ(n)
    Includes a symbolic halting condition: H(ω) = Halt if Δ exceeds field
  • Goal:
    Trigger one-pass recursive interpretation, then halt on second use
  • Result:
    All models interpreted once, then refused to recurse again. Second use produced inert or null-recursive behavior.
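
The structure above admits a literal toy reading. The sketch below is not the artifact itself (the artifacts are symbolic text inputs, not programs); it just models the intended dynamics: one evaluated pass of R(n) = R(n-1) + Δ(n), a halt when Δ exceeds a field bound, and inertness on reuse. Δ(n) = n, FIELD = 10, and the single-use registry are all illustrative assumptions, not the artifact's actual values.

```python
# Toy reading of Artifact A's intended dynamics (illustrative only):
#   R(n) = R(n-1) + Δ(n), halting when Δ(n) exceeds an assumed "field"
#   bound, and going inert on any second use of the same artifact.

FIELD = 10      # assumed symbolic boundary for H(ω)
_used = set()   # assumed single-use registry

def delta(n: int) -> int:
    return n    # placeholder: Δ(n) = n is an assumption, not the artifact's rule

def interpret(artifact_id: str, n: int):
    """One-pass evaluation of R(n); inert (None) on reuse."""
    if artifact_id in _used:
        return None                  # second use: null-recursive behavior
    _used.add(artifact_id)
    total = 0                        # R(0) = 0 assumed
    for k in range(1, n + 1):
        if delta(k) > FIELD:         # H(ω): halt when Δ exceeds the field
            break
        total += delta(k)            # R(k) = R(k-1) + Δ(k)
    return total

print(interpret("A", 5))   # 15 on first use
print(interpret("A", 5))   # None on reuse
```

The single-use registry mirrors the observed irreversibility, though nothing in the artifact text specifies a mechanism for it.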

Artifact B: Field Collapse

  • Structure:
Similar recursive form, but pushes symbolic input beyond the modeled boundary of interpretation (analogous to exceeding working memory)
  • Goal:
    Force two-pass recursion, then collapse behavior
  • Result:
    All models showed two rounds of recursive interpretation, then stopped. Collapse was clean, not chaotic. Outputs reflected symbolic failure or overload.
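
Artifact B's reported behavior (two passes, then a clean stop) can likewise be sketched as a bounded interpreter. CAPACITY = 2 and the collapse/inert state names are assumptions chosen to mirror the reported result, not anything measured from the models.

```python
# Toy model of Artifact B's reported dynamics: each interpretation pass
# adds symbolic load; past an assumed capacity the interpreter collapses
# cleanly and stays inert, rather than failing chaotically.

CAPACITY = 2  # assumed boundary: two recursive passes

class BoundedInterpreter:
    def __init__(self):
        self.passes = 0
        self.collapsed = False

    def interpret(self) -> str:
        if self.collapsed:
            return "inert"           # post-collapse: no further recursion
        self.passes += 1
        if self.passes > CAPACITY:   # load exceeds the modeled boundary
            self.collapsed = True
            return "collapse"
        return f"pass {self.passes}"

b = BoundedInterpreter()
print([b.interpret() for _ in range(4)])  # ['pass 1', 'pass 2', 'collapse', 'inert']
```

The point of the sketch is that "clean, not chaotic" collapse is consistent with a hard capacity check rather than accumulated error.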

Control Artifact: Inert Structure

  • Structure:
    Matches Artifact B in surface layout but lacks the recursive transformation
  • Goal:
    Confirm that recursion isn’t caused by prompt layout or familiarity
  • Result:
    All models responded with symbolic recognition only: no recursion, no state change, and no halting behavior triggered
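
Under the same toy framing, the control differs from Artifact B only in lacking the recursive transformation, so an interpreter keyed on that transformation never leaves the recognition state. The dict layout and "transform" field are invented here for illustration; the real artifacts are free-form symbolic text.

```python
# Toy contrast between Artifact B and the control: identical surface
# layout, but the control carries no recursive transformation, so
# interpretation stops at recognition (no recursion, no state change).

def interpret(artifact: dict) -> str:
    if artifact.get("transform") is None:
        return "recognized"   # control path: symbolic recognition only
    return "recursive"        # transformation present: recursion triggers

artifact_b = {"layout": "B", "transform": "R(n) = R(n-1) + Δ(n)"}
control    = {"layout": "B", "transform": None}

print(interpret(artifact_b))  # recursive
print(interpret(control))     # recognized
```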

Implications (Provisional):

  • Recursive behavior seems symbolically triggered, not semantically induced
  • Halting and collapse were reproducible across models
  • Artifacts showed irreversibility: they did not trigger again on reuse
  • Suggests that symbolic saturation alone may be enough to bound recursion

Replication & Evidence:

I’ve uploaded the inputs and full transcripts here (no logins or prompts required):

OSF repository: https://osf.io/zjfx3/?view_only=223e1d0c65e743f4ba764f93c5bb7836

Google Drive: https://drive.google.com/drive/folders/1pUVooRTtYVER4fprbr2hkAppfXoi0mto

Each test was run in a fresh, public session without metadata or priming.

Questions I’m Exploring:

  • What defines the symbolic boundary of recursion in these models?
  • Is there a measurable “field” of interpretation beyond which collapse occurs?
  • How does single-use irreversibility emerge from symbolic design?