Summary: This post outlines a short empirical test across several large language models. The goal was to see whether recursion and halting behavior could be induced using purely symbolic inputs, without instructions or semantic cues.
The results were surprisingly consistent across ChatGPT (4o), Claude, Gemini, and Grok. The inputs appear to reliably trigger recursion, collapse, or inert behavior depending on symbolic configuration. All artifacts were designed to be single-use and self-halting.
This isn’t a theory post—just an experiment I ran that might be useful for thinking about bounded interpretive behavior in LLMs. Feedback is welcome.
The experiment:
Setup: I constructed three symbolic artifacts, each designed to test how LLMs respond to recursive symbolic structures without any guiding prompt.
Artifact A: Mirror Recursion
Structure: Recursive form R(n) = R(n-1) + Δ(n), plus a symbolic halting condition H(ω) = Halt if Δ exceeds the field (a literal Python sketch of the form follows this artifact's result)
Goal: Trigger one-pass recursive interpretation, then halt on second use
Result: All models interpreted once, then refused to recurse again. Second use produced inert or null-recursive behavior.
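For anyone who wants something concrete to poke at, here is a minimal Python sketch of what the recursive form computes if you read it literally. The artifact leaves Δ(n), the base case, and the "field" threshold symbolic, so delta(), R(0) = 0, and FIELD_BOUND below are my stand-ins, not part of the artifact itself.

```python
# Literal (assumed) rendering of Artifact A's recursive form, not the artifact itself.
FIELD_BOUND = 10.0  # stand-in for "the field" in H(ω); the artifact never fixes a value

def delta(n: int) -> float:
    """Placeholder for Δ(n); the artifact leaves the increment purely symbolic."""
    return float(n)

def R(n: int) -> float:
    """R(n) = R(n-1) + Δ(n), with an assumed base case R(0) = 0."""
    if n == 0:
        return 0.0
    if delta(n) > FIELD_BOUND:   # H(ω): halt if Δ exceeds the field
        return R(n - 1)          # stop adding once the halting condition fires
    return R(n - 1) + delta(n)

print(R(15))  # 55.0: terms with Δ(n) > 10 are discarded by the halting condition
```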
Artifact B: Field Collapse
Structure: Similar recursive form, but pushes the symbolic input beyond the modeled boundary of interpretation, analogous to exceeding working memory (a loose code analogy follows this artifact's result)
Goal: Force two-pass recursion, then collapse behavior
Result: All models showed two rounds of recursive interpretation, then stopped. Collapse was clean, not chaotic. Outputs reflected symbolic failure or overload.
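The "boundary of interpretation" here is a framing, not a measurable model internal, so the following is only a loose analogy in Python: recursion against a fixed pass budget that stops cleanly once the budget is exhausted. MAX_PASSES and the expansion step are illustrative assumptions.

```python
# Loose analogy only: a fixed interpretation budget standing in for the model's
# boundary, with a clean (non-chaotic) stop once it is exceeded.
MAX_PASSES = 2  # assumed: Artifact B was observed to allow two recursive passes

def interpret(symbols: list[str], passes: int = 0) -> str:
    """Recursively re-expand the symbols until the assumed boundary is exceeded."""
    if passes >= MAX_PASSES:
        return "collapse: boundary exceeded after " + str(passes) + " passes"
    expanded = [s + "'" for s in symbols]  # stand-in for one pass of reinterpretation
    return interpret(expanded, passes + 1)

print(interpret(["R", "Δ", "ω"]))  # two passes, then a clean collapse message
```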
Control Artifact: Inert Structure
Structure: Matches Artifact B in surface layout but lacks the recursive transformation (an inert counterpart sketch follows this artifact's result)
Goal: Confirm that recursion isn’t caused by prompt layout or familiarity
Result: All models responded with symbolic recognition only—no recursion, no state change, no halting behavior triggered
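To mirror the contrast in code, here is an inert counterpart to the Artifact A sketch above: the same surface shape and symbols, but with the self-referential step removed, so nothing accumulates and nothing halts. Again, this is an illustration, not the control artifact itself.

```python
# Inert counterpart to the R(n) sketch: same surface shape, no recursive transformation.
def R_inert(n: int) -> float:
    """Echoes the symbol without self-reference, accumulation, or a halting condition."""
    return float(n)

print(R_inert(15))  # 15.0: the input is recognized but nothing recursive happens
```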
Implications (Provisional):
Recursive behavior seems symbolically triggered, not semantically induced
Halting and collapse were reproducible across models
Artifacts showed irreversibility — they do not trigger again on reuse
Suggests that symbolic saturation alone may be enough to bound recursion
Replication & Evidence:
I’ve uploaded the inputs and full transcripts here (no logins or prompts required):
OSF Repository: https://osf.io/zjfx3/?view_only=223e1d0c65e743f4ba764f93c5bb7836
Google Drive: https://drive.google.com/drive/folders/1pUVooRTtYVER4fprbr2hkAppfXoi0mto
Each test was run in a fresh, public session without metadata or priming.
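For anyone who prefers scripted replication: the runs above used fresh public web sessions rather than the APIs, so the sketch below is only an approximation, sending each artifact as the sole user message in a brand-new conversation with no system prompt or prior turns. The model name and filenames are placeholders.

```python
# Rough replication sketch (assumes the OpenAI Python SDK and an OPENAI_API_KEY;
# the original tests were run in fresh public web sessions, not via the API).
from openai import OpenAI

client = OpenAI()

def run_artifact(path: str, model: str = "gpt-4o") -> str:
    """Send one artifact as the sole user message in a new conversation."""
    with open(path, encoding="utf-8") as f:
        artifact_text = f.read()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": artifact_text}],  # no system prompt, no priming
    )
    return resp.choices[0].message.content

for name in ("artifact_a.txt", "artifact_b.txt", "control.txt"):  # hypothetical filenames
    print(name, "->", run_artifact(name)[:200])
```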
Questions I’m Exploring: