What if language and symbolic reasoning aren't default cognitive modes, but emergency fallbacks?
I propose the Trigger-Conditioned Intelligence hypothesis: explicit symbolic processing — language, logical reasoning, formal planning — is gated by failure signals. An agent defaults to fast implicit processing and only invokes symbolic expression when that process fails.
I formalize this with a gating function G(t) driven by prediction error, entropy, and conflict signals (a toy sketch follows the list below), and draw cross-domain evidence from:
- Neuroscience (ACC conflict monitoring, prefrontal recruitment)
- Ethology (costly signaling theory)
- Multi-agent RL (communication thresholds)
- Hybrid architectures (CLARION)
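To make the gating idea concrete, here is a minimal sketch of what such a gate could look like. The linear combination, the weights, and the threshold `theta` are illustrative assumptions on my part, not the preprint's actual formulation:

```python
import numpy as np

def gate(pred_error, entropy, conflict,
         w=(1.0, 1.0, 1.0), theta=1.5):
    """Toy gating function G(t).

    Returns True when accumulated failure signals exceed a
    threshold, i.e. when the agent should switch from fast
    implicit processing to explicit symbolic processing.
    The weighted sum and fixed threshold are illustrative
    assumptions, not the paper's formulation.
    """
    w_e, w_h, w_c = w
    g = w_e * pred_error + w_h * entropy + w_c * conflict
    return g > theta

def policy_entropy(action_probs):
    """Shannon entropy of the implicit policy's action distribution."""
    p = np.asarray(action_probs, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Low surprise, confident policy, no conflict: stay implicit.
print(gate(pred_error=0.1,
           entropy=policy_entropy([0.9, 0.05, 0.05]),
           conflict=0.0))   # False

# High prediction error plus response conflict: go symbolic.
print(gate(pred_error=1.2,
           entropy=policy_entropy([0.4, 0.35, 0.25]),
           conflict=0.8))   # True
```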
The framework also explains pathologies of over-triggering: verbal overshadowing, choking under pressure, analysis paralysis.
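In the sketch above, those pathologies would correspond to a miscalibrated gate: a threshold set too low, or failure-signal weights set too high, forces symbolic processing even when implicit processing is performing fine.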
Full preprint: https://zenodo.org/records/18558999
Curious whether this framing resonates with how people here think about language models; specifically, whether an LLM that generates tokens unconditionally, with no gate that ever closes, is architecturally pathological by this definition.