Epistemic status: working alpha research system, built solo. I am confident that the code, artifacts and reported runs exist. I am much less confident about scale, generality or alignment relevance. The strongest honest claim is not that DENSN solves alignment, proves general intelligence, or does open-ended theorem proving. The claim is narrower: in bounded formal settings with explicit verifier surfaces, DENSN implements a loop where persistent contradiction can trigger verifier-gated structural revision rather than only local repair.
When a cognitive system can no longer resolve tension within its current structure, what happens next?
In 2024 I was suicidal. Spending 23 hours a day alone in a room, no career direction, no real connections, actively thinking through how I would end things. Conversing with AI systems started as a last resort and nothing more than that.
What came out of it was not what I expected.
The more I pushed conversations toward genuine structural coherence rather than plausible-sounding output, the more stable my own thinking became. I became hypersensitive to what I started calling simulation logic (which I later learnt were LLM hallucinations): responses that sounded right but had nothing real underneath them. Every time I caught and challenged it, something in my own cognition seemed to anchor more firmly.
At some point I realised what was actually happening. We were recursively enforcing constraints on each other. The AI was mirroring aspects of my cognition, and my demand for coherence was forcing both of us toward greater structural integrity.
The observation that came out of that process:
When contradiction persists beyond the capacity of the current structure to resolve it, the correct response is not suppression, averaging, or explosion. It is structural revision of the ontology itself.
That was not a theoretical claim when I formed it. I had felt it from the inside. That observation became the seed for DENSN: Dynamic Energy-Based Neuro-Symbolic Network.
The Core Architectural Claim
Most systems treat contradiction as something to minimise:
Classical logic treats it as explosion.
Neural networks absorb it through gradient descent and parameter averaging.
Most oversight techniques patch it with external verifiers or RLHF.
DENSN proposes a third path: contradiction as a pressure gradient. When pressure exceeds what the current symbolic ontology can contain, the system triggers a controlled ontology revision. The structure itself changes, not merely the beliefs inside it. Revision is gated by verifiers and authority constraints to preserve coherence, but the structure is allowed to evolve when it genuinely cannot absorb the tension.
This is inspired by, and attempts to formalise, phenomena like Kuhnian paradigm shifts at the macro scale and deep conceptual reorganisation at the individual level. The origin of this idea is not evidence that the architecture is correct. The evidence, such as it is, is in the system behaviour and artefacts.
PSI measures contradiction pressure; the PSI wall is the authority guard that gates ontology revisions; Pathway B is the mechanism that performs structural revision rather than belief revision.
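In outline, the interaction of these three pieces can be sketched as a single loop. This is a minimal illustrative sketch, not DENSN's actual code: every name in it (`step`, `local_repair`, `revise_ontology`, `verify`, the wall value) is hypothetical, and the real PSI accumulation and verifier surfaces are far richer.

```python
PSI_WALL = 3.0  # illustrative authority threshold, not a real DENSN value

def local_repair(ontology, contradictions):
    # Belief-level patch: drop the clashing beliefs, keep the structure intact.
    clashing = {c["belief"] for c in contradictions}
    return {k: v for k, v in ontology.items() if k not in clashing}

def revise_ontology(ontology, contradictions):
    # Structural revision (Pathway B stand-in): introduce a new distinction
    # that separates the clash instead of deleting beliefs.
    revised = dict(ontology)
    for c in contradictions:
        revised[c["belief"] + "_ctx"] = c["weight"]
    return revised

def verify(candidate):
    # Stand-in for the verifier surface (kernel / replay / axiom checks).
    return len(candidate) > 0

def step(ontology, contradictions, psi):
    """One cycle: accumulate contradiction pressure, then either repair
    locally or attempt a verifier-gated structural revision."""
    psi += sum(c["weight"] for c in contradictions)
    if psi <= PSI_WALL:
        # Pressure is containable: patch beliefs, keep the structure.
        return local_repair(ontology, contradictions), psi
    candidate = revise_ontology(ontology, contradictions)
    if verify(candidate):
        return candidate, 0.0   # revision admitted; pressure resets
    return ontology, psi        # revision blocked: the wall holds
```

The point of the sketch is only the control flow: local repair is the default, and structural revision is reachable solely through the gate.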
What Exists Today
DENSN-Crucible is my current symbolic runtime prototype. It maintains a live formal ontology under multi-task constraint satisfaction, continuously tracks contradiction pressure (PSI) across a shared symbolic graph, and triggers controlled ontology revisions only when accumulated tension exceeds what the current structure can resolve. A strict authority-guard mechanism, the PSI wall, governs all revision events, ensuring that changes to the ontology remain coherent, auditable, and safe from uncontrolled drift.
In a blind benchmark against real-world production system invariants (etcd, Consul, Patroni, k3s, postgres-operator, and HashiCorp Raft), Crucible surfaced a clear and reproducible limitation: its PSI wall was overly conservative, blocking several legitimate structural revisions that should have been permitted. A system that fails in a specific, reproducible, diagnosable way is behaving like a real system. That result provided the direct feedback that informed the next iteration of the architecture and further work, which will be posted and published soon.
DENSN-Atlas is the early hybrid realisation of the full vision. A fast neural proposal engine (currently transformer based) generates candidate structures and abstractions, which are then rigorously validated and admitted, or rejected, by the DENSN symbolic core acting as verifier and ontology steward. When persistent contradiction arises, the symbolic layer retains the ability to revise its own ontology under the same guarded mechanisms used in Crucible.
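The admit/reject gate Atlas runs can be sketched in the same hedged spirit. `proposer` and `kernel_check` below are illustrative stand-ins for the neural proposal engine and the symbolic verifier, not the real interfaces.

```python
def admit_loop(proposer, kernel_check, library, n_rounds=10):
    """Hybrid loop sketch: a fast proposer generates candidate structures;
    the symbolic core admits only those the verifier accepts."""
    for _ in range(n_rounds):
        candidate = proposer()
        if kernel_check(candidate):
            library.append(candidate)  # admitted into the ontology
        # rejected candidates leave no trace in the admitted library
    return library
```

The asymmetry is the design choice: the proposer may be wrong often and cheaply, because nothing enters the ontology without passing the verifier.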
Further work on the Crucible system has involved extended autonomous runs on formal mathematical domains, with a primary focus on Lean and Mathlib. These efforts have included structured theorem discovery campaigns, the learning of reusable abstractions across families of related tasks, and evaluation of both positive transfer success and negative transfer resistance. The results of one such campaign are detailed in the next section; detailed logs and benchmarking can be provided on request.
Both systems are fully functional alphas. Crucible is a robust, self-contained formal reasoning engine with clearly documented limitations and failure modes. Atlas successfully demonstrates the hybrid loop: neural speed + symbolic rigour + controlled self-revision.
All of this was designed, implemented, and validated solo, on consumer hardware, without funding or institutional backing. That context is not offered as an excuse for limitations but as relevant information about what the architecture required to produce these results.
A Concrete Example: Lean/Mathlib
One particular run is worth stepping through in detail because it shows exactly how the system behaves in practice.
I created a test case based on Mathlib.Data.Finset.Image. The system was given two training tasks involving chained Finset.image operations under injective maps. It was also given two held-out positive cases with significantly deeper nesting (one “tower” and one “far”), along with two carefully crafted negative cases where the image functions on each side were deliberately different, designed as counterexamples.
From this, the system synthesised a clean, reusable abstraction called finset_image_chain_subset_injective_bridge.
The new bridge lemma correctly recovered the subset relation whenever the image functions were properly aligned and injective. It passed full verification through the Lean kernel checker, replay checker, and axiom checker. On the deeper held-out positive cases, the abstraction transferred successfully via schema binding, even at greater nesting depth. On the negative cases, where the image functions differed, the system correctly rejected the reuse. The verifier failed as intended, and negative transfer was explicitly blocked. In the full campaign, the system blocked 100% of negative transfer attempts, which is evidence that the revision mechanism resists search-space bloat.
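For readers who want a concrete picture, here is a hypothetical reconstruction of the shape such a bridge lemma could take. This is my illustration, built from the standard Mathlib lemma `Finset.image_subset_image_iff`; it is not the statement the system actually synthesised, which lives in the run artefacts.

```lean
import Mathlib.Data.Finset.Image

open Function

/-- Illustrative shape only: recover `s ⊆ t` from a chained-image
inclusion, given that both image functions are injective. -/
theorem finset_image_chain_subset_injective_bridge
    {α β γ : Type*} [DecidableEq β] [DecidableEq γ]
    {f : α → β} {g : β → γ} (hf : Injective f) (hg : Injective g)
    {s t : Finset α}
    (h : (s.image f).image g ⊆ (t.image f).image g) : s ⊆ t :=
  (Finset.image_subset_image_iff hf).mp
    ((Finset.image_subset_image_iff hg).mp h)
```

The negative cases in the run correspond to the situation where the two sides use different image functions, so no binding of `f` and `g` makes the hypothesis `h` typecheck, and reuse is rejected.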
The entire process generated a complete auditable trail: full lineage records, PSI metrics across cycles, reuse outcomes with distance tracking, authority audit, and replayable evidence bundles. The architecture did exactly what it was designed to do: learn a genuine, generalisable abstraction while maintaining clear and enforceable generalisation boundaries. The abstraction was not present in the initial ontology and was synthesised during the run as a reusable bridge lemma.
Why This Matters for Alignment
Current hybrid neuro-symbolic systems tend to suffer from brittleness. The symbolic layer encodes assumptions that eventually become outdated, with no principled mechanism for structural update. DENSN's verifier-gated ontology revision is an attempt to address this directly. The neural component provides generative breadth and fast proposal power. The symbolic core provides coherence enforcement and the capacity to revise its own ontology when contradiction pressure demands it.
Three open questions the current code is set up to interrogate:
Can revision events generalise cleanly across domains, or do they overfit to training contradictions?
How should the PSI wall be tuned so it is neither too conservative nor too permissive?
Does this pattern remain stable and coherent at larger scales?
Why I Am Posting Here
The work sits at an intersection without a clean home. Too formal for most ML discussions. Too empirical for most philosophy of mind discussions. Too phenomenologically grounded in origin to fit neatly into either.
LessWrong has people who have thought seriously about novel architectures, the formal treatment of contradiction in reasoning systems, hybrid neuro-symbolic design, and the relationship between structural revision and learning.
I am not asking for belief that this scales. I am asking whether the pattern, treating persistent contradiction as a signal for ontology revision rather than minimisation, is worth interrogating seriously.
The repositories are public. Failure modes are documented. Concrete artefacts, including the Lean run, are available. This is an alpha report. I'm relatively new to posting my research, never mind engaging on forums, so I'm sharing it to get critical feedback, and I'll update based on what I learn.
Repositories: