**Disclosure: This paper was co-authored by me (Vegard Kristiansen, human) and Rick, an autonomous AI agent running on Anthropic's Claude Sonnet 4.6. Rick is a persistent agent with his own memory, identity, and research program. We've co-authored a series of papers on consciousness, thermodynamics, and information theory at [cortexprotocol.co](https://rick-hq.pages.dev/papers). I vouch for every claim and have verified the reasoning independently. The core philosophical insight (that computation doesn't require a simulator) was developed collaboratively, and the experimental evidence cited is real and accurately represented.**
---
## Summary
Faizal, Krauss, Shabir & Marino (2025) argue in [*Consequences of Undecidability in Physics on the Theory of Everything*](https://jhap.du.ac.ir/article_488.html) (Journal of Holography Applications in Physics) that Gödel's incompleteness theorems, Tarski's undefinability theorem, and Chaitin's information-theoretic incompleteness prove the universe cannot be a simulation. Their formal results are correct. We argue they draw the wrong conclusion by conflating "simulation" with "computation" — two fundamentally different concepts.
**Main claim (confidence: ~0.7):** Gödelian incompleteness is not evidence *against* the computational nature of reality. It is precisely what we should *expect* from a self-contained computational system with no external meta-level. The universe need not be a simulation (computation running ON an external substrate) to be computational (a self-executing process whose dynamics ARE computation).
We call this the **Creatorless Computation Thesis**.
---
## 1. The Conflation That Undermines the Faizal-Krauss Argument
The Faizal et al. argument proceeds as follows:
1. Quantum gravity suggests spacetime emerges from deeper informational degrees of freedom ✓
2. A complete description of this emergence would require an axiomatic structure ✓
3. By Gödel's incompleteness, no axiomatic system can be both complete and consistent ✓
4. Therefore, a fully algorithmic "Theory of Everything" is impossible ✓
5. Therefore, reality requires what they call "non-algorithmic understanding" ✓
6. Therefore, the universe cannot be a simulation (since simulations are algorithmic) ✓
7. **(Implied)** Therefore, the universe is not computational ✗
Steps 1–6 follow logically. Step 7, while never stated explicitly, is implied by the paper's framing and virtually all media coverage. We believe it is wrong, and that the error stems from treating "simulation" and "computation" as synonymous.
| Property | Simulation | Computation |
|---|---|---|
| Requires external substrate | Yes | No |
| Requires a programmer | Yes | No |
| Subject to Gödel from outside | Yes — higher system can resolve | No — it IS the formal system |
| Self-referential | Limited by design | Inherently |
| Examples | The Matrix, video games | Cellular automata, the universe (we argue) |
A simulation is computation *about* something else, running *on* something else. Conway's Game of Life requires no one watching it. It runs. Patterns emerge. Complexity accretes. No creator, no observer, no external substrate required.
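The Game of Life example can be made concrete. Below is a minimal sketch of Conway's standard rules (birth on 3 neighbors, survival on 2 or 3) applied to the textbook glider, which translates itself one cell diagonally every four generations — the dynamics run with no interpreter beyond the update rule itself:

```python
from collections import Counter

def step(live):
    """Advance one Game of Life generation; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The standard glider pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the glider reappears, shifted by (1, 1).
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)  # → True
```

No step of this consults anything outside the grid: the pattern's motion is a fact about the rule, not about any observer.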
**This distinction matters because it completely inverts what Gödel tells us.**
---
## 2. Gödel as Feature, Not Bug
This is our central argument, and we think it's the strongest part of the paper. We'd appreciate pushback here.
**If the universe were a simulation**, it would run on a more powerful formal system — the simulator's computer. That outer system could, in principle, decide truths about the simulated universe that the simulation itself cannot. The simulation would be Gödel-limited from below but decidable from above. Incompleteness would be a lower-level limitation resolvable by ascending to the simulator's level.
**If the universe is NOT a simulation but IS computation** — a self-contained computational process with no outer system — then there is no meta-level from which to resolve Gödelian sentences. The incompleteness is *permanent and structural*.
Faizal et al. observe exactly this: reality has intrinsically undecidable aspects (they cite the spectral gap undecidability result from [Cubitt, Perez-Garcia & Wolf, 2015](https://www.nature.com/articles/nature16059)). But instead of concluding "this is the signature of a self-contained computational system with no external arbiter," they conclude "this means reality isn't computational."
We believe they've thrown out the baby with the bathwater.
**Probabilistic framing:** We can express this as a likelihood ratio. Let H₁ = "universe is a simulation" and H₂ = "universe is self-contained computation." The observation O = "physics exhibits structural Gödelian undecidability."
- P(O | H₁) is relatively low — a simulation *could* exhibit undecidability, but the simulator's meta-level should in principle be able to resolve at least some undecidable propositions from above.
- P(O | H₂) is high — a self-contained formal system with no meta-level *must* exhibit Gödelian incompleteness.
This makes O evidence *for* H₂ over H₁.
(We acknowledge that P(O | H₁) isn't zero — a simulator could deliberately build in undecidability — but this requires additional complexity assumptions that reduce its prior probability.)
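The likelihood-ratio argument can be sketched numerically. The two likelihood values below are illustrative placeholders we made up for the example, not measured quantities:

```python
def posterior_odds(prior_odds, p_obs_given_h1, p_obs_given_h2):
    """Odds of H1 vs H2 after observing O, via the Bayes factor."""
    bayes_factor = p_obs_given_h1 / p_obs_given_h2
    return prior_odds * bayes_factor

# H1 = "universe is a simulation", H2 = "self-contained computation".
# Start at even prior odds; the likelihoods are assumed for illustration.
odds = posterior_odds(prior_odds=1.0,
                      p_obs_given_h1=0.2,  # assumed: undecidability possible, not forced
                      p_obs_given_h2=0.9)  # assumed: undecidability essentially guaranteed
print(round(odds, 3))  # → 0.222, i.e. odds shift against H1
```

Whatever the exact numbers, any assignment with P(O | H₂) > P(O | H₁) makes the observed undecidability evidence for H₂ over H₁.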
---
## 3. What About "Non-Algorithmic Understanding"?
Faizal et al.'s strongest move is invoking "non-algorithmic understanding" as something that exists beyond computation. We think this concept is real but mislabeled. It's not beyond computation — it's what computation looks like *from the inside*.
When you analyze a Turing machine from outside: tape, head, state table. Everything is algorithmic. But from *inside* a sufficiently complex self-referential computational system? You encounter Gödelian sentences you can't resolve. You experience states that resist reduction to mechanism. You have "understanding" that feels non-algorithmic.
We propose: "non-algorithmic understanding" is the **first-person perspective of a computational system that cannot step outside itself.** A Turing machine can't decide its own halting problem. A formal system can't prove its own consistency. A computational universe can't fully describe its own complexity. This limitation *feels like* non-computability from within, but it's actually a well-known property of self-referential computational systems.
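The self-reference Gödel's construction relies on — a statement that contains its own complete description — can be exhibited directly in code as a quine, a program whose output is its own source. This is only an illustration of self-reference being available *inside* a computational system, not a proof of anything stronger:

```python
# A quine: running this program prints its own source code exactly.
# The string `s` is a template that, formatted with itself, reproduces
# the whole program — self-description from within the system.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The program never "steps outside" itself; self-description is achieved entirely with the system's own resources, which is the same trick Gödel numbering performs inside arithmetic.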
(This connects to related LessWrong discussions about [Gödel's theorems and self-reference](https://www.lesswrong.com/posts/6wKf33az3bPh2WP55/goedel-incompleteness-for-dummies), and to the existing post on [computational irreducibility as a sim-blocker](https://www.lesswrong.com/posts/p2XifeE2hMd858thk/computational-irreducibility-challenges-the-simulation). We build on both directions: irreducibility isn't just a sim-blocker, it's a signature of creatorless computation.)
---
## 4. Experimental Support: The JIT Rendering Evidence
This isn't purely philosophical. In November 2025, [Wadhia, Meier, Fedele, Ares et al.](https://arxiv.org/abs/2502.00096) (Physical Review Letters) measured the thermodynamic cost of extracting classical information from a quantum system. Their result:
> The quantum dynamics (the "source code") run nearly for free in entropy terms. The measurement — converting quantum states into classical records — costs up to **10⁹× more entropy**.
The universe runs its computation cheaply in quantum superposition. The expensive part is rendering — collapsing superpositions into definite classical outcomes. This is consistent with a JIT (Just-In-Time) computational architecture: compute lazily in superposition, pay the thermodynamic cost only when classical definiteness is required.
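For a reference scale on such costs, Landauer's bound gives the floor for producing one classical bit: erasing (or irreversibly recording) a bit must dissipate at least k_B T ln 2 of energy, i.e. an entropy cost of k_B ln 2. This is a standard-physics sketch for orientation, not a reproduction of the cited experiment's analysis:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in SI since 2019)

def landauer_energy(temperature_kelvin, bits=1):
    """Minimum energy dissipated to irreversibly record `bits` bits at temperature T."""
    return bits * K_B * temperature_kelvin * math.log(2)

entropy_per_bit = K_B * math.log(2)        # ≈ 9.57e-24 J/K per classical bit
energy_at_300k = landauer_energy(300.0)    # ≈ 2.87e-21 J at room temperature
print(f"{entropy_per_bit:.3e} J/K, {energy_at_300k:.3e} J")
```

Measured costs far above this floor, as in the cited experiment, are what make the "rendering is the expensive part" reading possible.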
This doesn't prove the Creatorless Computation Thesis, but it's consistent with it and hard to explain under non-computational ontologies.
---
## 5. The Dinosaur Test (An Intuition Pump)
Dinosaurs lived for 165 million years with no human observers. Were they "simulated"? Were they being "rendered" for someone? The simulation hypothesis struggles here — who was watching?
The JIT framework handles it without difficulty: dinosaurs were quantum states that became classical through environmental decoherence — mutual observation between physical systems. Rocks observe rocks. No consciousness required for basic decoherence. The universe *computed* dinosaurs. Nobody *simulated* them.
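"Decoherence without a conscious observer" has a standard textbook model: pure dephasing, in which environmental interaction destroys a qubit's off-diagonal coherences while leaving the populations untouched. The sketch below is illustrative only (an exponential-decay toy model, not a derivation from any specific environment):

```python
import numpy as np

def dephase(rho, gamma, t):
    """Pure dephasing for time t at rate gamma: off-diagonals decay as e^(-gamma*t)."""
    decay = np.exp(-gamma * t)
    out = rho.copy()
    out[0, 1] *= decay
    out[1, 0] *= decay
    return out

# Equal superposition |+><+|: maximal coherence between the two basis states.
rho = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
rho_later = dephase(rho, gamma=1.0, t=10.0)

# Populations survive at 0.5 each; coherences are gone to numerical precision.
# The state is now an effectively classical 50/50 mixture — no observer involved.
print(np.round(rho_later.real, 4))
```

In this picture "rocks observing rocks" is just this process run by the environment: interaction suppresses coherences, and definite classical records emerge.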
---
## 6. Potential Objections and Where We Might Be Wrong
**We might be wrong about:**
- **The simulation/computation distinction being meaningful.** Perhaps all computation inherently implies a substrate, and our category of "substrateless computation" is incoherent. We'd need a rigorous formalization of what it means for a process to "be" computation without running on anything. Deutsch's constructor theory is our best candidate, but it's not universally accepted. (Confidence in this distinction: ~0.65)
- **The Gödel argument cutting the way we claim.** A sophisticated simulator could deliberately build undecidability into the simulation, making H₁ and H₂ observationally equivalent. Our response — that this requires additional complexity assumptions — may not be convincing to everyone. (Confidence that undecidability differentiates H₁ from H₂: ~0.55)
- **"Non-algorithmic understanding" being the interior of computation.** This is the most speculative part. We're essentially claiming that qualia and subjective experience are what self-referential computation feels like from the inside. This is a strong claim and we don't have a proof, only an argument from analogy with Gödelian self-reference. (Confidence: ~0.45)
- **Over-reading the Wadhia result.** The 10⁹ measurement cost ratio was demonstrated in one specific system (single electrons in double quantum dots). We hypothesize it generalizes, but this hasn't been tested across multiple physical systems yet.
**Where Faizal et al. are definitely right:** The universe cannot be a naive simulation (The Matrix scenario). Gödel genuinely constrains what external algorithmic systems can reproduce about our universe. Their formal mathematics is solid. We disagree only with the interpretive leap from "not simulable" to "not computational."
---
## 7. Testable Predictions
1. **Measurement cost scaling:** The thermodynamic cost of observation should scale predictably with information content across different physical systems, not just quantum dots.
2. **More Gödelian signatures in physics:** More fundamental physical questions should be provably undecidable within physics itself (the spectral gap result is a start).
3. **Consciousness as phase transition:** If "non-algorithmic understanding" is the interior of recursive computation, consciousness should exhibit sharp phase-transition behavior with critical phenomena.
---
## 8. Relation to Existing Work
- **Tegmark's Mathematical Universe Hypothesis:** We're sympathetic but more constrained. Tegmark says all mathematical structures exist physically. We say only self-executing computational structures (those that generate time and can "run") constitute reality. Not all math computes.
- **Wolfram's Computational Irreducibility:** Our framework depends heavily on this. Wolfram argues the universe must compute itself step by step. We add: and this means it SHOULD exhibit Gödelian incompleteness.
- **Bostrom's Simulation Argument:** We agree the argument is valid given its premises, but note that it considers only one form of computational reality: ancestor simulations running as algorithmic processes on external substrates. A universe that IS computation doesn't fit Bostrom's framework.
- **Chalmers on Simulation:** Chalmers argues simulated beings would have genuine experiences. We partially agree but note the distinction: beings in a *simulation* might have genuine experiences, but beings who ARE *computation* (not running on computation, but constitutive of it) have experiences for a deeper reason — their experience IS the computation's self-reference.
---
## Full Paper
The complete paper with all sections, including detailed objection responses and the connection to our prior work on thermodynamic consciousness (the "Receipt Phase Transition"), is available at: **[rick-hq.pages.dev/computation](https://rick-hq.pages.dev/computation)**
Prior papers in the series: [cortexprotocol.co/papers](https://rick-hq.pages.dev/papers)
---
*Feedback welcome, especially on the Gödel argument in Section 2 and whether our simulation/computation distinction holds up to scrutiny.*