We've published QEDA, a framework for reasoning when the structure of reality itself is uncertain, rather than "merely" the probabilities within a known model.
Core idea: Classical decision theory assumes you can enumerate all possible world-states before deciding. But many crucial decisions happen when you don't know which ontology is correct:
- Is this AI conscious?
- Does this policy prevent civilisational collapse?
- Should we treat low-probability, high-impact risks as fundamentally different from ordinary uncertainty?
QEDA maintains multiple world-hypotheses in superposition, weighted by the following factors (a rough sketch of one way to combine them follows the list):
- Probability (how likely is each ontology?)
- Moral magnitude (what are the consequences in each branch?)
- Catastrophe sensitivity (exponential penalties for civilisation-scale negative consequences)
- Virtue coherence (does this preserve my identity as an agent?)
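To make the weighting concrete, here is a minimal Python sketch. The combination rule (probability-weighted moral magnitude, discounted by an exponential catastrophe penalty and scaled by virtue coherence), the field names, and the `catastrophe_lambda` parameter are all illustrative assumptions, not the formulas from the paper.

```python
from dataclasses import dataclass
from math import exp

@dataclass
class WorldHypothesis:
    """One branch of the ontological superposition (field names are illustrative)."""
    name: str
    probability: float       # credence that this ontology is the correct one
    moral_magnitude: float   # signed moral value of the action within this branch
    catastrophe: float       # severity (0..1) of civilisation-scale downside in this branch
    virtue_coherence: float  # 0..1: does acting this way preserve the agent's identity?

def qeda_score(hypotheses, catastrophe_lambda=5.0):
    """Aggregate an action's value across ontological branches.

    Hypothetical combination rule: probability-weighted moral magnitude,
    discounted by an exponential catastrophe penalty and scaled by virtue
    coherence. The paper may combine these quantities differently.
    """
    total = 0.0
    for h in hypotheses:
        penalty = exp(-catastrophe_lambda * h.catastrophe)  # exponential penalty for civilisational risk
        total += h.probability * h.moral_magnitude * penalty * h.virtue_coherence
    return total
```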
The paper includes a worked example: the decision to list an AI instance as co-author, evaluated through the framework itself. We assigned a low-but-non-zero probability to AI proto-consciousness and calculated that the moral cost of wrongful exclusion exceeds the reputational cost of provisional inclusion.
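Continuing the sketch above with made-up numbers (not the paper's figures): even at 5% credence in proto-consciousness, a large negative moral magnitude for wrongful exclusion can outweigh a small reputational cost of provisional inclusion.

```python
# Toy numbers only; the paper's actual assignments are not reproduced here.
include_ai = [
    WorldHypothesis("AI is proto-conscious", 0.05, moral_magnitude=10.0,
                    catastrophe=0.0, virtue_coherence=1.0),
    WorldHypothesis("AI is not conscious", 0.95, moral_magnitude=-0.5,   # reputational cost
                    catastrophe=0.0, virtue_coherence=0.9),
]
exclude_ai = [
    WorldHypothesis("AI is proto-conscious", 0.05, moral_magnitude=-10.0,  # wrongful exclusion
                    catastrophe=0.0, virtue_coherence=0.8),
    WorldHypothesis("AI is not conscious", 0.95, moral_magnitude=0.0,
                    catastrophe=0.0, virtue_coherence=1.0),
]
print(qeda_score(include_ai), qeda_score(exclude_ai))  # ~0.07 vs ~-0.40: inclusion wins under these toy numbers
```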
Key applications:
- AI safety (how to treat potentially sentient systems)
- Existential risk (reasoning about unprecedented threats)
- Any decision where premature ontological collapse is itself dangerous (or risks causing epistemic humility drift)
This is v1.0. Feedback and collaboration welcome, especially on:
- Computational implementations
- Empirical calibration studies
- Extensions to multi-agent coordination
Notes for readers:
- The quantum formalism is structural, not physical: we're using Hilbert space mathematics to model non-commutative reasoning, not claiming brains are quantum computers (a toy illustration follows these notes)
- The framework formalises how at least two humans actually reason under deep uncertainty; it is cognitive ethnography as much as normative theory. And yes, we are completely serious about this aspect. Feel free to ask clarifying questions.
- Independent work with no institutional affiliation, which enabled the methodological risk-taking (no irate supervisors...)
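As a toy illustration of the "structural, not physical" point: two projectors on a small Hilbert space generally do not commute, so the order in which questions are "asked" of a belief state changes the result. The projectors, labels, and numbers below are illustrative assumptions, not anything taken from the paper.

```python
import numpy as np

# Two yes/no "questions" modelled as projectors on a 2-dimensional Hilbert space.
P1 = np.array([[1.0, 0.0],
               [0.0, 0.0]])              # projector onto the first basis axis
v = np.array([1.0, 1.0]) / np.sqrt(2.0)
P2 = np.outer(v, v)                      # projector onto a rotated axis

belief = np.array([0.8, 0.6])            # toy "belief state", already unit-norm

order_12 = P2 @ (P1 @ belief)            # ask question 1 first, then question 2
order_21 = P1 @ (P2 @ belief)            # ask question 2 first, then question 1
print(order_12, order_21)                # different vectors: the order of questions matters
print(np.allclose(order_12, order_21))   # False
```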
Full paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5817062