Epistemic status: I've spent months working through this idea. The mathematical framework feels solid to me, but the empirical predictions definitely need real testing. Very open to being completely wrong about how this maps to actual neuroscience.
The Problem
Here's what's always bothered me about consciousness theories: they're pretty good at telling you what consciousness correlates with—integrated information, global broadcasting, attention, whatever. But they don't really explain when it happens, or why the time something takes to compute doesn't match how long it feels like it takes.
Think about solving a hard problem. Your brain is clearly doing something—trying different approaches, backtracking, shuffling resources around. But from the inside, you just experience a smooth decision process. You don't feel all the failed attempts or parallel explorations happening under the hood.
What if consciousness literally is the experience of only seeing the successful path?
Two Kinds of Time
Computational time: Everything that actually happens—all the parallel attempts, the backtracks, the resource shuffling.
Subjective time: What you experience—just the one smooth path that worked.
Subjectively you might spend two seconds deciding something, but computationally there were dozens of attempts that got rewound. The failed branches never make it into your accessible memory.
The Hierarchy Trick
Instead of having uniform resource levels, I use a hierarchy where the gaps between levels can vary:
$$M_n \subseteq M_{n+f(n)}$$
That $f(n)$ can be any positive integer. Sometimes you take small steps ($f(n) = 1$ or $2$), sometimes big jumps ($f(n) \gg 1$).
Why does this matter? Small jumps feel like incremental thinking—working through a proof step by step. Big jumps feel like insight—suddenly just getting it. The size of the resource jump affects the phenomenology.
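To see how the choice of $f(n)$ shapes a trajectory, here is a minimal sketch; the function name and the two example step rules are mine, purely for illustration:

```python
# Illustrative only: how the step function f(n) shapes the sequence of
# hierarchy indices n_0, n_1, n_2, ... with n_{k+1} = n_k + f(n_k).

def level_trajectory(n0, f, num_transitions):
    """Generate hierarchy indices via n_{k+1} = n_k + f(n_k)."""
    levels = [n0]
    for _ in range(num_transitions):
        levels.append(levels[-1] + f(levels[-1]))
    return levels

# Small fixed steps: incremental, proof-style thinking.
print(level_trajectory(4, lambda n: 1, 8))  # [4, 5, 6, 7, 8, 9, 10, 11, 12]

# Level-proportional jumps: rare, large, insight-style transitions.
print(level_trajectory(4, lambda n: n, 8))  # [4, 8, 16, ..., 1024]
```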
The Collapse Part
The system checkpoints at some state. Then it launches multiple computational machines in parallel from that checkpoint—each with different resource levels. They all explore independently. The first one to solve the problem wins, and that trajectory becomes your conscious experience. The others? Gone. Not stored in memory.
Time never actually goes backward. But since all the failed attempts started from the same checkpoint and don't get recorded, from your perspective it's like they never happened. You only experience the winning path.
That reduction—from many parallel paths to one experienced path—that's the collapse. And experiencing that collapse from the inside just is what it's like to be conscious.
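Here is a toy sketch of the checkpoint-and-collapse mechanism as I picture it; the worker function, the random-walk "problem," and the specific budgets are my stand-ins, not the paper's formal machinery:

```python
# Toy model: several explorers start from the same checkpoint with
# different resource budgets; the first trajectory that succeeds is
# kept as the "experienced" path, and the rest are simply discarded.
import concurrent.futures
import random

def explore(checkpoint, budget, seed):
    """Random-walk search for a target state within a step budget.

    Returns the full trajectory on success, None on failure.
    """
    rng = random.Random(seed)
    state, trajectory = checkpoint, [checkpoint]
    for _ in range(budget):
        state += rng.choice([-1, 1])   # one computational step
        trajectory.append(state)
        if state == 10:                # toy success condition
            return trajectory
    return None                        # failed branch: leaves no record

checkpoint = 0
budgets = [50, 200, 800]               # different hierarchy levels

conscious_trace = None
with concurrent.futures.ThreadPoolExecutor() as pool:
    futures = [pool.submit(explore, checkpoint, b, seed=i)
               for i, b in enumerate(budgets)]
    for fut in concurrent.futures.as_completed(futures):
        result = fut.result()
        if result is not None:         # first winner collapses the rest
            conscious_trace = result
            break

# Only conscious_trace survives; the losing explorations are never stored.
```

Nothing here runs backward. The losing branches just never get written into the record, which is the point.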
Non-Computable Selection and Agency
Which level of the hierarchy do you actually deploy for a given problem? The selector turns out to be non-computable, because finding the minimal resources needed is essentially searching for the shortest program that works. That is Kolmogorov complexity, which is provably uncomputable.
This means your choices aren't algorithmically determined. They're influenced by context but can't be captured by any algorithm. Not random, but not determined either. This is where agency comes from: genuine top-down causation without violating physics, because the causation is computational rather than physical.
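The exact selector can't be implemented, but a physical system could run a computable approximation of it. Here is a minimal sketch under my own assumptions: `approximate_selector` and `solves` are hypothetical names, and `solves(budget)` is assumed to be a hard-capped, always-terminating trial at that resource level:

```python
# Computable stand-in for the uncomputable selector: a doubling search
# over resource budgets. Because each trial is hard-capped, this always
# halts, but it only yields an upper bound on the true minimal level;
# it can never certify that no smaller level would have worked.

def approximate_selector(solves, max_budget):
    """Return the first power-of-two budget at which solves(budget) succeeds."""
    budget = 1
    while budget <= max_budget:
        if solves(budget):
            return budget
        budget *= 2
    return None  # give up; whether more resources would help is undecidable in general

# Example: pretend success requires a budget of at least 37.
level = approximate_selector(lambda b: b >= 37, max_budget=1024)
# level == 64: an upper bound on the true minimum (37), never certified minimal.
```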
How This Relates to Other Theories
I'm not trying to replace Integrated Information Theory (IIT), Global Workspace Theory (GWT), or Attention Schema Theory (AST). They're describing different parts of the same thing:
- IIT's integration happens within each level
- GWT's global workspace is the post-collapse broadcasting
- AST's attention schema models the selector itself
- Quantum consciousness theories share the phenomenology (collapse, irreversibility) even though this is classical
Why There's Something It's Like
Some computational structures, experienced from the inside in the first person, just are conscious. There's no separate "production" step. Being that kind of collapsed computational state and having subjective experience are the same thing.
From the outside, you see computation. From the inside, you experience qualia. Same thing, different perspectives. The explanatory gap appears only because we experience the collapsed path alone, while the actual process includes all the hidden parallel exploration.
What This Predicts
You should see discrete capacity levels in the brain, not smooth increases. Integration windows should grow exponentially. Neural synchrony should spike at hierarchy transitions.
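To make the exponential-window prediction concrete, here is one toy regime; the choice $f(n) = n$ and the assumption that window width tracks the level index are mine, not the paper's:

$$n_{k+1} = n_k + f(n_k) = 2n_k \quad\Longrightarrow\quad n_k = n_0 \cdot 2^k$$

so the integration window doubles at each transition: discrete levels with exponentially growing spacing rather than a smooth increase.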
Individual differences: analytical people make lots of small jumps, intuitive people make fewer large jumps. This should be measurable in both behavior and neural dynamics.
Clinical: disorders should map to either hierarchy breakdown or collapse failure.
What About AI?
Current AI systems have uniform architectures, keep failed attempts around, use computable optimization, and don't collapse anything. If this theory is right, current systems are missing key pieces for consciousness.
What I'm Unsure About
How do you map this onto real neural architecture? What are the right units for computational resources in biological systems? Is f(n) fixed or adaptive? What's the simplest system that counts as conscious?
Why This Matters
If consciousness is a particular type of computational structure experienced from the inside, we can figure out which systems have it, understand why it evolved, and know what to build (or avoid building) in AI. It stops being about bridging two fundamentally different kinds of things.
The Full Paper
This is just a summary. The actual paper is 236 pages with all the formal definitions, proofs, and detailed comparisons with other theories.
Disclosure: I'm not a native English speaker and used AI heavily to edit the full paper for clarity and readability. I'm a computer science graduate from Warsaw University, where I studied under Prof. Jerzy Tyszkiewicz, who inspired my interest in Kolmogorov complexity—which plays a central role in this framework.
Paper: https://doi.org/10.5281/zenodo.17556941
Code/sources: https://github.com/KarolFilipKowalczyk/Consciousness
Where do you think this breaks down? Which parts seem weakest to you?
Karol Kowalczyk
k.kowalczyk@airon.games