Consciousness · Information Theory · Philosophy

🧠 A Formal Model of Consciousness as Belief Alignment: Conscious, Schizo-Conscious, and Unconscious States

by Trent Hughes
29th May 2025
1 min read

This post was rejected for the following reason(s):

  • We are sorry about this, but submissions from new users that are mostly just links to papers on open repositories (or similar) have usually indicated either crackpot-esque material or AI-generated speculation. It's possible that this one is totally fine. Unfortunately, separating valuable from confused speculative science or philosophy is difficult: the ideas are quite complicated, accurately identifying whether they have flaws is very time-intensive, and we don't have time to do that for every new user presenting a speculative theory or framing (which are usually wrong).

    Separately, LessWrong users are also quite unlikely to follow such links to read the content without other indications that it would be worth their time (like being familiar with the author), so this format of submission is pretty strongly discouraged without at least a brief summary or set of excerpts that would motivate a reader to read the full thing.


I recently uploaded a paper to PhilArchive that proposes a formal, information-theoretic model of consciousness—not as subjective experience, but as the alignment between beliefs and objective descriptions.

In this framework:

- **Consciousness** is the proportion of an object's inherent description that an observer believes correctly.
- **Schizo-Consciousness** refers to misbeliefs: statements the observer believes but which contradict the object's true description.
- **Unconsciousness** refers to unknowns: parts of the object for which the observer holds no belief.


Formally:

$$\text{Consciousness} = \frac{\text{Complexity of true beliefs } (T)}{\text{Complexity of full description } (D)}$$

Descriptions are represented using O(x)-Q(y) statements (objects and their qualities), and observers are modeled as possessing internal belief-updating codes influenced by stimuli.
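To make the ratio concrete, here is a minimal sketch in Python. It assumes bit-length complexity and a toy set of O(x)-Q(y) statements; the statements and the `complexity` helper are illustrative inventions, not taken from the paper.

```python
def complexity(statements):
    """Bit-length complexity: total bits of the UTF-8 encoding of each statement.

    A stand-in for the paper's complexity measure (which may instead use
    code length or Shannon/Kolmogorov entropy).
    """
    return sum(len(s.encode("utf-8")) * 8 for s in statements)

# The object's full inherent description D, as O(x)-Q(y) statements.
description = {"O(apple)-Q(red)", "O(apple)-Q(round)", "O(apple)-Q(sweet)"}

# The observer's beliefs about the same object.
beliefs = {"O(apple)-Q(red)", "O(apple)-Q(green)"}

true_beliefs = beliefs & description  # T: beliefs matching the description
misbeliefs = beliefs - description    # schizo-conscious part (contradicts D)
unknowns = description - beliefs      # unconscious part (no belief held)

consciousness = complexity(true_beliefs) / complexity(description)
```

On this toy data the observer is conscious of roughly a third of the description, schizo-conscious of one statement (`O(apple)-Q(green)`), and unconscious of the rest.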

Key features:

- The model allows a vector-based representation of belief alignment: two observers might have the same consciousness score but over different parts of the object.
- Complexity can be measured via bit-length, code length, or entropy (Shannon or Kolmogorov).
- The model supports comparative consciousness and simulates evolving belief states.
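The vector-based point can be sketched as follows, under the same toy assumptions as before (equal per-statement complexity, invented example statements): each component marks one statement of the description as believed-true or not.

```python
# The object's full description as an ordered list of O(x)-Q(y) statements.
description = [
    "O(apple)-Q(red)",
    "O(apple)-Q(round)",
    "O(apple)-Q(sweet)",
    "O(apple)-Q(crisp)",
]

def alignment_vector(beliefs):
    """1 where the observer correctly believes a statement of D, else 0."""
    return [1 if s in beliefs else 0 for s in description]

observer_a = alignment_vector({"O(apple)-Q(red)", "O(apple)-Q(round)"})
observer_b = alignment_vector({"O(apple)-Q(sweet)", "O(apple)-Q(crisp)"})

# With equal per-statement complexity, both observers get the same score...
score_a = sum(observer_a) / len(description)
score_b = sum(observer_b) / len(description)

# ...but their vectors ([1,1,0,0] vs. [0,0,1,1]) show they cover
# disjoint parts of the object.
```

The scalar score collapses exactly the information the vectors preserve, which is what makes the comparative-consciousness feature non-trivial.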


The full paper is here (PDF, ~8 pages):

https://philpapers.org/rec/HUGAIT-6


https://drive.google.com/file/d/1IMexDlOqZuwNDtE4SAbB8AbdCsnx1qjg/view?usp=drivesdk

I’d love critical feedback. Is this a useful lens for formalizing epistemic accuracy? Can it be applied to AI alignment or belief modeling? Are there existing formalisms that do this better which I’ve missed?

—Anonymous