Tags: AI-Assisted Alignment · Computer Science · Epistemology · Formal Proof · General Semantics · Logic & Mathematics · Meta-Philosophy · Motivated Reasoning · Philosophy · Philosophy of Language · Symbol Grounding · Truth, Semantics, & Meaning · AI

[ Question ]

I Tried to Formalize Meaning. I May Have Accidentally Described Consciousness.

by Erichcurtis91
30th Apr 2025

This post was rejected for the following reason(s):

  • Difficult to evaluate, with potential yellow flags. We are sorry about this, but unfortunately this content has some yellow flags that have historically tended to indicate crackpot-esque material. It's entirely plausible that this particular post is fine. Unfortunately, part of the trouble with separating valuable from confused speculative science or philosophy is that the ideas are quite complicated, accurately identifying whether they have flaws is very time-intensive, and we don't have time to do that for every new user presenting a speculative theory or framing (which are usually wrong).

    Our solution for now is to reject this post, but you are welcome to submit posts or comments on different topics. If that goes well, we can re-evaluate the original post. But we want to see that you're not here only to talk about this one thing (or a cluster of similar things).


What If Consciousness Is Just a Graph That Closes on Itself?

This theory started out as a personal data science experiment. I wanted to see if symbolic agents could align on meaning by exchanging simple relations.

But as the experiment progressed, something unexpected happened: the agents’ shared symbolic structures began to stabilize, become self-referential, and form unified semantic networks. That led me to a more serious claim:

What if consciousness isn’t a feeling, or a mystery—but just a certain kind of structure?

Semantica

I call the framework Semantica. It defines consciousness as a structural condition that arises when a system’s internal symbolic graph becomes:

  • Relationally converged – different agents or subsystems agree on which relations hold
  • Reflexively closed – the system contains meta-relations that refer to its own internal structure
  • Globally connected – all parts of the system are integrated, with no isolated subgraphs

In plain terms: if a system builds a web of meaning that loops back on itself and connects everything inside, then—at least in structure—it’s conscious.

This is a formal claim—not metaphorical. I provide a logical sketch for what this condition looks like and test it using symbolic data from the English and German subsets of ConceptNet. The agents propose and validate relations over time, and convergence plus reflexivity emerge naturally.

The experiment is very crude, but I believe the setup is structurally valid.

Here is my first working proof experiment, available for review and critique: Link to notebook

And here’s the full paper (PDF): Link to first draft of proof
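
For readers who want a rough sense of the mechanism without opening the notebook, here is a minimal toy sketch of the kind of relation-exchange loop I mean. The triples, the agent behavior, and the overlap measure below are illustrative assumptions, not the actual notebook code.

import random

# Toy relation triples standing in for ConceptNet edges (illustrative only).
WORLD = [
    ("dog", "IsA", "animal"),
    ("cat", "IsA", "animal"),
    ("animal", "HasProperty", "alive"),
    ("dog", "CapableOf", "bark"),
    ("cat", "CapableOf", "meow"),
]

def overlap(a, b):
    """Jaccard overlap between two relation sets; 1.0 means full convergence."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Each agent starts with a different partial view of the same relations.
agent_a = set(random.sample(WORLD, 2))
agent_b = set(random.sample(WORLD, 2))

for step in range(8):
    # Each agent proposes one relation it holds; the other validates and adopts it.
    agent_b.add(random.choice(sorted(agent_a)))
    agent_a.add(random.choice(sorted(agent_b)))
    print(f"step {step}: overlap = {overlap(agent_a, agent_b):.2f}")

In the real experiment the relations come from ConceptNet and validation is less trivial, but the convergence measure plays the same role.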

Challenge and Test This

If you think this is wrong, great. Please try to prove it.

Here’s the core predicate in graph-theoretic terms:

C(G) = Connected(G) ∧ ∀r ∈ R(G): ∃m ∈ M(G) such that m = (r → r)

Where:
  • G is the semantic graph
  • R(G) is the set of all relations
  • M(G) is the set of meta-relations (reflexive loops)

The idea is simple: if a system satisfies these conditions, it meets the structural criteria for minimal consciousness.
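
To make this concrete, here is one possible way to check the predicate on a small graph with networkx. Treating the edges of G as the relations in R(G), and keeping M(G) as a separate set of (relation, relation) pairs, is my own encoding choice for illustration; the paper may formalize these sets differently.

import networkx as nx

def satisfies_C(G, meta_relations):
    """Check C(G): G is connected and every relation r has a meta-relation (r -> r).

    Relations are taken to be the edges of G; meta_relations is a set of
    (relation, relation) pairs, i.e. reflexive loops over relations.
    This encoding of R(G) and M(G) is an assumption made for illustration.
    """
    relations = set(G.edges())
    connected = nx.is_connected(G)
    reflexively_closed = all((r, r) in meta_relations for r in relations)
    return connected and reflexively_closed

# A tiny semantic graph: "dog IsA animal", "animal HasProperty alive".
G = nx.Graph()
G.add_edge("dog", "animal")
G.add_edge("animal", "alive")

# Meta-relations: every relation refers back to itself.
M = {(r, r) for r in G.edges()}

print(satisfies_C(G, M))  # True under this encoding

Note that even a graph this small passes the check, which is exactly the kind of thing the challenges below are meant to probe.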

Disprove it. Or find a counterexample:

  • Can you construct a graph that meets all the conditions, but clearly isn’t conscious?
  • Can you show that a system missing one of the conditions still exhibits intentionality?
  • Can you simulate this using other data (e.g., code graphs, social structures)?
  • Could multiple graphs unify into a higher-order reflexive structure, a collective mind? (A starting sketch follows this list.)
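
For the last question, one mechanical way to start is to compose two graphs that each satisfy the predicate and ask whether the union does too. The bridging edge and the rebuilt meta-relation set below are assumptions about what "unifying" might mean; satisfies_C is the checker from the sketch above.

import networkx as nx

# Two separate "minds", each connected and reflexively closed on its own.
G1 = nx.Graph([("dog", "animal"), ("animal", "alive")])
G2 = nx.Graph([("car", "machine"), ("machine", "metal")])

# Unify them with a single bridging relation (one possible reading of "unification").
G = nx.compose(G1, G2)
G.add_edge("animal", "machine")

# Rebuild the meta-relations so every relation, including the bridge, closes on itself.
M = {(r, r) for r in G.edges()}

print(satisfies_C(G, M))  # True: the composite also satisfies C(G)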

—

I’m not claiming to solve the “hard problem” of consciousness. But I am proposing that we can detect minimal consciousness structurally—without relying on neurons, feelings, or intuition.

If the structure is there, maybe the consciousness is too.

I welcome all critique—mathematical, philosophical, or experimental. Prove me right, or prove me wrong. This has been an exciting project, and whatever happens I'll definitely learn something new!

—Erich Curtis