After a minute or two (couple of years)... I wondered: "if truth were true, what else would have to be true for truth to exist?" That led into spectrums of certainty/uncertainty.
Regardless, if we assume/presume that whatever is is, it relates to something else. What that "something else" is, I don't know, but it seems to be required no matter the expression. That's basically what pulled me into building Expressionalism.
"Expression" is my word for any kind of output that has relation. Yeah, it's circular as hell — but without using something like English, how is English even defined? So I just leaned into it. Expressions can be pretty much anything:
Words, sentences, letters, symbols, etc.
Thoughts, concepts, memories, ideas, etc.
Books, poems, visual art, dance, music, etc.
You guessed it — even the signals your eyes send to your brain before you call it an "apple" are expressions. And the synthesis of expressions keeps creating new ones. So whether it's God, a dream, a simulation, or just this post, they all seem to need expressions and relations to be "so."
From there I ended up with the actual framework: eleven presumptions that feel like they're required for any expression to even be possible. I won't list all eleven here, but the basic idea is: something exists, something else exists too, the relation between them shows up as a ratio of certainty to uncertainty, and that ratio can actually be measured depending on what "the something else" is.
Beyond the philosophy part, I also built a working Python toolkit that tries to make this practical. It takes any input and spits out tables of certainties and uncertainties, a harmony index, and plain-English summaries. One of the parts I'm most curious about is what I call Phase 5 — it goes back and shows you the hidden presumptions your "solid" beliefs are sitting on and rates how shaky they actually are.
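To make that concrete, here's a minimal sketch of the kind of thing the toolkit does. To be clear, this is my illustration, not the actual code: the names (`Presumption`, `harmony_index`, `summarize`) and the scoring are all hypothetical stand-ins for whatever the real toolkit computes.

```python
# Hypothetical sketch of the toolkit's core idea: list the presumptions
# a claim rests on, score each one's certainty, and roll them up into a
# simple "harmony index". All names and numbers here are illustrative.

from dataclasses import dataclass


@dataclass
class Presumption:
    text: str
    certainty: float  # 0.0 (pure guess) .. 1.0 (taken as given)


def harmony_index(presumptions):
    """Mean certainty across presumptions: 1.0 = fully 'solid',
    0.0 = entirely uncertain. A stand-in for the real metric."""
    if not presumptions:
        return 0.0
    return sum(p.certainty for p in presumptions) / len(presumptions)


def summarize(claim, presumptions):
    """Plain-English summary: overall harmony plus the shakiest
    hidden presumptions (a rough analogue of 'Phase 5')."""
    h = harmony_index(presumptions)
    shaky = [p.text for p in presumptions if p.certainty < 0.5]
    lines = [f'Claim: "{claim}"', f"Harmony index: {h:.2f}"]
    if shaky:
        lines.append("Shakiest presumptions: " + "; ".join(shaky))
    return "\n".join(lines)


claim = "This apple is red"
presumptions = [
    Presumption("my eyes report color accurately", 0.8),
    Presumption("'red' means the same thing to everyone", 0.4),
    Presumption("the apple exists outside my perception", 0.6),
]
print(summarize(claim, presumptions))
```

Running this prints the claim, a harmony index of 0.60, and flags the "'red' means the same thing to everyone" presumption as the shaky one. Again, the real toolkit's metrics may look nothing like this; the point is just the shape of the output.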
This whole thing is meant to stay fallible, neutral, open, provisional, and (hopefully) honest. No pretending we found the final answer.
Of course it has obvious problems. It's very circular by design, and there's no clean "outside" reference point you can stand on. I'm also not totally sure if the toolkit's metrics are tuned right yet or if I'm just dressing up old ideas with extra steps. I'd genuinely like to hear where you think the biggest flaw is.
If any of this sounds interesting, check out the site and the GitHub:
https://expressionalism.com/
https://github.com/Expressionalism/Expressionalism
Happy to run the toolkit on whatever idea or claim you want to throw at it in the comments.
Take care, Witten