In Market Logic (part 1, part 2) I investigated what logic and theory of uncertainty naturally emerge from a Garrabrant-induction-like setup when it isn't rigged towards classical logic and classical probability theory. However, I only dealt with opaque "market goods" which are not composed of parts. Of course, the derivatives I constructed have structure, but derivatives are the analogue of logically definable things: they only take the meaning of the underlying market goods and "remix" that meaning. As Sam mentioned in Condensation, postulating a latent variable may involve expanding one's sense of what is: expanding the set of possible worlds, not only defining a new random variable on the same outcome space.
Simply put, I want a theory of how vague, ill-defined, messy concepts relate to clean, logical, well-defined, crisp concepts. Logic is already well-defined, so it doesn't suit the purpose.[1]
So, let's suppose that market goods are identified with sequences of symbols, which I'll call strings. We know the alphabet, but we don't a priori have words and grammar. We only know these market goods by their names; we don't a priori know what they refer to.
This is going to be incredibly sketchy, by the way. It's a speculative idea I want to spend more time working out properly.
So each sequence of symbols is a market good. We want to figure out how to parse the strings into something meaningful. Recall my earlier trick of identifying market trades with inference. How can we analyze patterns in the market trades, to help us understand strings as structured claims?
Well, reasoning on structured claims often involves substitution rules. We can view trades moving money from one string to another as edits. Patterns in these edits across many sentence-pairs indicate substitution rules which the market strongly endorses. We can look for high-wealth traders who enforce given substitution rules, or we can look for influential traders who do the same (IE traders who might be low-wealth, but who enforce their will on the market effectively and don't get traded against). We can also look for substitution rules which the market endorses in the limit (the constraint gets violated less and less over time). Perhaps there are other ways to look at this as well.
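As a concrete (and very simplified) sketch of the first idea: suppose we had a log of trades as (before, after, wealth) triples (hypothetical bookkeeping, not part of the market formalism), and we mined it for contiguous edits that high-wealth traders keep making. Something like:

```python
from collections import defaultdict
from difflib import SequenceMatcher

def edit_pattern(before: str, after: str):
    """Extract the replaced span (x, y) if the trade looks like a single
    contiguous substitution; return None otherwise."""
    ops = [op for op in SequenceMatcher(None, before, after).get_opcodes()
           if op[0] != "equal"]
    if len(ops) == 1 and ops[0][0] == "replace":
        _, i1, i2, j1, j2 = ops[0]
        return before[i1:i2], after[j1:j2]
    return None

def endorsed_rules(trades, min_weight=10.0):
    """Aggregate wealth-weighted support for each candidate rule x -> y.
    `trades` is a hypothetical log of (before, after, trader_wealth):
    money moving from the good `before` to the good `after`."""
    support = defaultdict(float)
    for before, after, wealth in trades:
        pattern = edit_pattern(before, after)
        if pattern is not None:
            support[pattern] += wealth
    return {rule: w for rule, w in support.items() if w >= min_weight}

# Toy usage: wealthy traders repeatedly move money across the same edit.
trades = [
    ("the cat sat", "the feline sat", 8.0),
    ("a cat slept", "a feline slept", 6.0),
    ("the dog ran", "the dog sprinted", 1.0),
]
print(endorsed_rules(trades))  # {('cat', 'feline'): 14.0}
```

The limit-based version would instead track how often each rule's constraint is violated over time, endorsing rules whose violation rate goes to zero.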
In any case, somehow we're examining the substitution rules endorsed by the market.
First, there are equational substitutions, which are bidirectional: synonym relationships.
Then there are one-directional substitutions. There's an important nuance here: in logic, there are negative contexts and positive contexts. A positive context is a place in a larger expression where strengthening the term strengthens the whole expression. "Stronger" in logic means more specific: it claims more, rules out more worlds. So, for example, "If I left the yard, I could find my way back to the house" is a stronger claim than "If I left the yard, I could find my way back to the yard", since one could in theory find one's way back to the yard without being able to find the house, but not vice versa. In "If A then B" statements, B is a positive context and A is a negative context. "If I left the yard, I could find my way back to the house" is a weaker claim than "If I left the house, I could find my way back to the house", because it has the stronger premise.
Negation switches us between positive and negative contexts. "This is not an apple" is a weaker claim than "This is not a fruit". This example also illustrates that substitution can make sense for noun phrases, not just sub-sentences; noun phrases can be weaker or stronger even though they aren't claims. Bidirectional substitution subsumes different types of equality, at least = (noun equivalence) and ↔ (claim equivalence). One-directional substitution subsumes different types as well, at least ⊆ (set inclusion) and → (logical implication). So, similarly, our concept of negation here combines set-complement with claim negation.
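To make this positive/negative bookkeeping concrete, here's a minimal sketch in Python (the expression encoding is mine, purely illustrative): polarity starts out positive, flips under negation, and flips in the antecedent of an implication:

```python
from dataclasses import dataclass

@dataclass
class Atom:
    name: str

@dataclass
class Not:
    body: object

@dataclass
class Implies:
    antecedent: object
    consequent: object

def polarities(expr, polarity=+1, path=()):
    """Yield (path, atom-name, polarity): +1 marks a positive context, where
    strengthening the atom strengthens the whole claim; -1 a negative one."""
    if isinstance(expr, Atom):
        yield path, expr.name, polarity
    elif isinstance(expr, Not):
        # Negation switches positive and negative contexts.
        yield from polarities(expr.body, -polarity, path + ("not",))
    elif isinstance(expr, Implies):
        # In "If A then B", A is a negative context and B a positive one.
        yield from polarities(expr.antecedent, -polarity, path + ("if",))
        yield from polarities(expr.consequent, polarity, path + ("then",))

claim = Implies(Atom("I left the yard"), Atom("I can find the house"))
for path, name, sign in polarities(claim):
    print(path, name, "+" if sign > 0 else "-")
# ('if',) I left the yard -
# ('then',) I can find the house +
```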
Sometimes, substitution rules are highly context-free. For example, 1+1 = 2, so anywhere "1+1" occurs in a mathematical equation or formula, we can substitute "2" while preserving the truth/meaning of the claim/expression.
Other times, substitutions are highly context-dependent. For example, a dollhouse chair is a type of chair, but it isn't good for sitting in.
A transparent context is one such as mathematical equations/formulas, where substitution rules apply. Such a context is also sometimes called referentially transparent. An opaque context is one where things are context-sensitive, such as natural language; you can't just apply substitution rules. This concept of transparent context is shared between philosophy of language, philosophy of mind, linguistics, logic, and the study of programming languages. One advantage claimed for functional programming languages is their referential transparency: an expression evaluates exactly the same way, no matter what context it is evaluated in. Languages with side-effects don't have this property.
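Here is a tiny illustration of the programming-language version of the point (in Python, which is not referentially transparent): the pure expression means the same thing in every context, while the side-effecting one does not:

```python
# A pure function: the expression area(3) can be replaced by its value 9
# anywhere, in any context, without changing what the program does.
def area(side):
    return side * side

# A side-effecting counterpart: each call mutates hidden state, so the
# expression stamped_area(3) does NOT mean the same thing every time.
calls = []
def stamped_area(side):
    calls.append(side)  # hidden side effect
    return side * side + len(calls)

print(area(3), area(3))                  # 9 9   (same in every context)
print(stamped_area(3), stamped_area(3))  # 10 11 (context-dependent)
```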
So, in our market on strings, we can examine where substitution rules apply to find transparent contexts. I think a transparent context would be characterized as something like: an assignment, to each sub-context (each position within a string), of the set of substitution rules which the market endorses at that position.
The same could characterize an opaque context, but the substitution rules for the transparent context would depend only on classifying sub-contexts into "positive" or "negative" contexts.
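Under that (tentative) characterization, one could test a candidate context roughly as follows, where `endorses` is a hypothetical oracle for whether the market endorses trading one string for another:

```python
def respects_rule(context, rule, polarity, endorses):
    """Does a one-hole context treat an endorsed substitution correctly?
    `context` is a string with a "{}" hole; `rule` is a pair (x, y) meaning
    the market endorses the one-directional substitution x => y (x stronger);
    `endorses(a, b)` is a hypothetical oracle for market-endorsed trades."""
    x, y = rule
    stronger, weaker = context.format(x), context.format(y)
    if polarity < 0:
        # Negative contexts reverse the direction of substitution.
        stronger, weaker = weaker, stronger
    return endorses(stronger, weaker)

def transparency(context, polarity, rules, endorses):
    """Fraction of globally endorsed rules the context respects: 1.0 for a
    fully transparent context, lower as it grows more opaque."""
    return sum(respects_rule(context, r, polarity, endorses)
               for r in rules) / len(rules)
```

A fully transparent context should score 1.0 using nothing but its positive/negative classification; quoted or modal contexts would presumably score lower.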
There's nothing inherently wrong with an opaque context; I'm not about to call for us all to abandon natural languages and learn Lojban. Even logic includes non-transparent contexts, such as modal operators. Even functional programming languages have quoted strings (which form an opaque context).
What I do want to claim, perhaps, is that you don't really understand something unless you can translate it into a transparent-context description.
This is similar to claims such as "you don't understand something unless you can program it" or "you don't understand something unless you can write it down mathematically", but significantly generalized.
Going back to the market on strings, I'm saying we could define some formal metric for how opaque/transparent a string or substring is, but more opaque contexts aren't inherently meaningless. If the market is confident that a string is equivalent (inter-tradeable) with some highly transparent string, then we might say "It isn't transparent, but it is interpretable".
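As a rough sketch of how that verdict could be scored (`market_equivalents` and `transparency_score` are hypothetical hooks, not defined constructions):

```python
def interpretability(string, market_equivalents, transparency_score):
    """Score a string by the most transparent string the market confidently
    treats as inter-tradeable with it.
    `market_equivalents(s)`: hypothetically, the set of strings the market
    prices as equivalent to s. `transparency_score(s)`: the metric sketched
    above, in [0, 1]."""
    candidates = set(market_equivalents(string)) | {string}
    return max(transparency_score(s) for s in candidates)
```

A string could then score low on transparency itself while scoring high on interpretability.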
Let's consider ways this can fail.
There's the lesser sin, ambiguity. This manifests as multiple partial translations into transparent contexts. (This is itself an ambiguous description; the formal details need to be hashed out.) The more ambiguous, the worse.
(Note that I'm distinguishing this from vagueness, which can be perfectly transparent. Ambiguity creates a situation where we are not sure which substitution rules to apply to a term, because it has several possible meanings. On the other hand, the theory allows concepts to be fundamentally vague, with no ambiguity. I'm not married to this distinction but it does seem to fall out of the math as I'm imagining it.)
There could be a greater sin, where there are no candidate translations into transparent contexts. This seems to me like a deeper sort of meaninglessness.
There could also be other ways that interpretations into a transparent context are better or worse. They could reveal more or less of the structure of the claim.
I could be wrong about this whole thesis. Maybe there can be understanding without any interpretation into a transparent context. For example, if you can "explain like I'm five" then this is often taken to indicate a strong understanding of an idea, even though five-year-olds are not a transparent context. Perhaps any kind of translation of an idea is some evidence for understanding, and the more translating you can do, the better you understand.
Still, it seems to me that there is something special in being able to translate to a transparent context. If somehow I knew that a concept could not be represented in a transparent way, I would take that as significant evidence that it is nonsense, at least. It is tempting to say it is definitive evidence, even.
This seems to have some connections to my idea of objectivity emerging as third-person-perspectives get constructed, creating a shared map which we can translate all our first-person-perspectives into in order to efficiently share information.
A more extreme version of the hypothesis, which one might consider: understanding is mapping all contexts into one transparent context, like a unified, coherent world-model.
You might object that logic can work fine as a meta-theory; that the syntactic operations of the informal ought to be definable precisely in principle, EG by simulating the brain. I agree with this sentiment, but I am here trying to capture the semantics of informality. The problem of semantics, in my view, is the problem of relating syntactic manipulations (the physical processes in the brain, the computations of an artificial neural network) with semantic ones (beliefs, goals, etc). Hence, I can't assume a nice interpretable syntax like logic from the beginning.
[1] Even mathematical notation can be used opaquely, though this is actually rare: if I say

a^2 - b^2 = (a+b)(a-b)

... the idea is similar to how

a^3 - b^3 = (a-b)(a^2 + ab + b^2)

... then I'm probably making some syntactic point, which doesn't get preserved under substitution by the usual mathematical equivalences. Perhaps the point can be understood in a weaker transparent context, where algebraic manipulations are not valid substitutions, but there are still some valid substitutions?