Maybe the appropriate mathematical object for representing trust is related to those used to represent uncertainty in complex systems, such as wave functions associated with probabilities. After all, you can trust someone to precisely the extent to which you can constrain your own uncertainty about whether they will do things you wouldn't want. Wave functions, while scalar-valued at any point, certainly contain lots of information in the form of their distribution throughout space, as well as being complex-valued.
That's a good take: treating trust as “some kind of structured uncertainty object over futures” is very close to what I was gesturing toward; a bare scalar clearly isn’t sufficient.
On reflection, I have to admit I was using “trust” a bit loosely in the post. What's become clear to me is that I’m really trying to model not trust in the everyday sense (intentions, warmth, etc.), but something structural: roughly, how stable someone’s behavior is under visible strain, and who tends to bear the cost when things get hard. In my head it’s closer to a relational stability/reliability profile than trust per se, but trust had been my mental shorthand.
That’s also why I’d be a bit cautious about equating this model of trust with “how much I can constrain my uncertainty about them doing things I wouldn’t want.” Predictability and trust can come apart: I can have very low uncertainty that someone will reliably screw me over, but that doesn’t make them high-trust. I think your interpretation is actually right for the content of what I was describing in the post, and the mismatch comes from my loose language (so thanks for this comment; it was the impetus to make a change I'd had kicking around for a while).
It seems like we need both a representation of a distribution over future behaviors/trajectories, and a way to mark which regions of that space are “good” for me/the system and which are “bad”.
What's most important to me is modeling without needing to pretend to know someone's internals. The visibility/strain/cost/memory breakdown is my attempt at that: who shows up where, what pressures they’re under, who actually eats the cost, and how that pattern evolves over time.
All that said, I really like the intuition of “not a scalar but a distribution-like object.” In my head, what's coming together is something like a trajectory-based stability profile built from a few real-valued measurable signals, rather than a full-blown complex wavefunction. I've got another post in the works that goes into more detail, and once that's formalized I'm certainly open to revisiting the modeling to see where these concepts intersect.
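As a rough preview of the shape I have in mind (a Python sketch only; the signal names, the 0-to-1 ranges, and the stability score are placeholder assumptions, not the formalization):

```python
# A minimal sketch of a trajectory-based stability profile.
# Everything here is a placeholder: real signals would come from the
# visibility/strain/cost/memory breakdown discussed in the post.
from dataclasses import dataclass, field
from statistics import pstdev

@dataclass
class Observation:
    strain: float       # magnitude of visible pressure, assumed in [0, 1]
    cost_borne: float   # share of the cost this person absorbed, in [0, 1]

@dataclass
class StabilityProfile:
    history: list[Observation] = field(default_factory=list)

    def record(self, obs: Observation) -> None:
        self.history.append(obs)

    def stability_under_strain(self, threshold: float = 0.5) -> float | None:
        """How consistent is cost-bearing when strain is high?

        Low spread in cost_borne across high-strain observations suggests
        behavior that is stable under pressure. Returns None when there is
        too little data to say anything.
        """
        under_strain = [o.cost_borne for o in self.history if o.strain >= threshold]
        if len(under_strain) < 2:
            return None
        return 1.0 - pstdev(under_strain)  # closer to 1.0 = more consistent
```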
I've been thinking a lot about trust recently, and about how radically different our lived experience of trust is from how we represent trust digitally.
Digital trust tends to be binary or scalar out of necessity: you either have access or you don't. Sometimes you get gradients, where we grant elevated permissions to certain individuals. Roles are easy to assign, but need to be reviewed and more often than not end up as static badges.
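To make that concrete, here's the usual pattern in miniature (a generic Python illustration, not any particular system's API):

```python
# Digital trust as it usually exists: hold the badge or don't.
ROLES = {"alice": {"admin"}, "bob": {"member"}}

def can_delete_posts(user: str) -> bool:
    # A binary check: no context, no gradation, no history.
    return "admin" in ROLES.get(user, set())
```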
Trust in real life works differently. There are people I'd trust with a personal secret that I wouldn't take financial advice from. I might let you work on my car but wouldn't let you babysit. I know I can count on you to come through in a pinch, but you'll never get anywhere on time.
Trust, in short, is messy.
So that gets me thinking: what are the minimum aspects of trust we would need to have a faithful digital representation of how trust actually works relationally? We're not talking from the traditional security standpoint here (e.g. cryptography), just looking at the social aspect.[1]
To start us off: if you can't see someone, you can't even know that there's someone to trust. If you can't see their actions, you can't meaningfully evaluate those actions. So this gives us our first aspect: visibility.
So what happens once someone becomes visible to us? We get an immediate sense of their posture, their action-in-motion, and we can make inferences about the current state of the environment and what forces are in play. Importantly, all of these forces are immediately accessible to our internal state because each of them contributes to our next aspect: strain.[2]
Once we have an idea of the forces at work in a situation, we can see how someone reacts to the presence of strain: do they choose to take on that strain? Do they pass it off to someone else? How do they do that? All of these questions about how strain is handled play into the third aspect: cost. Who is actually bearing the cost of the strain in play? Who is visibly expending effort, emotional labor, or cognitive load?
All of this gives us a snapshot of trust-in-the-moment without claiming to know the internal state of a person. For a robust treatment of trust we must limit ourselves to what we can observe; otherwise we devolve into making judgments based on inferred mental states, which is notoriously prone to projection and bias.
But the snapshot alone doesn't give us the long-term value of trust. Trust is by definition a risk proposition: if there were no inherent risk, there would be no need for trust. And while certain games (e.g. the standard prisoner's dilemma) are one-and-done, the actual real-life application of trust is rarely instantaneous. We need a way to keep track of what happened in the past and tie it to what is happening now, which requires our fourth aspect: memory.
So to summarize:
Visibility allows us to see who is present in the system
Strain shows us what forces are in play
Cost shows us where and how strain is distributed
Memory enables retention and comparison of system state over time
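Put together as data, the four aspects might look something like this (a minimal Python sketch; all field names and types are illustrative assumptions, not a finished design):

```python
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    """Trust-in-the-moment: visibility, strain, and cost for one observed event."""
    actor: str          # visibility: who is present and observable
    context: str        # visibility: where in the system they acted
    strain: float       # what forces are in play (treated as one variable here)
    cost_borne: float   # how much of the strain's cost this actor absorbed

@dataclass
class Memory:
    """Retention of snapshots so system state can be compared over time."""
    snapshots: list[Snapshot] = field(default_factory=list)

    def record(self, snap: Snapshot) -> None:
        self.snapshots.append(snap)

    def history_for(self, actor: str) -> list[Snapshot]:
        return [s for s in self.snapshots if s.actor == actor]
```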
Returning to our original question: what are the aspects of trust that, if removed, would prevent meaningful modeling?
Without visibility you have no sense of the system at all.
Without strain the system becomes inert.
Without cost you cannot meaningfully interpret behavior.
Without memory you can't track behavior patterns over time.
To round this off, how could we actually use these aspects in the digital space? Let's take forum posting as an example: a high-clarity, bounded, publicly accessible, opt-in environment. We'll look at both the snapshot and the memory-assisted versions of the variables:
Visibility allows us to model who has posted and where in the forum's ecosystem they've posted. Do they tend to specialize in one section, or do they post everywhere? Do they respond to comments, or are they a post-and-dasher? None of this has anything to do with the content of the posts; it simply locates the posts within the environment.
Strain is where we begin to take into account the actual contents and context of the post. What kind of language is the post using? Is it inflammatory or coercive? When the poster responds to comments or critiques, does the language change?
Cost is potentially the most illuminating of the variables once placed in context. Drawing on our earlier definition, is there any indication that the poster is expending effort, performing emotional labor, or bearing cognitive load? Does the poster accept critique gracefully, or do they deflect? Do they demonstrate epistemic humility, or do they offload the responsibility of reasoning to others?
In short: where do they post, what do they post, how do they respond when challenged, and what is the pattern of these interactions over time?
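As a very rough illustration of instrumenting this, here's a sketch (the Post shape, the toy word list, and both scoring functions are hypothetical stand-ins for whatever real extraction you'd actually use):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    section: str    # visibility: where in the forum it was posted
    body: str
    is_reply: bool  # visibility: engaging with others vs. post-and-dash

# Toy stand-in for real inflammatory-language detection.
INFLAMMATORY = ("idiot", "obviously wrong", "wake up")

def strain_signal(post: Post) -> float:
    """Crude strain proxy: fraction of flagged phrases present in the post."""
    text = post.body.lower()
    return sum(phrase in text for phrase in INFLAMMATORY) / len(INFLAMMATORY)

def cost_signal(posts: list[Post], author: str) -> float:
    """Crude cost proxy: share of an author's activity spent replying to
    others (engagement effort) rather than broadcasting top-level posts."""
    own = [p for p in posts if p.author == author]
    if not own:
        return 0.0
    return sum(p.is_reply for p in own) / len(own)
```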
Open questions:
-Is this the full set of minimum variables, or are there more? (Any serious attempt to model trust appears to ask for at least something along these lines, but these may not be the full set.)
-How can the values of the variables be normalized?
-What are the specific relationships the normalized variables have to each other?
-Can the normalized variables be combined into something more useful than a scalar value?
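On the last two questions, one minimal direction (a sketch only; the logistic squash and the choice to return a per-aspect vector rather than a single number are my assumptions):

```python
import math

def squash(x: float) -> float:
    """Map an unbounded raw signal into (0, 1) with a logistic curve."""
    return 1.0 / (1.0 + math.exp(-x))

def trust_profile(visibility: float, strain: float, cost: float,
                  memory: float) -> dict[str, float]:
    """Normalize each aspect but keep them separate, so 'predictably
    harmful' stays distinguishable from 'trustworthy'."""
    return {
        "visibility": squash(visibility),
        "strain": squash(strain),
        "cost": squash(cost),
        "memory": squash(memory),
    }
```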
[1] This isn't intended to be an exhaustive list; adding higher dimensionality will in many cases give you improved resolution. We're looking for those aspects where, if you removed them, you would be unable to meaningfully measure trust in any capacity.
[2] Notably, different aspects of the situation can have wildly different strain values. For this thought experiment we'll treat strain as a single variable, but it can be broken down into discrete components to evaluate along different axes.