Programmers are familiar with navigating interfaces and protocols. Whether it’s ASCII, a function signature, an OOP interface, HTTP, or gRPC, these kinds of interfaces are ubiquitous, exist at all levels of abstraction, and are necessary for connecting distinct modules to form a coherent system.
I've been musing about the ways that culture, laws, and authority act as dynamic social interfaces. These structures seem to emerge as solutions to social coordination problems. Different solutions have different advantages and disadvantages.
Large groups have coordination problems
Many families, communes, and kibbutzim demonstrate a successful kind of anarcho-communism - there is a high degree of trust, and individuals get along through mutual aid and without formal codes. One explanation is that the ad-hoc coordination exhibited in small groups works because of a kind of empathic simulation. Each individual has a highly accurate predictive model of social interaction within the group; given virtually any scenario and a subgroup, an individual can simulate how the subgroup will react to the situation. I speculate that the ability to accurately anticipate behavior is the essence of “trust”. (This sentiment seems core to expressions like “I don’t like him, but I trust him”.)
However, we see empirically that the social cohesion of unstructured groups tends to dissolve as the number of members exceeds about 150. Robin Dunbar’s research on primate brain size and social groups suggests that these problems are due to fundamental cognitive limits on Homo sapiens’ ability to process information. This makes sense given the model of trust presented above. As the number of members in the group increases, the set of possible interactions undergoes a combinatorial explosion. At some point, the cognitive load required to grok it becomes too great, and group trust sharply decreases. As simple experiments in game theory show, trustless coordination is expensive. More is different.
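To put a number on that combinatorial explosion: even counting only pairwise relationships (ignoring subgroups, which grow exponentially), the modeling burden scales quadratically with group size. A quick Python sketch:

```python
from math import comb

def pairwise_relationships(n: int) -> int:
    """Number of distinct pairwise relationships in a group of n people."""
    return comb(n, 2)  # n * (n - 1) / 2

# A band-sized group is small enough to model exhaustively...
print(pairwise_relationships(15))   # 105
# ...but around Dunbar's number the web of relations is two orders
# of magnitude larger - before even counting subgroup dynamics.
print(pairwise_relationships(150))  # 11175
```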
The solution to trust and coordination problems is the introduction of external structures to mediate the interaction. For example, the prisoner’s dilemma is easily solved in the real world by two key mediators:
- The mob boss, who decrees that finks will sleep with the fishes
- The mobster’s code, which suggests that “snitches get stitches”
Each of these structures creates a kind of transitive trust for the prisoners. The mob boss has a monopoly on the use of force; since his decrees have guaranteed enforcement, he can trust anyone to do what he says. Since Prisoner A trusts the outcome of the mob boss’s decree, and the mob boss trusts that Prisoner B must do as he says, Prisoner A can trust Prisoner B not to rat. Likewise with the mobster’s code. More generally, these two structures can be thought of as analogous to government fiat (centralized enforcement) and culture (distributed enforcement). Each structure takes a complex web of interactions and flattens it into a simple set of rules - hence the term “social interface”.
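To make the flattening concrete in code: the mob boss's guaranteed enforcement acts like a payoff transformation that flips each prisoner's dominant strategy. A toy Python sketch - the payoff numbers and the penalty constant are illustrative, not drawn from any real model:

```python
PENALTY_FOR_RATTING = 100  # the boss's decree: finks sleep with the fishes

def payoff(my_move: str, their_move: str, enforced: bool) -> int:
    """Classic prisoner's dilemma payoffs (higher is better for me)."""
    table = {
        ("silent", "silent"): 2,
        ("silent", "rat"): 0,
        ("rat", "silent"): 3,
        ("rat", "rat"): 1,
    }
    value = table[(my_move, their_move)]
    if enforced and my_move == "rat":
        value -= PENALTY_FOR_RATTING  # guaranteed enforcement rewrites utility
    return value

def best_move(their_move: str, enforced: bool) -> str:
    return max(["silent", "rat"], key=lambda m: payoff(m, their_move, enforced))

# Without the mediator, ratting dominates; with it, silence dominates.
print(best_move("silent", enforced=False))  # "rat"
print(best_move("silent", enforced=True))   # "silent"
```

The point of the sketch: neither prisoner needs to model the other's psychology anymore - each only needs to trust that the rule set is enforced.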
Different interfaces, different advantages
The level of formality and the mechanism of enforcement of these social interfaces create tradeoffs. A very formal social interface (a written law or code of conduct) can be clearly understood and consistently enforced. A less formal social interface (a culture) permits more nuance and adaptation to changing circumstances. Central enforcement creates common understanding and prevents vigilantism. Distributed enforcement may be better specialized to local circumstances.
It is also interesting to consider that some styles of social interfaces may be more stable at different group sizes. What kinds of interfaces scale? Which ones don’t?
Distance and divergent systems introduce friction and reduce trust. Local communities typically have high-trust relationships because of common culture and higher accountability for local externalities (the “don’t shit where you eat” principle). Trust is lower between more distant polities: culture and legal structures differ more, and individuals have less accurate predictive mental models of each other’s behavior. Federated models become necessary.
Impersonal and Unequal
It seems clear that for humans to collaborate at the massive scales that we do today, we need social interfaces that simplify the ever-more-complex trust issues that have emerged. However, there are also clear problems with the interfaces we have today. Making interfaces consistent means the circumstances of individuals have to be abstracted away. Enforcement of interfaces traditionally requires a monopoly on force, which means that hierarchies must be created and some individuals given more power. As our societies become larger, they seem to create more inequality and social alienation.
What comes next?
Past innovations have let us coordinate beyond our individual cognitive limits. Some argue that the written word built ancient empires and that the printing press empowered nation-states. We now have a global internet, instant video calls, and trustless distributed ledgers. Some LWers are thinking about prediction markets. Scott Alexander has posted a number of interesting thought experiments in alternative governance.
What failed ideas were just too early? What new and better social modes can we create?
This sort of framing has been useful to me. In particular, I often think about what kind of interface I am offering for people to interact with me via the clothes I wear, the way I talk, etc., in the sense of what options are salient to them. There can be issues with changing this interface if you are identified with it (the way you look is part of your personal identity rather than a fact of the world that you control). But even without the ability to fluidly change your interface, just knowing it's there can be helpful for making sense of, say, why people treat you the way they do.
I enjoyed your post. Specifically, using programs as an analogy for society seems like something that could generate a few interesting ideas. I have actually done the same and will share one of my own thoughts in this space at the end.
To summarize some of your key points:
Regarding mental prediction of group behavior as the definition of trust: I am not sure about this one. What about when you reliably predict someone will lie?
Regarding the continuum of formality for social rules: I agree that formality is an important dimension, although I would suggest decoupling enforcement from understanding. Consider people who work at corporations or live under tyrannies - these environments have high enforcement/concentrations of power, but often an opaque ruleset. Karl Popper in The Open Society spends a good amount of time discussing the institutionalization of norms into policies/laws, etc., vs. rules which simply give people in a hierarchy discretionary power. You may enjoy it - chapter 17, section VII. The overall point is that for rules to be understandable in a meaningful sense (beyond "don't piss off the monarch") they can't delegate discretion to other people.
Is the idea behind this maybe something like everybody in a democracy implements get_vote(issue) -> true|false?
Is this a problem?
Lastly, to share an idea that I am currently trying to research more extensively and that uses the software analogy:
What if someone founded a new political party whose candidates run on the platform that, if elected, they will send every bill up for a vote to their constituents using an app of sorts, and will always vote the way the constituency says - essentially having no opinions of their own. I think of this political party as an adapter that turns a representative democracy into a direct (or liquid, or whatever you implement in the app) democracy.
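To sketch that adapter in code (all class and function names here are my own invention, just to illustrate the shape of the idea):

```python
from typing import Callable, List

class Representative:
    """The interface the legislature expects: one member, one vote."""
    def vote(self, bill: str) -> bool:
        raise NotImplementedError

class AdapterRepresentative(Representative):
    """Adapts a direct-democracy constituency to the Representative interface."""
    def __init__(self, poll_constituents: Callable[[str], List[bool]]):
        # poll_constituents stands in for the hypothetical app backend.
        self.poll_constituents = poll_constituents

    def vote(self, bill: str) -> bool:
        # No opinions of their own: relay the constituency's simple majority.
        ballots = self.poll_constituents(bill)
        return sum(ballots) > len(ballots) / 2

# Toy constituency of three: two favor any bill mentioning "schools".
rep = AdapterRepresentative(lambda bill: ["schools" in bill] * 2 + [False])
print(rep.vote("fund schools act"))  # True (2 of 3)
print(rep.vote("omnibus bill"))      # False (0 of 3)
```

The legislature keeps calling the same `vote` interface it always has; only the implementation behind it changes - which is exactly what makes it an adapter.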
I think I am troubled by the same situation as you: how to organize a society that relies less on hierarchy, but still has law, order, and good coordination between people. To me, more direct forms of democracy are the next logical step. Doing the above would erode lobbying power and corruption. I am researching similar concepts for companies as well.
Thanks for the thoughtful response. Great summary. I think this is missing something:
Not exactly what I was going for. Many actors + game theoretic concerns -> complex simulation. Eventually good simulation becomes intractable. However, when a common set of rules is enforced strongly enough, each individual's utility function aligns with that set of rules. This simplifies the situation and creates a higher level interface. This is why I thought to include enforcement as an important dimension.
In response to this:
If you can reliably predict that someone's statements are untruths, then you can trust them to do the opposite of what they said. Sarcasm is trustworthy untruth. I think that the lack of trust arises only when I'm highly uncertain about which statements are truths vs. lies.
That said, I do think that this definition of trust is imperfect. You might "trust" your doctor to prescribe the right medicine, even if you don't know what decision they will make. I guess I could argue that my prediction is about the doctor acting in my best interest, rather than the particular action... I think the definition is imprecise, but still useful.
I appreciate the book recommendation and the intro to your thinking on this topic. I'll have to update when I have a chance to do the suggested reading :)
Thank you for the additional detail. I understand your point about conformity to rules, the way that increases predictability, and how that allows larger groups to coordinate effectively. I think I am getting hung up on the word trust, as I tend to think of it as when I take for granted that someone has good intentions towards me and basic shared values (e.g. they can't think what's best for me is to kill me). I think I am pretty much on board with everything else about the article.
I wonder if another productive way to think about all this (continuing to riff on interfaces, and largely restating what you have already said) would be something like the following. When people form relationships, they understand how each other will behave, and relationships enable coordination. Humans can handle understanding and coordinating up to Dunbar's number. To work around this limit above 150, we begin grouping people - essentially abstracting them back down to a single person (named, for example, 'Sales' or 'The IT Department'). If that group of people follows rules/process, then the group becomes understandable, and we can have a relationship and coordinate with that group. And if we all follow shared rules, everyone can understand and coordinate with everyone else without having to know them. I think I am pretty much agreeing with your point that small groups' ability to predict each other's behavior is key. Instead of saying one person trusts another person, I'd favor one person understands another person. I think this language is compatible with your examples of sarcasm, lies, and the prisoner's dilemma.
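Your grouping move can even be made quantitative. If each group presents a single interface to outsiders, the number of relationships anyone must track collapses - which is exactly the point of the abstraction. A rough Python sketch (it deliberately ignores cross-group individual ties):

```python
from math import comb

def ungrouped_links(n: int) -> int:
    """Pairwise relationships if everyone must know everyone."""
    return comb(n, 2)

def grouped_links(n: int, group_size: int) -> int:
    """Relationships inside each group, plus links between groups
    treated as single 'persons' (e.g. 'Sales', 'The IT Department')."""
    groups = n // group_size
    return groups * comb(group_size, 2) + comb(groups, 2)

print(ungrouped_links(150))    # 11175
print(grouped_links(150, 15))  # 10 groups: 10 * 105 + 45 = 1095
```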
Anyway, I'll leave it at that. Thank you for the discussion.
The idea that you erode lobbying power by direct democracy misunderstands political power. In a direct democracy, when there's a bill you don't like, you don't need to convince anyone who actually read the bill that it's bad. You can just run a lot of ads that say "bill X is bad because of Y".
To get good governance you need a system that allows votes for laws to be made based on a good analysis of the merits of the law.
I think it's fair to say direct democracy would not eliminate lobbying power. And to your final point, I agree that reliable educational resources, or perhaps some other solution, would be needed to make sure whoever is doing the voting is as rational as they can be. It's not sufficient to only give everyone a vote.
Regarding your point about running ads, to make sure I am understanding: do you mean the number of people who actually read the bill will be sufficiently low that a viable strategy to get something passed would be to appeal to the non-reading voters and misinform them?
Understanding what a law does takes effort and time even if you are generally educated. Even if there are educational resources available plenty of people don't have the time to inform themselves about every law.
Representative democracy is about giving that job of understanding laws to democratically elected officials and their staff.
In the absence of that, the people who spend full time engaging with laws need to get a paycheck from somewhere else. Those can be lobbyists. They can also be journalists - and most journalists also get paid by corporate masters.
I like this distinction, because it seems like a good starting point when you want to design a system.
I guess the government usually has power over a certain territory, while culture depends on a clear distinction between ingroup and outgroup. In both cases the rules are not universal, so how will you predict who follows them and who doesn't?