This post originally appeared here; I've updated it slightly and reposted it as a follow-up to this post.

David Friedman has a fascinating book, Legal Systems Very Different From Ours, on alternative legal systems. One chapter focuses on prison law - not the nominal rules, but the rules enforced by prisoners themselves.

The unofficial legal system of California prisoners is particularly interesting because it underwent a phase change sometime after the 1960’s.

Prior to the 1960’s, prisoners ran on a decentralized code of conduct - various unwritten rules roughly amounting to “mind your own business and don’t cheat anyone”. Prisoners who kept to the code were afforded some respect by their fellow inmates. Prisoners who violated the code were ostracized, making them fair game for the more predatory inmates. There was no formal enforcement; the code was essentially a reputation system.

Sometime after the 1960’s, that changed. During the code era, California’s total prison population was only about 5,000, with about 1,000 inmates in a typical prison. That’s quite a bit more than Dunbar’s number, but still low enough for a reputation system to work through second-order connections. By 1970, California’s prison population had ballooned past 25,000; today it is over 170,000. The number of prisons also grew, but not nearly as quickly as the population, and today’s prisoners frequently move across prisons anyway. In short, a decentralized reputation system became untenable. There were too many other inmates to keep track of.

As the reputation system collapsed, a new legal institution grew to fill the void: prison gangs. Under the gang system, each inmate is expected to affiliate with a gang (though most are not formal gang members). The gang will explain the rules, often in written form, and enforce them on their own affiliates. When conflict arises between affiliates of different gangs, the gang leaders negotiate settlement, with gang leaders enforcing punishments on their own affiliates. (Gang leaders are strongly motivated to avoid gang-level conflicts.) Rather than needing to track reputation of everyone individually, inmates need only pay attention to gangs at a group level.

Of course, inmates need some way to tell who is affiliated with each gang - thus the rise of racial segregation in prison. During the code era, prisoners tended to associate by race and culture, but there was no overt racial hostility and no hard rules against associating across race. But today’s prison gangs are highly racially segregated, making it easy to recognize the gang affiliation of individual inmates. They claim territory in prisons - showers or ball courts - and enforce their claims, resulting in hard racial segregation.

The change from a small, low-connection prison population to a large, high-connection population was the root cause. That change drove a transition from a decentralized, reputation-based system to prison gangs. This, in turn, involved two further transitions. First, a transition from decentralized, informal unwritten rules to formal written rules with centralized enforcement. Second, a transition from individual to group-level identity, in this case manifesting as racial segregation.

Generalization

This is hardly unique to prisons. The pattern is universal among human institutions. In small groups, everybody knows everybody. Rules are informal, identity is individual. But as groups grow:

  • Rules become formal, written, and centrally enforced.
  • Identity becomes group-based.

Consider companies. I work at a ten-person company. Everyone in the office knows everyone else by name, and everyone has some idea of what everyone else is working on. We have nominal job titles, but everybody works on whatever needs doing. Our performance review process is to occasionally raise the topic in weekly one-on-one meetings.

Go to a thousand or ten thousand person company, and job titles play a much stronger role in who does what. People don’t know everyone, so they identify others by department or role. They understand what a developer or a manager does, rather than understanding what John or Allan does. Identity becomes group-based. At the same time, hierarchy and bureaucracy are formalized.

The key parameter here is the number of interactions between each pair of people (you should click that link, it's really cool). In small groups, each pair of people has many interactions, so people get to know each other. In large groups, there are many one-off interactions between strangers. Without past interactions to fall back on, people need other ways to figure out how to interact with each other. One solution is formal rules, which give guidance on interactions with anyone. Another solution is group-based identity - if I know how to interact with lawyers at work in general, then I don’t need to know each individual lawyer.
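To make the mechanism concrete, here is a minimal simulation sketch. It is not the linked game (which uses a much richer set of strategies); the payoff values and the two strategies below (tit-for-tat and always-defect) are assumptions chosen purely for illustration.

```python
import random

# Standard prisoner's dilemma payoffs (assumed values, for illustration only):
# temptation > reward > punishment > sucker's payoff
T, R, P, S = 5, 3, 1, 0

def play_match(strat_a, strat_b, n_rounds):
    """Play n_rounds between two strategies; return each player's total payoff."""
    score_a = score_b = 0
    last_a = last_b = "C"  # tit-for-tat opens by cooperating
    for _ in range(n_rounds):
        move_a = "D" if strat_a == "defector" else last_b  # tit-for-tat copies opponent's last move
        move_b = "D" if strat_b == "defector" else last_a
        pay = {("C", "C"): (R, R), ("D", "D"): (P, P),
               ("D", "C"): (T, S), ("C", "D"): (S, T)}[(move_a, move_b)]
        score_a += pay[0]
        score_b += pay[1]
        last_a, last_b = move_a, move_b
    return score_a, score_b

def average_payoff_per_round(population, n_rounds, n_matches=5000):
    """Randomly pair individuals and report mean per-round payoff by strategy."""
    totals = {s: [0, 0] for s in set(population)}  # strategy -> [total score, rounds played]
    for _ in range(n_matches):
        a, b = random.sample(population, 2)
        sa, sb = play_match(a, b, n_rounds)
        totals[a][0] += sa; totals[a][1] += n_rounds
        totals[b][0] += sb; totals[b][1] += n_rounds
    return {s: round(score / rounds, 2) for s, (score, rounds) in totals.items() if rounds}

population = ["tit_for_tat"] * 50 + ["defector"] * 50
for n_rounds in (1, 2, 5, 20):
    print(n_rounds, "rounds per pairing:", average_payoff_per_round(population, n_rounds))
```

With one or two rounds per pairing, the defectors' average payoff beats the tit-for-tat players'; with more repeated rounds the ordering flips. The exact crossover depends entirely on the assumed payoffs and strategy mix - the only point is that the same population flips from "defection pays" to "cooperation pays" purely as the expected number of repeat interactions rises.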

In this regard, prisons and companies are just microcosms of society in general.

Society

At some point over the past couple hundred years, society underwent a transition similar to that of the California prison system.

In 1800, people were mostly farmers, living in small towns. The local population was within an order of magnitude of Dunbar’s number, and generally small enough to rely on reputation for day-to-day dealings.

Today, that is not the case [citation needed].

Just as in prisons and companies, we should expect this change to drive two kinds of transitions:

  • A transition from informal, decentralized rules to formal, written, centrally-enforced rules.
  • A transition from individual to group-level identity.

This can explain an awful lot of the ways in which society has changed over the past couple hundred years, as well as how specific social institutions evolve over time.
To take just a few examples…

  • Regulation. As people have more one-off interactions, reputation becomes less tenable, and we should expect formal regulation to grow. Conversely, regulations are routinely ignored among people who know each other.
  • Litigation. Again, with more one-off interactions, we should expect people to rely more on formal litigation and less on informal settlement. Conversely, people who interact frequently rarely sue each other - and when they do, it’s expected to mess up the relationship.
  • Professional licensing. Without reputation, people need some way to signal that they are safe to hire. We should expect licensing to increase as pairwise interactions decrease.
  • Credentialism. This is just a generalization of licensing. As reputation fails, we should expect people to rely more heavily on formal credentials - “you are your degree” and so forth.
  • Stereotyping. Without past interactions with a particular person, we should expect people to generalize based on superficially “similar” people. This could be anything from the usual culprits (race, ethnicity, age) to job roles (actuaries, lawyers) to consumption signals (iPhone, Converse, fancy suit).
  • Tribalism. From nationalism to sports fans to identity politics, an increasing prevalence of group-level identity means an increasing prevalence of tribal behavior. In particular, I'd expect that social media outlets with more one-off or low-count interactions are characterized by more extreme tribalism.
  • Standards for impersonal interactions. “Professionalism” at work is a good example.

I’ve focused mostly on negative examples here, but it’s not all bad - even some of these examples have upsides. When California’s prisons moved from an informal code to prison gangs, the homicide rate dropped like a rock; the gangs hate prison lockdowns, so they go to great lengths to prevent homicides. Of course, gangs have lots of downsides too. The point which generalizes is this: bodies with centralized power have their own incentives, and outcomes will be “good” to exactly the extent that the incentives of the centralized power align with everybody else’s incentives and desires.

Consider credentialism, for example. It’s not all bad - to the extent that we now hire based on degree rather than nepotism, it’s probably a step up. But on the other hand, colleges themselves have less than ideal incentives. Even setting aside colleges’ incentives, the whole credential system shoehorns people into one-size-fits-all solutions; a brilliant patent clerk would have a much more difficult time making a name in physics today than a hundred years ago.

Takeaway

Of course, all of these examples share one critical positive feature: they scale. That’s the whole reason things changed in the first place - we needed systems which could scale up beyond personal relationships and reputation.

This brings us to the takeaway: what should you do if you want to change these things? Perhaps you want a society with less credentialism, regulation, stereotyping, tribalism, etc. Maybe you like some of these things but not others. Regardless, surely there’s something somewhere on that list you’re less than happy about.
The first takeaway is that these are not primarily political issues. The changes were driven by technology and economics, which created a broader social graph with fewer repeated interactions. Political action is unlikely to reverse any of these changes; the equilibrium has shifted, and any policy change would be fighting gravity. Even if employers were barred from making hiring decisions based on college degrees, they’d find some work-around which amounted to the same thing. Even if the entire Federal Register disappeared overnight, de facto industry regulatory bodies would pop up. And so forth.

So if we want to e.g. reduce regulation, we should first focus on the underlying socioeconomic problem: fewer interactions. A world of Amazon and Walmart, where every consumer faces decisions between a million different products, is inevitably a world where consumers do not know producers very well. There are just too many products and companies to keep track of the reputation of each. To reduce regulation, first focus on solving that problem, scalably. Think Amazon reviews - it’s an imperfect system, but it’s far more flexible and efficient than formal regulation, and it scales.

Now for the real problem: online reviews are literally the only example I could come up with where technology offers a way to scale up reputation-based systems, and maybe someday roll back centralized control structures or group identities. How can we solve these sorts of problems more generally? Please comment if you have ideas.

Comments

“Phase change in 1960’s” - the first claim is that California’s prison pop went from 5k to 25k. According to Wikipedia this does seem to happen… but then it’s immediately followed by a drop in prison population between 1970 and 1980. It also looks like the growth is pretty stable starting in the 1940s.

According to this, prison pop in California was a bit higher than 5k historically, 6k-8k, and started growing in 1945 by about 1k/year fairly consistently until 1963. It was then fairly steady, even dropping a bit, until 1982, when it REALLY exploded, more than doubling from 28k in 1981 to 57k in 1986!

I had planned to do more follow up (for example, looking at prison murder rates) but this took longer than I wanted.

Overall these numbers are technically consistent but from the article I expected to find something more like the 1980s happening in the 1960s. I'd very much want to see how stats from the 1980s compare, and whether there were new changes in that much larger and faster period of growth.

[Graph of California’s prison population over time, from that Wikipedia page]

Thanks for finding these!

This is actually a pretty abysmal failure of an epistemic spot check.

For people not paying close attention: I believe magfrump is saying that they failed to check the claims as thoroughly as intended, not that the claims were false; those numbers do broadly agree with the post. (I misread it the first time through and was very confused about why the claims were supposedly failing an epistemic spot check when the numbers were generally consistent with the post's claims.)

That is not what I intended, and I think we disagree here, though probably I don't endorse calling it an abysmal failure.

  1. in the original post you claim that total prison populations were about 5k, though they were noticeably more for a long time. This is a small point.

  2. you claim that prison populations ballooned in the 1960s. According to the sources linked, prison populations grew consistently from 1945ish-1968ish. Certainly they grew to a larger size than ever before, but it's not at all clear where in that range the escape from Dunbar's number would occur (why 25k, why not 10k? I didn't see figures for individual prison populations). Maybe there was a more gradual process of displacement, or a series of specific incidents where the code system failed? But this seems like an important missing piece. I went in looking for a specific phase change in prison population and what I found was a phase change in prison growth, starting 25 years earlier.

  3. you say prison populations reached 25k in 1970, then that they are currently 170k. These numbers are reasonably accurate, but misleading, suggesting smooth growth from 1970 to the present. 25k was the high water mark and prisons then dropped in population for a decade before REALLY exploding in the 1980s. That seems like the point where I'd expect a phase change in systems to be necessary, or at least to pose a serious challenge to how well the gang-based conflict resolution methods scale.

Overall I went into the search expecting to look at a graph and see "here's the obvious phase change" and have that timeline match up, and for that change to be over a short period of time. I saw a clear phase change at a completely different set of dates from those outlined in the post, and that makes me skeptical. Looking at these in detail (instead of trying to summarize a slow and somewhat frustrating search before going to bed) I don't think the points are terribly damaging to the narrative of the article but they do leave me with more questions.

I'll edit my original comment to remove the "abysmal failure" part, so that it accurately describes my spot check.

Ah, I see. To draw an analogy: if you boil water, there is not a big upward jump in temperature at the point where the water turns to steam. Likewise here: the prison population is like temperature. I would not particularly expect a large upward jump in the population, right around the time of the phase change.

If there were a large upward jump in population during some time, then that would be a more-than-usually-likely time to cross phase-change thresholds, just as a large jump in temperature is likely to cross phase-change thresholds. But the converse does not apply: a phase change can occur even when the underlying parameter is changing slowly. So while this is Bayesian evidence against the model on net, it is quite weak.

(I do think the big jump in prison population in the 80's is indicative of a phase change in a different system somewhere upstream of imprisonment, but I think that's largely a separate phenomenon from that discussed in the post. Continuing the analogy: if we see a sharp jump in temperature, that's probably indicative of something sharply changing in the system, but not necessarily water boiling.)

Anyway, I did just go back and re-check the relevant chapter of Legal Systems Very Different From Ours, and I do think my summary was wrong on the timeline: it says that there has been a dramatic increase in gangs and gang members "since the 60's", with the "Code era" covering roughly the first half of the twentieth century. So I will definitely edit that. Thanks again for the check!

(Note that the timeline from the book is a bit more consistent with what you expected, though I still maintain that we shouldn't put particularly high prior on a big population jump around the time of the phase change. Weak evidence, either way.)

(That is a good cite of Sarah's blogpost. I might use it like that in future.)

I just wanted to say "thanks for actually doing an epistemic spot check here". I think* I currently endorse John's response explanation about why he doesn't think "sharp increase in prisoners" is the thing to be looking for, but I think doing any kind of serious spot check is a big chunk of work that's often not as rewarding as it should be. Have a strong upvote.

I didn't notice until just recently that this post fits into a similar genre to what (I think) the Moral Mazes discussion is pointing at (which may be different from what Zvi thinks).

Where one of the takeaways from Moral Mazes might be: "if you want your company to stay aligned, try not to grow the levels of hierarchy too much, or be extremely careful when you do."

"Don't grow the layers of hierarchy" is (in practice) perhaps a similar injunction to "don't grow the company too much at all" (since you need hierarchy to scale)

Immoral Mazes posits a specific failure due to middle managers being disconnected from reality, and evolving an internal ecosystem that then sets out to protect itself. This post points at an (upstream?) issue where, regardless of middle management, the fundamental reality is that people cannot rely on repeated-interactions to build trust. 

I actually totally forgot until just now the final paragraph, and key point, of this post:

So if we want to e.g. reduce regulation, we should first focus on the underlying socioeconomic problem: fewer interactions. A world of Amazon and Walmart, where every consumer faces decisions between a million different products, is inevitably a world where consumers do not know producers very well. There are just too many products and companies to keep track of the reputation of each. To reduce regulation, first focus on solving that problem, scalably. Think Amazon reviews - it’s an imperfect system, but it’s far more flexible and efficient than formal regulation, and it scales.

Now for the real problem: online reviews are literally the only example I could come up with where technology offers a way to scale up reputation-based systems, and maybe someday roll back centralized control structures or group identities. How can we solve these sorts of problems more generally? Please comment if you have ideas.

I think this maps to one of the key (according to me) problems raised by the Immoral Mazes sequence: we don't know how to actually identify and reward competence among middle managers, so all we have are easily goodhartable metrics. (And in the case of middle management that there's a deep warping that happens because the thing that got goodharted on was "office politics")

Unfortunately... well, nobody commented with ideas on this post, and also I don't know that anyone came up with any way to track competence of management either. 

The actionable place where this matters in my local environment is EA grantmakers funding researchers, who are often pretty hard to evaluate. I think this is a serious bottleneck to scaling efforts.

I notice that forecasting is one of few domains where rationalsphere-folk are experimenting with scalable solutions for evaluation. I've been somewhat pessimistic about forecasting, but I think this might have convinced me to allocate more attention to it.

...huh. I would not have expected this post to be closely associated in my head with amplifying generalist forecasting. But, now I think it is.

My guess is that part of what's going on here is that in certain ways attempting to optimize for coordination is at greater risk of goodharting than other things. To take an example from the post to its limit, the freelancer who invests 100% in selling their services and 0% in being skilled at them or providing them is a fraud. But also the freelancer who invests 100% in skills but 0% in selling is out of business.

So there's a need for some sort of dynamic balance.

But my guess is that for whatever reasons (documented in Moral Mazes no doubt) certain kinds of organizations put pressure on the managers to go all the way in one direction, rather than finding that balance.

Over the years I've become increasingly more negative about bureaucracy and regulations, so I appreciate this essay that contextualizes such issues and attempts to explain how such social institutions evolve. That said, I only read the essay during the Nomination process, not in 2019, so I can't say whether it will affect my thinking long-term.

This post was fairly central in my thinking about how societies work. 

There's a particular genre of LessWrong post which is "summarize a chapter of Legal Systems Very Different From Ours", where I feel a little bad about giving most of the credit to the post author rather than David Friedman. If this made it through the review I think I'd want David Friedman somewhat involved?

I'm curious what your thoughts are on this now that we've experienced extended shelter in place & have even less access to repeated interactions.

Interesting question.

I don't think that shelter in place has had that large an effect on most of the kinds of interaction relevant here. People still have largely the same economic interactions - we're still buying mostly the same products from mostly the same sources. There's maybe some shift in middlemen - more Amazon, less brick-and-mortar - but our interactions with the brick-and-mortar Walmart store weren't really any more personal than the interactions with Amazon anyway. There might have been a time when everyone was on a first-name basis with their grocer, but those days were long past even before COVID.

To the extent that COVID has had an effect, I expect it's probably increased the proportion of repeated interactions, rather than decreased that proportion. There just aren't as many opportunities to go out and interact with strangers. We're interacting less with cashiers or waiters or bartenders or hotel staff. Conversely, we're still interacting about as much with family or coworkers - those close ties are relatively less sensitive to the barriers created by COVID.

I'm very curious about what the actual rules of the various gangs look like. If they exist in written form in an environment where it's easy to confiscate documents, I would expect them to be publicly accessible.

Well I messed up. I even wrote a nomination comment for this post, but it was written this year. Silly me. I'll nominate it next year instead.

Hey Ben, it's 2020 now!

Very interesting article. Some of the social engineering implications are really interesting (I don't mean social engineering in a bad way). For example: if you believe the gang system in prisons is leading to increased re-offending rates, as people who leave the prison system are recruited into gang structures outside of it, then you can think about having more, smaller prisons with prisoners moving between locations very infrequently.

Finally saw this courtesy of the 2019 review; you might be interested in David Skarbek's two books on prison order, "The Social Order of the Underworld" and "The Puzzle of Prison Order." They flesh out this question in the context of prisons quite a bit (and have been getting a lot of buzz in the political science world).

Good post! Thanks for writing this, it adds additional clarity and generality to things I've been thinking about recently.

This prompted me to finally purchase David Friedman's book on legal systems (after reading a few different reviews of it that each sounded fascinating).

Are you aware of any research in the area that numerically modeled this Dunbar instability?

Yes! Click the link that says "you should click that link, it's really cool".

The link leads to an iterated prisoner's dilemma game; I'm not sure what this has to do with the Dunbar instability.

Iterated prisoners' dilemma is used to model the breakdown of reputation. Roughly speaking, when the interaction count is high, there's plenty of time to realize you're playing against a defector and to punish them, so defectors don't do very well - that's a reputation system in action. But as the interaction count gets lower, defectors can "hit-and-run", so they flourish, and the reputation system breaks down. The link goes into all of this in much more depth.

Dunbar just comes in as a (very) rough estimate for where the transition point occurs.
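To put rough numbers on the hit-and-run point, take one standard payoff parameterization (assumed here purely for illustration): temptation 5, mutual cooperation 3, mutual defection 1, sucker 0. Against a tit-for-tat opponent, a defector gets 5 once and 1 every round thereafter, while two cooperators get a steady 3 per round:

$$\text{defector vs. tit-for-tat: } \frac{5 + (n-1)\cdot 1}{n} \text{ per round}, \qquad \text{cooperator vs. cooperator: } 3 \text{ per round}.$$

Under this crude comparison (ignoring the population mix and other strategies), the defector's per-round payoff falls toward 1 as $n$ grows, and exceeds 3 only when $n < 2$ - so hit-and-run beats steady cooperation only in essentially one-shot interactions, and loses once interactions repeat.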

Fwiw, I keep bouncing off the Iterated Prisoner's Dilemma app, because no one really gave it much context and the initial few rounds seemed very much like stuff I already knew. (This is more of a general message to people trying to share the app – I think if this were my first exposure to it I'd have bounced off again, and if people had explained how long it takes to play and roughly what they got out of it, I'd have played it sooner.)
