In my reading group, a friend asked why we see so many examples of failures to co-ordinate, especially against negative-sum competition, i.e. Moloch.
In some ways, this is a strange question. For one, what is the a priori rate of Molochian failures in human affairs? If we look at animals, they don't seem to be doing so great at co-ordinating to avoid negative-sum competition. So shouldn't we expect humans to be even worse at grappling with this god of iron cocks?
Maybe. But nature's failures tend to be very local. Animals don't have a Great Leap Forward, where an entire nation falls into madness, let alone risk the total annihilation of all life on the planet via AI. In some ways, our co-ordination failures are keeping pace with our increased capacities in general.
Which kind of matches my friend's actual position. He clarified that what surprises him is that, as we've scaled up our species, we don't seem to have unlocked new modes of co-operation at ever larger scales that push back Moloch step by step.
What to make of this? Well, to borrow a Bostrom-ism, we've got the worst possible co-ordination tech that could still allow for an industrial civilization. That is, we did unlock new modes of co-ordination, and immediately afterwards we scaled ourselves up to the point where those mechanisms started breaking down. This generated new kinds of co-ordination failures at the largest scale, even as we succeeded at patching (some) older co-ordination failures at smaller scales.
For instance, we've got a lot less war now than we did in the past. We eradicated smallpox! We've got unusually good property rights! We can talk around the world! We (nearly) all use the same set of units!
And, of course, we've got whole new co-ordination failures, like existential risks.
What're some implications of this? Well, we're expecting a coming singularity, which means we can still expect a huge increase in the number of general intelligences in the world if things go well. Leaving aside why this might not work out great for current people, I'd like to point out that this influx of minds would probably push our current co-ordination mechanisms well past their breaking point.
To me, that suggests either some kind of social collapse, or society developing entirely new modes of co-operation in order to make use of all that extra cognitive horsepower.
The obvious new modes are digital cloning + memory swapping. Both would let us create many minds that are very similar to each other, with common knowledge of that similarity. This would make co-ordination a lot easier, and hence co-operation too.
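To see why common knowledge of similarity does so much work, consider the classic twin prisoner's dilemma. The sketch below is a toy model of my own framing, with made-up payoff numbers: against a stranger, defection dominates, but against a known exact copy the off-diagonal outcomes are unreachable, so mutual co-operation wins.

```python
# Toy one-shot prisoner's dilemma. Payoffs are illustrative,
# following the usual ordering T > R > P > S.
PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3,  # mutual co-operation (R)
    ("C", "D"): 0,  # I'm exploited (S)
    ("D", "C"): 5,  # I exploit (T)
    ("D", "D"): 1,  # mutual defection (P)
}

def best_move_vs_stranger() -> str:
    # D strictly dominates C: whatever the other player does,
    # defecting pays me more. Moloch wins between strangers.
    assert all(PAYOFFS[("D", o)] > PAYOFFS[("C", o)] for o in "CD")
    return "D"

def best_move_vs_clone() -> str:
    # Against a known exact copy running the same decision procedure,
    # my move *is* their move: only (C,C) and (D,D) are reachable.
    return max("CD", key=lambda m: PAYOFFS[(m, m)])

print(best_move_vs_stranger())  # -> D
print(best_move_vs_clone())     # -> C
```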
I think it is hard to overstate how big a difference this would make. Because the number of communication channels in a flat network grows as the square of the participant count, we're forced to come up with loads of schemes that effectively reduce the number of decision-making nodes in the network, e.g. hierarchical orgs. These are ineffective in a lot of ways, and we hit diminishing returns past probably a few hundred to a few thousand people. They're just the best we can do at the moment.
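To make the arithmetic concrete, here's a back-of-the-envelope sketch. A flat network needs n(n−1)/2 pairwise channels, while a strict hierarchy prunes that to n−1 reporting lines, paying for it in extra layers. The span-of-control of 10 is an assumption for illustration, not a measured figure:

```python
def flat_channels(n: int) -> int:
    # Pairwise channels in a flat network: n choose 2, i.e. quadratic growth.
    return n * (n - 1) // 2

def tree_channels(n: int) -> int:
    # A strict hierarchy is a tree: exactly n - 1 reporting lines.
    return n - 1

def tree_layers(n: int, span: int = 10) -> int:
    # Layers of hierarchy needed if each node handles `span` direct reports.
    # Every extra layer adds delay and garbling -- the cost of the trick.
    layers = 1
    while span ** layers < n:
        layers += 1
    return layers

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9,} people: flat={flat_channels(n):>15,}  "
          f"tree={tree_channels(n):>9,}  layers={tree_layers(n)}")
```

The quadratic column is what forces the tree on us; the layers column is why the tree itself tops out.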
But with clones, you can get tens of trillions of entities moving around with eerie synchronization. Case in point: human bodies and their cells.
We can then stack our old co-ordination tech, hierarchical organizations, far-mode tech and so on, on top of this in order to get even more scale. Perhaps enough to handle the first few years' worth of growth past the singularity, though who really knows.
Another implication is that these minds will need much better self-knowledge to pull off this kind of co-operation, unlike clones in fiction. They'll have to reason about themselves less in far-mode, understand what they're really like in various situations, and so forth.
But there's an issue. Namely, I think that historically, the new modes of social co-ordination we've unlocked at large scales have relied more on far-mode cognition.
When I look at the biggest organizations I know of, the Chinese nation, the Catholic Church and so on, I note that they tend to use a lot of memes promoting goals in language that is quite far-mode.
For instance, ideals that movements rally behind, five-word slogans that serve as shibboleths shaping action, sacred cows that can't be disputed, etc. I mean, look at the Bible, for Pete's sake. That's as far-mode as it gets, and it is perhaps the most successful co-ordination-inducing artifact in human history.
Conversely, at the small scale, I mostly use near-mode reasoning to co-ordinate with my colleagues and family, since we can rely on a lot more shared concrete details that are common knowledge: what time we tend to wake up, where we get food, what our comparative advantages are, etc.
I think this tendency, of larger-scale co-ordination tech leaning more on our far-mode cognition, plays a big part in the kinds of co-ordination failures we see in the world at large, e.g. co-ordinating around blocking nuclear power in order to protect the environment. However, I'm not sure it leads to Molochian failures per se.
And I'm not sure how it squares with the next mode of co-ordination requiring us to think of ourselves less in far-mode.
Perhaps one way to square the circle is that we'll still rely on far-mode reasoning to co-ordinate at the largest scales, but it will be the least distortionary form of far-mode reasoning it can be. Perhaps the sacred cow of the future will be something like mathematics.
Probably it will be much weirder than that.