Sometimes, groups of humans disagree about what to do.
We also sometimes disagree about how to decide what to do.
Sometimes we even disagree about how to decide how to decide.
Among the philosophically unsophisticated, there is a sad, frustrating way this can play out: people resolve "how to decide" with yelling, or bloodshed, or (if you're lucky) charismatic leaders assembling coalitions. This can leave lots of value on the table, or actively destroy value.
Among the extremely philosophically sophisticated, there are different sad, frustrating ways this can play out: people have well-thought-out principles informing their sense of "how to coordinate well." But their principles are not the same, and they don't have good meta-principles for when and how to compromise. They spend hours (or years) arguing about how to decide. Or they burn a lot of energy in conflict. Or they end up walking away from what could have been a good deal, if only people were a bit better at communicating.
I’ve gone through multiple iterations on this sequence intro, some optimistic, some pessimistic.
Optimistic takes include: “I think rationalists are in a rare position to actually figure out good coordination meta-principles, because we are smart, and care, and are in positions where good coordination actually matters. This is exciting, because coordination is basically the most important thing. Anyone with a shot at pushing humanity’s coordination theory and capacity forward should do that.”
Pessimistic takes include: “Geez louise, rationalists are all philosophical contrarians with weird, extreme, self-architected psychology who are a pain to work with”, as well as “Actually, the most important facets of coordination to improve are maybe more like ‘slightly better markets’ than like ‘figuring out how to help oddly specific rationalists get along’.”
I started writing this post several years ago because I was annoyed at, like, 6 particular people, many of them smarter and more competent than me, many of whom were explicitly interested in coordination theory, who nonetheless seemed to despair at coordinating with rationalists-in-particular (including each other). The post grew into a sequence. The sequence grew into a sprawling research project. My goal was “provide a good foundation to get rationalists through the Valley of Bad Coordination”. I feel like we’re so close to being able to punch above our weight at coordination and general competence.
I think my actual motivations were sort of unhealthy. “If only I could think better and write really good blogposts, these particular people I’m frustrated with could get along.”
I’m currently in a bit of a pessimistic swing, and do not expect that writing sufficiently good blogposts will fix the things I was originally frustrated by. The people in question (probably) have decent reasons for having different coordination strategies.
Nonetheless, I think “mild irritation at something not quite working” is pretty good as motivations go. I’ve spent the past few years trying to reconcile the weirdly-specific APIs of different rationalists who each were trying to solve pretty real problems, and who had developed rich, complex worldviews along the way that point towards something important. I feel like I can almost taste the center of some deeper set of principles that unite them.
Since getting invested in this, I’ve come to suspect “If you want to succeed at coordination, ‘incremental improvements on things like markets’ is more promising than ‘reconcile weird rationalist APIs’.” But, frustration with weird rationalist APIs was the thing that got me on this path, and I think I’m just going to see that through to the end.
Here is this sequence, and here is what the deal is:
Deep Inside Views, and the Coordination Frontier
A common strength of rationalists is having deep inside-view models. Rich, gears-based inside views are often a source of insight, but are hard to communicate about because they are many inferential steps away from common knowledge.
Normally, that’s kinda fine. If you’re not specifically building a product together, it’s okay if you mostly go off in different directions, think hard-to-explain-thoughts, and only occasionally try to distill your thoughts down into something the median LessWronger can understand.
But it’s trickier when your rich, nuanced worldview is specifically about coordinating with other people.
The Coordination Frontier is my term for “the cutting edge of coordination techniques, which are not obvious to most people.” I think it's a useful concept for us to collectively have as we navigate complex new domains in the coming years.
Sometimes you are on the coordination frontier, and unfortunately that means it's either your job to explain a principle to other people, or you have to sadly watch value get destroyed. Often, this is in the middle of a heated conflict, where noticing-what’s-going-on is particularly hard.
Other times, you might think you are on the coordination frontier, but actually you're wrong – your principles are missing something important and aren’t actually an improvement. Maybe you’re just rationalizing things that are convenient for you.
Sometimes, Alice and Bob disagree on principles, but are importantly both somewhat right, and would benefit from somehow integrating their different principles into a coherent decision framework.
When you are trying to innovate along the coordination frontier, there aren’t purely right-or-wrong answers. There are different things you can optimize for. But, I think there are righter and wronger answers. There are principles that constrain what types of coordination solutions are appropriate, given a particular goal. There are failure modes you can fall into, or, notice and avoid.
And, if you are a particular agent with a particular set of skills and cognitive bandwidth and time and goals, interacting with other agents with particular goals and resources…
...then I think there might be a fairly narrow range of theoretically-best answers to the question “how do I coordinate with these people?”
A rationalist failure mode is to get overly attached to the belief that you’ve found “the right answer.” One of the more important meta-coordination principles is “We don’t really have time to agree on which of our weird philosophical positions is right, and we need to coordinate anyway”.
Nonetheless, I do think there is something important about the fact that “righter answers exist.”
My overall preferred approach is a mixture of pragmatism in the day-to-day, and curious, lawful thinking about the theoretical ideal.
Distinctions near the Frontier
A few people read an earlier draft of this post and were like “Cool, but, I don’t know that I could use ‘Coordination Frontier’ in a sentence.” I think it’s easiest to describe it by contrasting a few neighboring concepts:
- The Coordination Baseline
- Coordination Pioneers
- The Coordination Frontier
- The Coordination Limit
The Coordination Baseline
AKA “mainstream civilization”
The Coordination Baseline is what most people around you are doing. In your particular city or culture, what principles do people take as obvious? Which norms do they follow? Which systems do they employ? Does a shopkeeper charge everyone a standardized price for an item, or do they haggle with each individual? Do people vote? Can you generally expect people to be honest? When people communicate, does it tend to be Ask Culture or Guess Culture?
Who exactly this is referring to depends on the context of a discussion. It might refer to an entire country, a city, or a particular subculture. But there is at least some critical mass of people who interact with each other, who have baseline expectations for how coordination works.
Some people explore novel ways of coordinating, beyond the baseline. They develop new systems and schemes and norms – voting systems, auctions, leadership styles, etc. They are Coordination Pioneers.
Sometimes they are solving fully novel problems that have never been solved before – such as inventing a completely new voting system.
Sometimes they are following in the footsteps of others who have already blazed a trail. Perhaps they are reinventing approval voting, not realizing it’s already been discovered. Or perhaps they read about it, get excited about it, and join a political movement to get the new voting system adopted.
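Approval voting itself is simple enough to sketch in a few lines, which is part of why it makes a nice example of a coordination tool sitting just past the baseline: each voter approves of any subset of candidates, and the candidate with the most approvals wins. (The ballots and candidate names below are made up for illustration.)

```python
from collections import Counter

def approval_winner(ballots):
    """Tally approval ballots. Each ballot is the set of candidates
    a voter approves of; the candidate with the most approvals wins."""
    tally = Counter()
    for ballot in ballots:
        tally.update(ballot)  # one approval per candidate on the ballot
    # most_common(1) returns [(candidate, approval_count)]
    return tally.most_common(1)[0][0]

ballots = [
    {"Alice", "Bob"},   # this voter approves of both Alice and Bob
    {"Alice"},
    {"Carol", "Bob"},
    {"Bob"},
]
print(approval_winner(ballots))  # Bob, with 3 approvals
```

Unlike plurality voting, no one here has to strategically collapse their preferences down to a single name, which is roughly the coordination improvement its proponents are excited about.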
The Coordination Frontier
The upper limit of human knowledge of how to coordinate well.
The coordination frontier is the Pareto frontier of “what coordination strategies we are theoretically capable of implementing.”
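The “Pareto frontier” framing can be made concrete: if you score coordination strategies along multiple dimensions (say, value captured and trust preserved), the frontier is the set of strategies not strictly dominated by any other strategy. A minimal sketch, where the strategy names and scores are entirely made up for illustration:

```python
def pareto_frontier(options):
    """Return the options not strictly dominated by any other option.
    Each option is (name, scores); higher is better on every dimension."""
    def dominates(a, b):
        # a dominates b if a is at least as good everywhere,
        # and strictly better somewhere
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    return [
        (name, scores) for name, scores in options
        if not any(dominates(other, scores) for _, other in options)
    ]

# Hypothetical strategies scored on (value captured, trust preserved).
options = [
    ("yelling",        (1, 1)),
    ("haggling",       (3, 2)),
    ("fixed prices",   (3, 4)),
    ("explicit norms", (5, 3)),
]
print(pareto_frontier(options))  # fixed prices and explicit norms survive
```

Pushing the frontier forward, in this picture, means discovering a new strategy that dominates one of the current survivors.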
The frontier changes over time. Once upon a time, our best coordination tools were “physical might makes right, and/or vaguely defined exchange of social status.” Then we invented money, and norms like “don’t lie”.
During the cold war, the United States and Soviet Union were suddenly thrown into a novel, dangerous situation where either could lay devastating waste to the other. Game theorists like Thomas Schelling had to develop strategies that incorporated the possibility of mutually assured destruction, where in some ways it was better if both sides had the ability to reliably, inevitably counterattack.
Most people in the world probably didn’t understand the principles underlying MAD at the time, but somewhere in the world were people who did. (Hopefully, high-ranking generals and diplomats in the US and Soviet Union.)
The Coordination Limit
The upper limit of what is theoretically possible.
For any given set of agents, in a given situation, with a given amount of time to think and communicate, there are limits on the best joint decisions they can reach. The Coordination Limit is the theoretical upper bound of how much value they could jointly optimize for.
There will be different points along a curve, optimizing for different things. There might be multiple “right answers”, for any given optimization target. But I think the set of options for “perfect-ish play” are relatively constrained.
I think it’s useful to track separately “what would N fully informed agents do, if they are perfectly skilled at communicating and decisionmaking”, as well as “given a set of agents who aren’t fully knowledgeable of coordination theory, with limited communication or decisionmaking skills and some muddled history of interaction, what is the space of possible optimization targets they can hit given their starting point?”
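One toy way to picture the “limit”: in a small game, fully informed agents with perfect communication could simply pick the joint action that maximizes total value, while agents who can’t trust each other may settle for a safer, worse outcome. A sketch using an illustrative Stag Hunt payoff matrix (the specific payoff numbers are my own invention, not from the post):

```python
# A Stag Hunt: hunting stag together is best jointly, but hunting
# hare is the safe choice if you can't rely on the other player.
# payoffs[(action1, action2)] = (payoff to player 1, payoff to player 2)
payoffs = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}

def coordination_limit(payoffs):
    """The highest total value any joint decision could reach --
    the upper bound that perfect coordination could hit."""
    return max(sum(p) for p in payoffs.values())

print(coordination_limit(payoffs))  # 8, from (stag, stag)
```

Poorly coordinated players can get stuck at (hare, hare) for a total of 6; the gap between where a group actually lands and this upper bound is, roughly, the value left on the table.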
Where is this going?
The thing I am excited about is pushing the coordination frontier forward, towards the limit.
This sequence covers a mixture of meta-coordination principles, and object-level coordination tools. As I post this, I haven’t finished the sequence, nor have I settled on the single-most-important takeaways.
But here are my current guesses for where this is going:
- Most of the value of coordination-experimentation lives in the future. Locally, novel coordination usually costs more than it gains. This has implications for what to optimize for when you’re experimenting. Optimize for long-term learning, and for building up coordination-bubbles where you’ll get to keep reaping the benefits.
- Complex coordination requires either Shared-Understanding-And-Skills, or Simplifying UI (or some combination of the two).
- Misjudging inferential distance, and failing to model theory of mind properly, are particularly common failure modes. People are usually not coordinating based on the same principles as you. This is more true the more you’ve thought about your principles. Adjust your expectations accordingly.
- Lack of reliable reputation systems is a major bottleneck, at multiple scales. (Open Problem #1)
- Another bottleneck is the ability to quickly converge on a coordination-frame. This is tricky because “which coordination frame we use” is a negotiation, often with winners and losers. But I think rationalists often spend more time negotiating over coordination-frames than it’s worth. (Open Problem #2)
- Coordination is very important during a crisis, but it’s hard to apply new principles or depend-on-particular-skills during high stakes crises. This means it’s valuable to establish good policies during non-crisis times (and to make sure to learn from crises that do happen).