Sometimes, groups of humans disagree about what to do. 

We also sometimes disagree about how to decide what to do. 

Sometimes we even disagree about how to decide how to decide.

Among the philosophically unsophisticated, there is a sad, frustrating way this can play out: People resolve "how to decide" with yelling, or bloodshed, or (if you're lucky) charismatic leaders assembling coalitions. This can leave lots of value on the table, or actively destroy value.

Among the extremely philosophically sophisticated, there are different sad, frustrating ways this can play out: People have very well-thought-out principles informing their sense of "how to coordinate well." But, their principles are not the same, and they don't have good meta-principles on when/how to compromise. They spend hours (or years) arguing about how to decide. Or they burn a lot of energy in conflict. Or they end up walking away from what could have been a good deal, if only people were a bit better at communicating.

I’ve gone through multiple iterations on this sequence intro, some optimistic, some pessimistic. 

Optimistic takes include: “I think rationalists are in a rare position to actually figure out good coordination meta-principles, because we are smart, and care, and are in positions where good coordination actually matters. This is exciting, because coordination is basically the most important thing [citation needed]. Anyone with a shot at pushing humanity’s coordination theory and capacity forward should do that.”

Pessimistic takes include: "Geez Louise, rationalists are all philosophical contrarians with weird, extreme, self-architected psychology who are a pain to work with", as well as "Actually, the most important facets of coordination to improve are maybe more like 'slightly better markets' than like 'figuring out how to help oddly specific rationalists get along'."

I started writing this post several years ago because I was annoyed at, like, 6 particular people, many of them smarter and more competent than me, many of whom were explicitly interested in coordination theory, who nonetheless seemed to despair at coordinating with rationalists-in-particular (including each other). The post grew into a sequence. The sequence grew into a sprawling research project. My goal was “provide a good foundation to get rationalists through the Valley of Bad Coordination”. I feel like we’re so close to being able to punch above our weight at coordination and general competence.

I think my actual motivations were sort of unhealthy. “If only I could think better and write really good blogposts, these particular people I’m frustrated with could get along.”

I’m currently in a bit of a pessimistic swing, and do not expect that writing sufficiently good blogposts will fix the things I was originally frustrated by. The people in question (probably) have decent reasons for having different coordination strategies. 

Nonetheless, I think “mild irritation at something not quite working” is pretty good as motivations go. I’ve spent the past few years trying to reconcile the weirdly-specific APIs of different rationalists who each were trying to solve pretty real problems, and who had developed rich, complex worldviews along the way that point towards something important. I feel like I can almost taste the center of some deeper set of principles that unite them.

Since getting invested in this, I've come to suspect "If you want to succeed at coordination, 'incremental improvements on things like markets' is more promising than 'reconcile weird rationalist APIs'." But, frustration with weird rationalist APIs was the thing that got me on this path, and I think I'm just going to see that through to the end.

So. 

Here is this sequence, and here is what the deal is:

Deep Inside Views, and the Coordination Frontier

A common strength of rationalists is having deep inside-view models. Rich, gears-based inside views are often a source of insight, but are hard to communicate about because they are many inferential steps away from common knowledge. 

Normally, that’s kinda fine. If you’re not specifically building a product together, it’s okay if you mostly go off in different directions, think hard-to-explain-thoughts, and only occasionally try to distill your thoughts down into something the median LessWronger can understand.

But it’s trickier when your rich, nuanced worldview is specifically about coordinating with other people.

The Coordination Frontier is my term for "the cutting edge of coordination techniques, which are not obvious to most people." I think it's a useful concept for us to collectively have as we navigate complex new domains in the coming years.

Sometimes you are on the coordination frontier, and unfortunately that means it's either your job to explain a principle to other people, or you have to sadly watch value get destroyed. Often, this is in the middle of a heated conflict, where noticing-what’s-going-on is particularly hard.

Other times, you might think you are on the coordination frontier, but actually you're wrong – your principles are missing something important and aren’t actually an improvement. Maybe you’re just rationalizing things that are convenient for you.

Sometimes, Alice and Bob disagree on principles, but are importantly both somewhat right, and would benefit from somehow integrating their different principles into a coherent decision framework. 

When you are trying to innovate along the coordination frontier, there aren't purely right-or-wrong answers. There are different things you can optimize for. But, I think there are righter and wronger answers. There are principles that constrain what types of coordination solutions are appropriate, given a particular goal. There are failure modes you can fall into, or notice and avoid.

And, if you are a particular agent with a particular set of skills and cognitive bandwidth and time and goals, interacting with other agents with particular goals and resources… 

...then I think there might be a fairly narrow range of theoretically-best answers to the question "how do I coordinate with these people?"

A rationalist failure mode is to get overly attached to the belief that you've found "the right answer." One of the more important meta-coordination principles is "We don't really have time to agree on which of our weird philosophical positions is right, and we need to coordinate anyway."

Nonetheless, I do think there is something important about the fact that “righter answers exist.”

My overall preferred approach is a mixture of pragmatism in the day-to-day, and curious, lawful thinking about the theoretical ideal.

Distinctions near the Frontier

A few people read an earlier draft of this post and were like “Cool, but, I don’t know that I could use ‘Coordination Frontier’ in a sentence.” I think it’s easiest to describe it by contrasting a few neighboring concepts:

  • The Coordination Baseline
  • Coordination Pioneers
  • The Coordination Frontier
  • The Coordination Limit

The Coordination Baseline

AKA “mainstream civilization”

The Coordination Baseline is what most people around you are doing. In your particular city or culture, what principles do people take as obvious? Which norms do they follow? Which systems do they employ? Does a shopkeeper charge everyone a standardized price for an item, or do they haggle with each individual? Do people vote? Can you generally expect people to be honest? When people communicate, does it tend to be Ask Culture or Guess Culture?

Who exactly this is referring to depends on the context of a discussion. It might refer to an entire country, a city, or a particular subculture. But there is at least some critical mass of people who interact with each other, who have baseline expectations for how coordination works.

Coordination Pioneers

Some people explore novel ways of coordinating, beyond the baseline. They develop new systems and schemes and norms – voting systems, auctions, leadership styles, etc. They are Coordination Pioneers.

Sometimes they are solving fully novel problems that have never been solved before – such as inventing a completely new voting system. 

Sometimes they are following the footsteps of others who have already blazed a trail. Perhaps they are reinventing approval voting, not realizing it’s already been discovered. Or, perhaps they read about it, and then get excited about it, and join a political movement to get the new voting system adopted. 

The Coordination Frontier

The upper limit of human knowledge of how to coordinate well.

The coordination frontier is the Pareto frontier of "what coordination strategies we are theoretically capable of implementing."
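
To make the Pareto-frontier framing concrete, here's a toy sketch. The strategy names and scores are entirely made up, and "total value" vs "fairness" is just one arbitrary choice of axes:

```python
# Toy sketch of a coordination frontier. Each candidate strategy gets
# invented scores on two axes: (total value created, fairness).
strategies = {
    "defer to loudest voice": (2, 1),
    "haggle individually":    (3, 2),
    "fixed posted prices":    (6, 5),
    "sealed-bid auction":     (8, 4),
    "quadratic voting":       (7, 7),
}

def pareto_frontier(scored):
    """Keep the strategies that no other strategy beats on every axis."""
    return {
        name: score
        for name, score in scored.items()
        if not any(
            all(o >= s for o, s in zip(other, score)) and other != score
            for other in scored.values()
        )
    }

print(pareto_frontier(strategies))
# {'sealed-bid auction': (8, 4), 'quadratic voting': (7, 7)}
```

Everything below the frontier is strictly dominated: some other known strategy does at least as well on every axis you care about.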

The frontier changes over time. Once upon a time, our best coordination tools were “physical might makes right, and/or vaguely defined exchange of social status.” Then we invented money, and norms like “don’t lie”. 

During the Cold War, the United States and the Soviet Union were suddenly thrown into a novel, dangerous situation where either could lay devastating waste to the other. Game theorists like Thomas Schelling had to develop strategies that incorporated the possibility of mutually assured destruction, where in some ways it was better if both sides had the ability to reliably, inevitably counterattack.

Most people in the world probably didn’t understand the principles underlying MAD at the time, but, somewhere in the world were people who did. (Hopefully, ranking generals and diplomats in the US and Soviet Union). 

The Coordination Limit

The upper limit of what is theoretically possible.

For any given set of agents, in a given situation, with a given amount of time to think and communicate, there are limits on the best joint decisions they could reach. The Coordination Limit is the theoretical upper bound of how much value they could jointly optimize for.

There will be different points along a curve, optimizing for different things. There might be multiple "right answers" for any given optimization target. But I think the set of options for "perfect-ish play" is relatively constrained.

I think it’s useful to track separately “what would N fully informed agents do, if they are perfectly skilled at communicating and decisionmaking”, as well as “given a set of agents who aren’t fully knowledgeable of coordination theory, with limited communication or decisionmaking skills and some muddled history of interaction, what is the space of possible optimization targets they can hit given their starting point?”
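
As a toy illustration of the gap between where agents actually land and the limit, here's a one-shot Prisoner's Dilemma (the payoffs are the standard textbook ones, nothing specific to the claims above):

```python
# The "coordination limit" of a one-shot 2x2 game: the joint outcome
# maximizing total value. Uncoordinated play can fall well short of it.
payoffs = {  # (row action, col action) -> (row payoff, col payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# The limit: the best joint total the two agents could possibly reach.
limit = max(payoffs, key=lambda joint: sum(payoffs[joint]))
print(limit, sum(payoffs[limit]))        # ('C', 'C') 6

# What myopic play reaches: "D" strictly dominates "C" for both players
# here, so without any coordination machinery they land on (D, D).
print(("D", "D"), sum(payoffs[("D", "D")]))  # ('D', 'D') 2
```

Most of coordination theory is about closing that gap: contracts, reputation, repeated play, enforcement, and so on.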

Where is this going?

The thing I am excited about is pushing the coordination frontier forward, towards the limit. 

This sequence covers a mixture of meta-coordination principles, and object-level coordination tools. As I post this, I haven’t finished the sequence, nor have I settled on the single-most-important takeaways.

But here are my current guesses for where this is going:

  1. Most of the value of coordination-experimentation lives in the future. Locally, novel coordination usually costs more than it gains. This has implications for what to optimize for when you're experimenting. Optimize for longterm learning, and for building up coordination-bubbles where you'll get to continue reaping the benefits.
     
  2. Complex coordination requires Shared-Understanding-And-Skills, a Simplifying UI, or some combination of the two.
     
  3. Misjudging inferential distance, and failing to model theory of mind properly, are particularly common failure modes. People are usually not coordinating based on the same principles as you. This is more true the more you’ve thought about your principles. Adjust your expectations accordingly.
     
  4. Lack of reliable reputation systems is a major bottleneck, at multiple scales. (Open Problem #1)
     
  5. Another bottleneck is the ability to quickly converge on a coordination-frame. This is tricky because "which coordination frame we use" is a negotiation, often with winners and losers. But I think rationalists often spend more time negotiating over coordination-frame than it's worth. (Open Problem #2)
     
  6. Coordination is very important during a crisis, but it's hard to apply new principles or depend-on-particular-skills during high-stakes crises. This means it's valuable to establish good policies during non-crisis times (and, make sure to learn from crises that do happen).

Comments

There was recently an episode of the Zero Knowledge Podcast with privacy activist Harry Halpin, who discussed a lot of issues the folks at the frontier of privacy technology were running into. Scientists working on privacy tools like PGP, mixnets, and Tor weren't optimizing for User Experience, and accordingly, many of the communities that would have benefited most from these tools--activists under repressive regimes, for instance--used obviously inferior tools for organizing, because of the technological sophistication barrier to entry. With cryptocurrencies, by contrast, there's been great incentive to suffer the poor UX for the sake of potential gain; whereas activists in repressive regimes, who were not using the technology for the sake of gain, often fell back on sub-optimal, non-private coordination solutions on semi-public platforms like Facebook.

These might be useful case studies, particularly for reinforcing your second point, about UI improving coordination.

Looking forward to the next post!

In my experience, people seem to coordinate ok when they genuinely share the same goals. A lot of friends of mine have mostly shared the goal of 'make six to eight figures on crypto'. Truly enormous amounts of money were loaned on trust. Several deals were made when the price was still unclear (just get me 30K on Biden) and people never asked for receipts. No one was ever stiffed out of their money. As far as I know, there has not been a single serious dispute.

Many of the people involved don't even like each other! And yet huge amounts of money changed hands based on reputation and trust. Many people were willing to help out people they didn't even like at moderate or high risk to themselves (and no upside, just screwed over or neutral). This is very common in crypto. Even at the scale of OTC desks, many things are run on trust. There are definitely scammers but among the 'community' most things are settled in a very high trust environment.

If people really want to, they can coordinate. Most of the time there are actually hugely conflicting goals and people don't want to 'coordinate'.

I will say people working on 'finding the best mtg deck' seem to coordinate really well too. If you don't spend much time in communities with genuinely shared goals it is easy to forget what it looks like!

coordination is basically the most important thing [citation needed]

Citation. Well, sort of. That version of the post was a little shy about calling it the most important thing; the original was more direct about that, but wasn't as good a post.

I'm looking forward to this sequence, it sounds excellent.

Cf. the Epistea Summer Experiment (ESE):


"The central problem [of coordination between rationalists] is that people use beliefs for many purposes - including tracking what is true. But another, practically important purpose is coordination. We think it’s likely that if an aspiring rationalist decides to “stop bullshitting”, they lose some of the social technology often used for successfully coordinating with other people. How exactly does this dynamic affect coordination? Can we do anything about it?"

Also: based on ESE experience, I have some "rich data but small sample size" research about rationalists failing at coordination, in experimental settings. Based on this I don't think rationalists would benefit most e.g. from more advanced and complex S2-level coordination schemes, but more from something like improving "S1/S2" interfaces / getting better at coordination between their S1 and S2 coordination models. (In a somewhat similar way I believe most people's epistemic rationality benefits more from learning things like "noticing confusion" compared to e.g. "learning more from the heuristics and biases literature".)

(Also as a sidenote ... we have developed a few group rationality techniques/exercises for ESE; I'm unlikely to write them up for LW, but if someone would be interested in something like "write things in a legible way based on conversations" I would be happy to spend time on that (it could also likely be paid work).)

I'm not sure whether your sequence will touch on this, but the things that make me hopeful in this space are not techniques and strategies for individuals (which might require training, or willpower, or shared values), but rather suggestions for novel institutions and mechanisms for coordination.

For instance, when you take a public goods problem (like pollution or something), expecting tons of people to agree or negotiate on how to resolve the problem seems utterly intractable, whereas if you could have started with a market design which internalizes such negative externalities, the problem might mostly resolve itself.

Since successful longstanding institutions (like nations) necessarily have a strong bias towards self-preservation, however, I don't really see how most novel mechanisms could possibly be implemented (e.g. charter cities are a great idea, but pretty much all nations are deeply skeptical of them due to highly valuing their sovereignty).

One avenue that seems like it could have a bit more hope is in the cryptocurrency sphere, if only because it's still quite new, plus it's also inherently weird enough that people might not immediately balk at bizarre-sounding concepts like quadratic voting.

For instance, Vitalik Buterin's blog contains many proposed novel coordination mechanisms:

  • Summary of the essay "Alternatives to selling at below-market-clearing prices for achieving fairness (or community sentiment, or fun)": Why do people sell concert tickets below market-clearing prices? This has big negative consequences like incentivizing scalpers, but also some advantages like following some intuitive principles of fairness (e.g. not locking poor people out of the market), as well as more cynical reasons like "products selling out and having long lines creates a perception of popularity and prestige"; etc. So the post suggests a market design that allows selling at mostly market-clearing prices while still preserving e.g. fairness, and concludes with: "In all of these cases, the core of the solution is simple: if you want to be reliably fair to people, then your mechanism should have some input that explicitly measures people. Proof of personhood protocols do this (and if desired can be combined with zero knowledge proofs to ensure privacy). Ergo, we should take the efficiency benefits of market and auction-based pricing, and the egalitarian benefits of proof of personhood mechanics, and combine them together."
  • The essay Moving beyond coin voting governance points out in an aside that cryptocurrencies invest ridiculous sums in network security (proof of work), e.g. here's a chart of spending on proof of work vs. research & development. The difference is that network security was considered a public good during the design of the cryptocurrency protocols, while e.g. research wasn't. So the former gets huge amounts of funding; the latter doesn't. (Though the essay also points out that rewarding R&D explicitly would compromise their independence etc.)
  • Other mechanisms include quadratic voting, which was IIRC also used here on Less Wrong for the 2018 Review; as well as the related concept of quadratic funding.
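
For concreteness, here's a minimal sketch of the two quadratic rules mentioned above, per their standard formulations (the numbers are invented):

```python
import math

# Quadratic voting: casting n votes on one issue costs n^2 voice credits,
# so expressing a stronger preference gets progressively more expensive.
def qv_cost(votes: int) -> int:
    return votes ** 2

print([qv_cost(n) for n in (1, 2, 3, 10)])  # [1, 4, 9, 100]

# Quadratic funding: a project's (pre-normalization) funding level is the
# square of the sum of square roots of contributions, favoring broad support.
def qf_match(contributions: list[float]) -> float:
    return sum(math.sqrt(c) for c in contributions) ** 2

print(qf_match([100.0]))      # 100.0 -- one whale, no amplification
print(qf_match([1.0] * 100))  # 10000.0 -- same total from 100 small donors
```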

To what extent these benefits will actually materialize is of course still an open question, but conceptually, this sounds like the right approach: align incentives, internalize externalities, consider public goods from the start, etc. Try to improve systems, rather than people.

Yeah this is largely what I meant when saying "I've updated towards 'things like somewhat better markets.'" I have been reading Vitalik Buterin and agree that crypto is a pretty interesting testing ground for new coordination schemes. (another motivating example was microcovid.org, which helped streamline covid negotiation for large numbers of people)

Part of my goal of the sequence is to orient a bit on "how rationalists/EAs/longtermists might deliberately experiment with coordination schemes internally as they scale", with part of the hope being that this enables them to perform better.

I think smaller-scale strategies are still important because (at current margins) many groups I know are small enough that scalable-mechanisms usually aren't (immediately) the bottleneck.

I want to experiment with including exercises and/or discussion prompts for this sequence. This post is fairly general, so let's start with just a few:

What coordination problems have you actually run into over the past few years? What seemed to be the underlying cause of them? How tractable were they to solve? If they had been solved, how much value would have been generated? 

(this question seems useful both for checking whether this sequence will actually be practically useful for you, and, if so, giving you some hooks for how to apply later posts)

Have you run into situations where 'meta-coordination-failures' seemed to be the problem? i.e. where multiple people were trying to solve a coordination problem using different approaches? How did those situations go? Were there any unilateral actions you could have taken to help them go better (without relying on other people doing anything different)?

I show care and respect for people by only scheduling things when I know even a 5th percentile outcome has me showing up on time and in a reasonably good mood, at the cost of scheduling fewer things. A very good friend of mine shows care and respect for people by squeezing them in when she doesn't really have time, so she's often late and kind of frazzled. We never solved this, and we could never agree on how to solve it, in part because that required agreeing on a time to do so.  

Some of the difficulty is baggage from when we were worse at things and I think if I ran into the same problem with a new person I'd do somewhat better, but in my heart I still kind of believe the answer is "you stop being wrong".

I attended a low tier university, after having left a higher tier university because of mental health issues. 

I consistently struggled to find peers who were interested in studying the things I was interested in, or simply learning for learning's sake. I was aware that my program of supplementary self education would have benefited from finding peers, though I never successfully found peers to study with.

There's one: the coordination problem of discovering peers. This seems broadly improved by the existence of an internet, examples in this forum, and in subcommunities like reddit, but I'm continually uncertain how to use those tools to meet people. So there's a second coordination problem: how to use the tools.

Agreed. I wish I'd found this community like 3 years earlier (~2014), it could've changed the course of my life. Note that aspiring rationalists or "sanepunks" remain in short supply; I just hosted an ACX meetup in a city of 1.2 million, and no one showed up.

Coordination problems

As soon as I started reading this, the topic of automated epistemic coordination came to mind. So, I spend a lot of time on the ACX forums. And traditionally we've all independently tried to figure out the truth and then maybe we wander over to ACX where we communicate our findings with each other mostly from memory, without references, in a non-searchable (Google ignores it) database of comments sorted chronologically. There is no voting or reputation system there either.

It's an inefficient way to learn and an awful filing system. LW is a little better, but not much, and more limited in scope than ACX. So I've been thinking there should be an "evidence clearinghouse" website for recording a massive hierarchy (directed acyclic graph) of claims, counterclaims and the evidence for each. It would include attributes of StackOverflow (voting & reputation system, with collaborative and competitive aspects) and Wikipedia (a hyperlinked web of information with academic and non-academic references).

I envision that larger claims ("humans are responsible for the increase in CO2 concentration in the atmosphere over the last 100 years") can be built out of smaller claims ("Law of conservation of mass" + "Human CO2 emissions are greater than the rate of atmospheric increase") which themselves can be built out of even smaller claims ("Estimates of annual human CO2 emissions" + "Rate of atmospheric increase / keeling curve"). And then, importantly, the reputation of smaller claims contributes to larger claims provided that users judge the logic as sound. Also, negative reputation in subclaims drags down the credibility of claims that use them. And obviously, voting needs to be more sophisticated than just "up" and "down". (and surely some sort of Bayesian math should be in there somewhere.)
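
To make the shape of that concrete, here's a minimal sketch; the min() scoring rule and all the numbers are placeholders I picked to illustrate the structure, not a proposal for the actual (presumably Bayesian) aggregation:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    own_score: float  # direct votes/evidence for this claim, in [0, 1]
    subclaims: list["Claim"] = field(default_factory=list)

    def credibility(self) -> float:
        # Placeholder rule: a claim is only as credible as its own score
        # or its weakest supporting subclaim, whichever is lower.
        if not self.subclaims:
            return self.own_score
        return min(self.own_score,
                   min(c.credibility() for c in self.subclaims))

emissions = Claim("Estimates of annual human CO2 emissions", 0.9)
keeling = Claim("Rate of atmospheric increase (Keeling curve)", 0.95)
net_source = Claim("Human emissions exceed the atmospheric increase", 0.85,
                   [emissions, keeling])
conservation = Claim("Law of conservation of mass", 0.99)
top = Claim("Humans are responsible for the CO2 increase", 0.9,
            [conservation, net_source])

print(top.credibility())  # 0.85 -- dragged down by the weakest subclaim
```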

Anyway, there's lots of details to work out and I have neither money nor time to build it (yet), but I do want to highlight the value of automated coordination algorithms. Systems like this could also nudge non-rationalists to coordinate with each other too, just by using a web site. And that's a big deal!

Less important: I've been trying to work out how to build an open-source community for a decade or so, and not only has it not worked, it's really rare even to find someone who understands or cares about any of the goals. It's weird, because the problem seems almost obvious to me. I can't even tell if what I'm bad at is solving coordination problems, or advertising, or communication, or if nobody has time to write software for free these days.

Meta-coordination:

Well, I talked to a guy on Reddit about that web site idea. He had a similar idea but different, described it, then I said that overall I preferred my version of the idea, and... no response; the discussion ended right then and there. We are so bad at this.

A question I’m interested in, before I get into various specific posts in the sequence: from just the descriptions in this post, does the concept of ‘the coordination frontier’ feel actually helpful?

(The next couple already-written posts don’t super depend on the concept, but it felt very central to my own thinking. Curious how relevant it intuitively feels to others)

I also vote for very intuitive. The pareto frontier analogy is crunchy enough to come to grips with it, but giving it its own name is sufficiently imprecise as to not keep us stuck in game theory or otherwise hamstrung by artificial narrowness.

Very intuitive, but perhaps I’m unusual in how much I think about pareto frontiers. (I mean, obviously I‘m unusual in that, but the question is how much I’m unusual relative to your target audience.)

Glad to hear. Interestingly, originally I didn't actually have "coordination frontier == Pareto frontier of coordination" ironed out. Instead I was using coordination frontier as a vague metaphor, which included "coordination tools a little bit outside what your current culture uses" and "the cutting edge of human knowledge."

I became worried both that I (personally) was equivocating between those two definitions, and also that people might organically conflate it with "Pareto frontier". The best solution seemed to be to formally define it as the Pareto frontier, and then come up with another term for "somewhat nonstandard/novel coordination tech". (I ended up using "coordination pioneering" for that, which I'm worried is still a bit confusing.)

Tentatively excited to read the rest of the sequence, though I think I would have gotten more out of this if I knew more about what your motivating examples of rationalists failing to coordinate are like. Would be interesting to hear about some examples if any are not too private/fraught to share.

Curated. This post crisply and cleanly points to a real problem and it introduces terminology which if it takes off (decent chance, given Raemon's record of introducing terminology), might actually let us make progress on the problem. I'm excited to see what comes of this post. And I'm glad that we can all reference it now.

Elinor Ostrom's work on collective management of common pool resources doesn't get enough credit.

Transaction and monitoring costs and capability are critical, but are usually handwaved; there's no mention at all in the OP of the importance of building an arrangement that is easily monitored and tested by all participants, even though in real world case studies, the presence or absence of that factor is often the difference between success and failure.

Rereading this 2 years later, I'm still legit-unsure about how much it matters. I still think coordination capacity is one of the most important things for a society or for an organization. Coordination Capital is one of my few viable contenders for a resource that might solve x-risk.

The questions here, IMO, are:

  • Is coordination capacity a major bottleneck?
  • Are novel coordination schemes an important way to reduce that bottleneck, or just a shiny distraction? (i.e. maybe there's just a bunch of obvious wisdom we should be following, and if we just did a good job following it that'd be sufficient)
  • Is the problem of "coordination innovators bumping into each other in frustrating ways?" an important bottleneck on innovating novel coordination schemes?

Examples

To help me think about that, here are some things that have happened in the past couple years since writing this, that feel relevant:

A bunch of shared offices have been cropping up in the past couple years. Lightcone and Constellation were both founded in Berkeley. The offices remove a lot of the barriers to forming collaborations or hashing out disagreements. (I count this as "doing the obvious things", not "coordination innovation").

Impact Equity Trade. Lightcone and Constellation attempted some collaborations and negotiations over who would have ownership over some limited real estate. Some complex disagreements ensued. Eventually it was negotiated that Constellation would give .75% of its Impact Equity to Lightcone, as a way for everyone to agree "okay, we can move on from this dispute feeling things were handled reasonably well." (This definitely counts as "weird coordination innovation".)

Prediction markets, and related forecasting aggregators, feel a lot more real to me now than they did in 2021. When Russia was escalating in Ukraine and a lot of people were worried about nuclear war, it was extremely helpful to have Metaculus, Manifold, and Polymarket all hosting predictions on whether Russia would launch a tactical nuke. Habryka whipped up didrussialaunchnukesyet.discordius.repl.co (which at the time was saying "9%" and now says "0-1%"). This also feels like an example of weird coordination innovation helping.

I've had fewer annoying coordination fights. I think since around the time of this post, the people who were bumping into each other a lot mostly just... stopped. Mostly by sort of retreating, and engaging with each other less frequently. This feels sad. But, I've still successfully worked together with many Coordination Pioneers on smaller, scoped projects.

The Lightcone team's internal coordination has developed. Fleshing out the details here feels like a whole extra task, but, I do think Lightcone succeeds at being a high-trust team that punches above its weight at coordination.

Within Lightcone, I've had the specific experience of getting mad at someone for coordinating wrong, and remembering "oh right I wrote a sequence about how this was dumb", which... helped at least a little.

There are still some annoying coordination fights about AI strategy, EA strategy, epistemics, etc. This isn't so much a "coordination frontier" problem as a "coordination" problem (i.e. people want different things and have different beliefs about what strategies will get them the things they want).

Negotiations during the pandemic. This was a primary instigator for this sequence. See Coordination Skills I Wish I Had For the Pandemic as a general writeup of coordination skills useful in real life. I list:

  • Knowing What I Value
  • Negotiating under stress
  • Grieving
  • Calibration
  • Numerical-Emotional Literacy / Scope Sensitivity
  • Turning Sacred Values into Trades

I think those are all skills that are somewhat available in the population-at-large, but not super common. "Knowing what I value" and "grieving" I think both benefit from introspection skill. Calibration and Scope Sensitivity require numerical skills. Turning Sacred Values into Trades kinda depends on all the other skills as building blocks.

Microcovid happened. I think microcovid had already taken off by the time I wrote this post, but I came to appreciate it more as a coordination tool. I think it required having a number of the aforementioned skills latent in the rationality community.
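
The core coordination trick, as I understand it, was pricing activities in a shared unit against an agreed budget. A heavily simplified sketch with made-up numbers (the real microcovid.org calculator has many more inputs):

```python
# Illustrative only: prices are invented, not from the actual calculator.
WEEKLY_BUDGET = 200  # hypothetical household budget, in microCOVIDs
                     # (1 microCOVID = a one-in-a-million chance of infection)

activity_prices = {
    "masked grocery run":  10,
    "outdoor hangout":     15,
    "indoor dinner party": 300,
}

planned = ["masked grocery run", "outdoor hangout"]
spent = sum(activity_prices[a] for a in planned)
print(f"spent {spent} of {WEEKLY_BUDGET}")  # spent 25 of 200

# The shared unit turns a fraught negotiation into simple arithmetic:
print(activity_prices["indoor dinner party"] <= WEEKLY_BUDGET - spent)  # False
```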

Evan posted on AI coordination needs clear wins. This didn't go anywhere AFAICT, but I do think it's a promising direction. It seems like "business as usual coordination."

The S-Process exists. (This was, to be fair, already true when this post was posted.) The S-Process is (at face value) a tool for high-fidelity negotiation about how to allocate grant money. In practice I'm not sure if it's more than a complex game that you can use to get groups of smart people to exchange worldmodels about what's important to fund and strategize about. It's pretty good at this goal. I think it has aspirations of having cool mechanism designs that are more directly helpful, but I'm not sure when/how those are gonna play out. (See Zvi's writeup of what it was like to participate in the current system.)

The FTX Regranting Program was tried. Despite the bad things FTX did, and even despite some chaos I'm worried the FTX Regranting Program caused, it sure was an experiment in how to scale grantmaking, which I think was worth trying. This also feels like a whole post.

I made simpler voting UI for the LessWrong Review Quadratic Voting. (Also, I've experimented with quadratic voting in other contexts.) I feel like I've gotten a better handle on how to distill a complex mechanism-design under-the-hood into a simple UI.

On a related note, creating the Quick Review Page was also a good experiment in distilling a complex cognitive operation into something more scalable.

Okay, so now what?

Man, I dunno, I'm running out of steam at the moment. I think my overall take is "experimenting in coordination is still obviously quite good", and "the solution to 'the coordination frontier paradox' is something like 'idk chill out a bit?'".

Will maybe have more thoughts after I've digested this a bit.

Sometimes you are on the coordination frontier, and unfortunately that means it's either your job to explain a principle to other people, or you have to sadly watch value get destroyed. Often, this is in the middle of a heated conflict...

I'm not really following either of these sentences. It sounds like "when you are on the frontier, and fail to explain a principle, value gets destroyed", but that doesn't really match the earlier definition of "Coordination Frontier". Could you maybe reword this, and give an example or two? "Heated conflict" sounds exciting. Definitely give an example of that.

Other times, you might think you are on the coordination frontier, but actually you're wrong – your principles are missing something important and aren’t actually an improvement. Maybe you’re just rationalizing things that are convenient for you.

This also needs an example. In fact, I will request examples everywhere. Human communication, and human thought itself, generally need examples to work.

This seems to be in a similar category to Game Theory, but perhaps you're placing an emphasis on cooperation-first, whereas my impression of Game Theory is that cooperation is a strategy that exists within a wider context of optimization for the outcome of 'my' side.

How would Coordination Schemes surpass the strategies that have already been found and explored by Game Theory? In part of your piece it seems like you would like to reach a place where all sides have perfect knowledge of how to negotiate/coordinate for maximum value to all sides. Many times there isn't an environment of perfect cooperation, and thus perfect knowledge isn't feasible. What say ye?

Very interested in this, especially looking out for how to balance or resolve trade-offs between high inner coordination (people agree fast and completely on actions and/or beliefs) and high "outer" coordination (with reality, i.e. converging fast and strongly on the right things), aka how to avoid echo-chambers/groupthink without devolving into bickering and splintering into factions.