In light of a recent post and comment, and several months of thinking, I have come to the position that one of our (humanity's) biggest problems is that we suck at precise coordination at every level.
This is not very specifically defined but I am trying to gesture at a problem area I think is super important. Some thoughts to convey my intuition here:
Broadly, I think there are two cases of problems with coordination:
- Two people/groups genuinely agree to honest, rigorous exchange of information, but can't effectively coordinate.
- Someone is withholding information or doesn't really want to coordinate in the first place.
I think the first problem is workable, and if improved sufficiently, makes progress on the second problem by clearly exposing parties that are avoiding productive exchange.
Hopefully I'm making this line of thought clear enough. Key points:
I am interested in situating my thinking better here. Who is working on this sort of thing? I know TsviBT has explored improvements to debate, Richard Ngo / Samo Burja are exploring broader political manifestations, Forethought has published adjacent work. Is there anything I'm missing? Very interested in contributing here and think it's a clear place where the ball is being dropped.
Broadly, I think there are two cases of problems with coordination:
- Two people/groups genuinely agree to honest, rigorous exchange of information, but can't effectively coordinate.
- Someone is withholding information or doesn't really want to coordinate in the first place.
This does not comprehensively cover all coordination problems. I wouldn't actually call "doesn't want to coordinate in the first place" a coordination problem; but granting that label, you would probably also call it a coordination problem when two people communicate in a very incompetent/non-reflective/non-self-conscious manner, constantly getting annoyed and retaliating because the other is not doing what they think they should be doing, even though they've never taken the time to communicate it clearly. They want to coordinate, and they're not (intentionally[1]) withholding information, but there is no mutual agreement to "honest, rigorous exchange of information", because they have a skill issue.
Did you mean to partition it into something like: (1) coordination-relevant information flows properly between the parties, but the parties cannot properly act upon that information (for whatever reason: skill issues, intelligence issues, the-situation-sucks-and-we-realistically-can't-do-much issues); (2) coordination-relevant information doesn't flow properly, so even parties who could coordinate if informed can't, because they're not informed?
but I think the word "withhold" connotes some significant amount of intention regardless
There are different subproblems when it comes to coordination. One is a general problem of media. If the average effective altruist spent a good chunk of their donation budget on Substack subscriptions to journalists they believe are very valuable for the public conversation, this might be better than donating to big causes that billionaires can finance effectively without corrupting the cause.
As a society we do need some people to put a lot of intellectual effort into research and thinking, and it would be great if that were a higher priority for those who want to engage in Effective Altruism.
When it comes to your analysis of two groups, you miss the scout vs. soldier mindset dynamic that Julia Galef describes. For the issues that are really important to us, we are usually in soldier mindset even if we make a decision to agree to honest, rigorous exchange of information.
Maybe because most users here are already conscientious and strongly value truth-seeking, which makes improvement seem less necessary.
That's not the case. CFAR was funded to help people reason better and largely failed at that. It's a problem that matters, but it's not an easy problem to solve.
Certainly agree with your point about donating to Substacks / journalists. Could be very impactful to have a writeup of that somewhere here or on the EA forum.
I'm familiar with Galef's ideas; I would place "soldiers" in category 2. But yes, the distinction is very subtle and I did not specify it well enough.
I believe that sufficiently well designed UI for navigating debates/arguments/discussions can make it very difficult for people to disguise soldier mindsets via obfuscated (intentional or unintentional) communication and reasoning.
Imagine, for example:
This could be seen as an enhanced version of "community notes" aimed at situating shallow, under-researched takes within a larger "map of human thought."
Whether this can scale and outcompete current systems is unknown, but it genuinely seems promising for enhancing public discourse, and like a step in the right direction.
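To make the "map of human thought" idea a bit more concrete, here is a minimal toy sketch of how such a debate map might be represented. Everything here is my own illustration, not anything from an existing system: claims are nodes, edges either support or attack a target claim, and a claim whose attacker is itself unrebutted is surfaced as having an unanswered objection; the idea being that obfuscated soldier-mindset moves become visible as unaddressed attacks.

```python
from dataclasses import dataclass

# Hypothetical toy debate map: claims are nodes; each edge supports or
# attacks a target claim. A claim has an "undefeated" attack against it
# if the attacking claim is itself never attacked.

@dataclass(frozen=True)
class Edge:
    source: str  # id of the attacking/supporting claim
    target: str  # id of the claim being addressed
    kind: str    # "supports" or "attacks"

def undefeated_attacks(edges: list[Edge]) -> list[str]:
    """Return ids of claims facing an attack whose attacker has no rebuttal."""
    attacks = [(e.target, e.source) for e in edges if e.kind == "attacks"]
    attacked_ids = {target for target, _ in attacks}
    # keep targets whose attacker is itself unchallenged
    return [target for target, attacker in attacks if attacker not in attacked_ids]

# c1: "Debate UIs can expose weak arguments."
# c2: "People will just ignore the UI."          (attacks c1)
# c3: "Community-notes-style incentives help."    (attacks c2)
edges = [
    Edge("c2", "c1", "attacks"),
    Edge("c3", "c2", "attacks"),
]
print(undefeated_attacks(edges))  # → ['c2']: c2 is rebutted by c3, which stands unchallenged
```

In a real system the interesting work is in incentives, moderation, and UI, not this graph traversal; the point is only that "which objections are currently unanswered?" is a cheap query once arguments live in an explicit structure rather than a comment thread.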
Appreciate the comment, was helpful in clarifying my thoughts.
I'm familiar with Galef's ideas; I would place "soldiers" in category 2.
And you would be wrong about that. Soldier mindset comes from caring about the outcome going a certain way. You can agree to be honest and rigorous but that doesn't mean that you leave the soldier mindset because of that.
Probably, you are in soldier mindset yourself about this very issue. You have an idea of how debate should work, and then a motivation to have the details fit neatly and be resolvable just by specifying them better.
I think the disagreement stems from a lack of specificity on my part; ignore the specific description of the categories.
Probably, you are in soldier mindset yourself about this very issue.
I hold beliefs on it, sure. I am now interested in seeing if they reflect reality, and learning why/why not. Is this mindset inadequate, and what would make it more rational?
Separately - do you think there is promise in tools of the type I describe to combat soldier mindset at scale? I will definitely be reading into some of the CFAR resources, just curious to hear from you.
Cf. https://tsvibt.blogspot.com/2023/01/hyperphone.html Could be nearby to some key software for interfacing deeply between minds. Someone might be able to code up a reasonable version in a few weeks using gippities, though that could fall prey to lock-in to wrong architecture choices. (I imagine, in particular, that handling audio in nonstandard ways might be an important early architectural choice. But maybe it's fine.)
Hmm, this does look interesting but I hadn't really considered depth of 1-on-1 communication as a significant bottleneck. I also think the concept slightly falls apart when the users are not already knowledgeable / quick thinking / good at rigorous communication, as I'd guess there would be a steep learning curve.
Software wise, I think I'm aiming closer to better debates and debate tools, mainly because these things could be made public-facing and seem like an obvious use case for even present-day LLMs.
If you have any further thoughts regarding implementation of those things, I'd be eager to know. I'll be trying to make my plans more specific.
1-on-1 communication as a significant bottleneck.
I think it's one bottleneck among others. What happens with two people with strong opinions / lots of knowledge / different backgrounds is that they fail to converge at all, often even on reference / ontology / communication, let alone actual facts / propositions / principles / plans. Because of this, there's often no "ultimate recourse"; there's ~no amount of research and thinking and problem solving that you can do in your head, such that you can then go out and spread the truth by just saying "take me to your leader" and then convincing the leader with facts and logic.
I also think the concept slightly falls apart when the users are not already knowledgeable / quick thinking / good at rigorous communication
Plausible, though I'm genuinely unsure, and would guess not. That is, I'd guess there are several kinds of uses for a hyperphone, and some of them would work across various gaps. For example, it's plausible to me that speaking through a hyperphone would be my preferred way both of teaching and of learning something.
I'd guess there would be a steep learning curve.
There'd be a very steep learning curve at the beginning of development. It's plausible that you could work things out so that there's not that much learning curve after development. E.g. async text conversations are quite confusing if you're stuck in the framework of normal live speech, and they do add some important kinds of friction, but mostly people get along.
Software wise, I think I'm aiming closer to better debates and debate tools, mainly because these things could be made public-facing and seem like an obvious use case for even present-day LLMs.
Possibly. One issue would be that debates are a bit more tense / higher stakes / constrained on average, I think, compared to 1 on 1 collaborative convos. For that reason, specifically for iterating on software, hyperphone convos might be easier to develop.
If you have any further thoughts regarding implementation of those things, I'd be eager to know. I'll be trying to make my plans more specific.
If someone gets deep into this, I'd be happy to be one volunteer consultant among others about directions / ideas / considerations.
One strong recommendation I'd make is that, if you're somewhat serious about it, GET DATA ASAP. E.g. set up some debates. About anything, between anyone who wants to debate. See what happens and see if there's something truly hopeful there for you / the world, and then try to nourish that hopeworthy something. I wouldn't begrudge anyone the a priori dreaming, which is more fun and can also be a very important ingredient in making cool stuff, but my guess would be that it wouldn't go anywhere unless you feel hungry for data. You have to ask Thinking what it needs, not tell Thinking what it needs.
Lately I've been most concerned about coordination for governance, and having the ability to nicely merge values from lots of people to determine what policies a democratic government should adopt. I think current methods of voting leave a lot to be desired, and it should be possible to use AI to help people build personal models of their preferences that can be diffed/merged with other preferences, enabling easy creation of coalitions or identifying points of agreement. I think Audrey Tang's post on Plurality is pretty useful for getting an idea of how similar things could work.
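As a rough illustration of what "diffable/mergeable" preference models might mean (my own toy construction, not anything from Plurality or an existing voting system): represent each person's preferences as a map from policy to a stance in [-1, 1], where diffing surfaces live disagreements and merging extracts policies everyone's stance agrees on as coalition planks.

```python
# Toy preference models: policy -> stance in [-1, 1].
# All names and thresholds here are made up for illustration.

def diff(a: dict[str, float], b: dict[str, float], threshold: float = 1.0) -> dict:
    """Policies where two people's stances differ by more than `threshold`."""
    return {p: (a[p], b[p]) for p in a.keys() & b.keys()
            if abs(a[p] - b[p]) > threshold}

def merge(models: list[dict[str, float]], agreement: float = 0.5) -> dict:
    """Policies where every stance is on the same side and reasonably strong;
    the merged stance is the group average."""
    shared = set.intersection(*(set(m) for m in models))
    merged = {}
    for p in shared:
        stances = [m[p] for m in models]
        if min(stances) >= agreement or max(stances) <= -agreement:
            merged[p] = sum(stances) / len(stances)
    return merged

alice = {"carbon_tax": 0.9, "zoning_reform": 0.7, "tariffs": -0.8}
bob   = {"carbon_tax": 0.6, "zoning_reform": -0.9, "tariffs": -0.7}

print(diff(alice, bob))     # zoning_reform is the live disagreement
print(merge([alice, bob]))  # carbon_tax and tariffs are shared planks
```

The real design questions (how preferences get elicited, how intensity is weighted, how to resist strategic misreporting) are exactly what mechanism design and social choice theory study; this sketch only shows that once preferences are explicit data, coalition-finding becomes a simple computation.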
There are a lot of other dimensions for coordination though, getting accurate information from distorted sources, figuring out how to align representatives, better voting, etc. Mechanism design and social choice theory are good places to look for academic research on this stuff.