Not sure to what extent this falls under “coordination tech”, but are you familiar with work in collective intelligence? This article has some examples of existing work and future directions: https://www.wired.com/story/collective-intelligence-democracy/. Notably, it covers enhancements in expressing preferences (quadratic voting), prediction (prediction markets), representation (liquid democracy), consensus in groups (Polis), and aggregating knowledge (Wikipedia).
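For concreteness, here's a minimal sketch of the quadratic voting rule (casting v votes costs v² credits, so strong preferences are expressible but increasingly expensive). The credit budget and function names are illustrative assumptions, not any deployed system's implementation:

```python
# Minimal quadratic voting sketch: each voter has a budget of "voice credits",
# and casting v votes on a proposal costs v**2 credits.
from collections import defaultdict

CREDIT_BUDGET = 100  # credits per voter; an assumed parameter

def qv_tally(ballots: dict[str, dict[str, int]]) -> dict[str, int]:
    """ballots maps voter -> {proposal: votes (may be negative)}."""
    totals: dict[str, int] = defaultdict(int)
    for voter, votes in ballots.items():
        cost = sum(v * v for v in votes.values())  # the quadratic cost rule
        if cost > CREDIT_BUDGET:
            raise ValueError(f"{voter} spent {cost} credits (budget {CREDIT_BUDGET})")
        for proposal, v in votes.items():
            totals[proposal] += v
    return dict(totals)

print(qv_tally({
    "alice": {"fund-x": 9, "fund-y": -3},  # costs 81 + 9 = 90 credits
    "bob":   {"fund-x": -5, "fund-y": 5},  # costs 25 + 25 = 50 credits
}))  # -> {'fund-x': 4, 'fund-y': 2}
```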
As you reference above, there’s non-AI collective action tech: https://foresight.org/a-simple-secure-coordination-platform-for-collective-action/
In the area of cognitive architectures, the open agency proposals contain governance tech, like Drexler’s original Open Agency model (https://www.lesswrong.com/posts/5hApNw5f7uG8RXxGS/the-open-agency-model), Davidad’s dramatically more complex Open Agency Architecture (https://www.lesswrong.com/posts/jRf4WENQnhssCb6mJ/davidad-s-bold-plan-for-alignment-an-in-depth-explanation), and the recently proposed Gaia Network (https://www.lesswrong.com/posts/AKBkDNeFLZxaMqjQG/gaia-network-a-practical-incremental-pathway-to-open-agency).
The main way I look at this is that software can greatly boost collective intelligence (CI), and one part of collective intelligence is coordination. Collective intelligence seems really underexplored, and I think there are very promising ways to improve it. More on my plan for CI + AGI is here, if of interest: https://www.web10.ai/p/web-10-in-under-10-minutes
While I think CI can be useful for things like AI governance, I think collective intelligence is actually very relevant to AI safety in the context of a cognitive architecture (CA). CI can be used to federate responsibilities within a cognitive architecture, including AI systems reviewing other AI systems as you mention. It can also be used to enhance human control and participation in a CA: allowing humans to set the goals of a cognitive architecture–based system, perform the thinking and acting in a CA, and participate in the oversight and evaluation of both the granular and high-level operation of a CA. I write more on the safety aspects here if you’re interested: https://www.lesswrong.com/posts/caeXurgTwKDpSG4Nh/safety-first-agents-architectures-are-a-promising-path-to
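As a toy illustration of those human control points (not a rendering of any of the linked architectures), here's a sketch where an AI planner proposes, a human gate reviews, and only approved plans execute. All component names are hypothetical:

```python
# Illustrative "human in the loop" cognitive-architecture skeleton:
# AI components propose, a human gate approves, an executor acts.
from dataclasses import dataclass

@dataclass
class Proposal:
    goal: str
    plan: list[str]
    approved: bool = False

class HumanGate:
    """Stand-in for a human (or committee) reviewing each proposal."""
    def review(self, proposal: Proposal) -> Proposal:
        print(f"Reviewing plan for goal: {proposal.goal!r}")
        # A real system would block on an actual human decision here;
        # this stub auto-approves any plan with no flagged steps.
        proposal.approved = all("irreversible" not in step for step in proposal.plan)
        return proposal

def run(goal: str, planner, gate: HumanGate, executor) -> None:
    proposal = gate.review(Proposal(goal=goal, plan=planner(goal)))
    if proposal.approved:
        executor(proposal.plan)
    else:
        print("Plan rejected; returning to planning.")

# Toy planner and executor for demonstration only.
run(
    goal="summarize open agency proposals",
    planner=lambda g: [f"gather sources on {g}", "draft summary", "flag uncertain claims"],
    gate=HumanGate(),
    executor=lambda plan: [print("executing:", step) for step in plan],
)
```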
In my view, the optimal approach is to integrate CI and AI in the same federated cognitive architecture, but CI systems can themselves be superintelligent, and that could be useful for developing and working with safe artificial superintelligence (including using AI to help with primarily human-orchestrated CI, which blurs the line between CI and a combined human–AI cognitive architecture).
I see certain AI developments as boosting the same underlying tech required for next-level collective intelligence (modeling reasoning, for example, which would fall under symbolic AI) and as augmenting collective intelligence directly (e.g., helping to identify areas of consensus in a more automated manner, as in https://ai.objectives.institute/talk-to-the-city).
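For instance, here's a rough sketch of Polis-style consensus identification: cluster participants by their votes on statements, then surface the statements endorsed by every cluster rather than just a majority. The data, clustering choice (k-means), and threshold are all assumptions for illustration:

```python
# Polis-style consensus finding sketch: cluster participants by their votes,
# then surface statements that score highly within *every* opinion cluster.
import numpy as np
from sklearn.cluster import KMeans

statements = ["S0: fund more evals", "S1: pause scaling", "S2: share safety research"]
# Rows = participants, cols = statements; 1 = agree, -1 = disagree, 0 = pass.
votes = np.array([
    [ 1,  1,  1],
    [ 1,  1,  1],
    [ 1, -1,  1],
    [ 1, -1,  1],
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)
for i, statement in enumerate(statements):
    # Mean agreement within each opinion cluster.
    per_cluster = [votes[labels == c, i].mean() for c in np.unique(labels)]
    if min(per_cluster) > 0.5:  # endorsed by every cluster, an assumed threshold
        print("consensus:", statement, per_cluster)
```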
I think many examples of AI engagement in CI and CA boil down to translating information from humans into various forms of unstructured, semi-structured, and structured data (my preference is for the latter, which I view as pretty crucial in next-gen cognitive architecture and CI systems). That data is then used to perform many functions, from identifying each person’s preferences and existing beliefs to planning and conducting evaluations.
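A minimal sketch of that unstructured-to-structured translation, with an assumed schema and a keyword stub standing in for what would really be an LLM extraction call:

```python
# Sketch of the unstructured -> structured pipeline: free text from a person
# is mapped into a typed record that downstream planning/evaluation functions
# can consume. The schema and extraction stub are assumptions.
from dataclasses import dataclass

@dataclass
class ParticipantRecord:
    person: str
    preferences: dict[str, float]  # topic -> strength in [-1, 1]
    beliefs: dict[str, float]      # claim -> credence in [0, 1]

def extract_structured(person: str, text: str) -> ParticipantRecord:
    """Stand-in for an LLM call that parses free text into the schema.
    Keyword matching keeps the sketch runnable without any API."""
    prefs = {"interpretability": 0.8} if "interpretability" in text else {}
    beliefs = {"scaling continues": 0.7} if "scaling" in text else {}
    return ParticipantRecord(person, prefs, beliefs)

record = extract_structured("ada", "I care most about interpretability; I expect scaling to continue.")
print(record)
```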
Love the thoughts so far from jacobjacob and Brendon Wong. A small addition: a good representation of your preference state space across research topics could help you find and connect with potential collaborators. Similarly, a state space over skills, interests, and needs/desires could be used to find potential business deals, contractors, employers, etc.
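A toy sketch of what that matching could look like: represent each person as a vector of topic weights (all data made up) and rank potential collaborators by cosine similarity:

```python
# Matching over a shared "preference state space": each person is a vector of
# topic weights; candidates are ranked by cosine similarity. Toy data only.
import numpy as np

topics = ["interpretability", "governance", "forecasting"]
people = {
    "ada":   np.array([0.9, 0.1, 0.3]),
    "grace": np.array([0.8, 0.2, 0.1]),
    "alan":  np.array([0.0, 0.9, 0.7]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

me = people["ada"]
matches = sorted(
    ((cosine(me, vec), name) for name, vec in people.items() if name != "ada"),
    reverse=True,
)
print(matches)  # grace ranks above alan as a match for ada
```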
I'm currently thinking through the tech tree of developments in human coordination that would increase the probability that the development of superintelligent systems (in some or all domains) goes well.
Just to start off the brainstorming, here are some initial thoughts:
It is an open question which, if any, coordination tech we will be able to build with the level of AI available at different points in the future, and which of it will arrive before it's too late to be useful.
It's also an open question whether there's any specific tech that will be more helpful than just the normal bitter lesson of "GPT-n+1 doing with a good prompt what GPT-n did with all kinds of bells and whistles".
Nonetheless, whether it be nation-state-level agreements, inter-lab agreements, or just teams within a lab deciding what to work on, the potential upside from improved human coordination is pretty massive.
So, I'm curious for people's thoughts:
What are potential ways AI could be used to significantly improve human coordination, between now and the leadup to superintelligence?
(Sidenote 1: I'll also accept answers for non-AI coordination tech that would still serve as an important enabler here, such as "something something better commitment mechanisms")
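On commitment mechanisms specifically, here's a toy hash-based commit-reveal sketch (omitting identity binding, deadlines, and other concerns a real protocol would need):

```python
# Commit-reveal sketch: publish a hash of (salt + value) now, reveal the value
# and salt later; anyone can verify the commitment wasn't changed.
import hashlib
import secrets

def commit(value: str) -> tuple[str, str]:
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return digest, salt  # publish digest now, keep salt secret until reveal

def verify(digest: str, salt: str, value: str) -> bool:
    return hashlib.sha256((salt + value).encode()).hexdigest() == digest

digest, salt = commit("we will pause training above the agreed compute threshold")
assert verify(digest, salt, "we will pause training above the agreed compute threshold")
assert not verify(digest, salt, "we will not pause")
```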
(Sidenote 2: My interest in this is more from the angle of a potential founder, than a researcher: I want to know if there are any potentially extremely promising technologies that I or someone else should try building)