Mod note: this post violates our LLM Writing Policy for LessWrong and was incorrectly approved, so I have delisted the post to make it only accessible via link. I've not returned it to your drafts, because that would make the comments hard to access.
@Information Project, please don't post more direct LLM output, or we'll remove your posting permissions.
I think there's another dimension we're not exploring here, and I hope it provides an angle for solving the problem: the size of the community. I very much agree the problem is intractable as you describe it if we treat the social space as a single unit, but we can perhaps construct a Darwinian path of incremental improvements if we start local and gradually work our way up: from 1:1, through small groups, larger groups, and small communities, to larger communities and maybe (finally?) the sociosphere.
What's perhaps missing on the implementation side is a semantics for merging. What does it mean for one community to merge with another? Right now that question is either very hairy or unanswerable. My wish is that we create something with enough traction at the bottom and a means for collectives to join up voluntarily.
We already know how to build amazing systems: private medical data sharing, reliable truth-checking tools, fair collective decision-making platforms. Good designs exist on GitHub and in papers. Neural networks can generate even more in minutes.
…yet almost none of them are actually used by millions of people.
The reason is simple: these systems are worthless until a large number of people join at the same time.
No users → no value → no one wants to be first → still no users.
This “ghost town” trap kills almost every good project. You need explosive growth to escape it, but normal growth is slow and steady, so most die.
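The "no users → no value" loop is the classic critical-mass dynamic. A toy Granovetter-style threshold simulation (my illustration, not from the post; all numbers are made up) shows why a small seed of early adopters stalls while a seed past the tipping point cascades to everyone:

```python
import random

def adoption_cascade(n_users, seed_fraction, thresholds):
    """Iterate to a fixed point: a user adopts once the overall
    adopted share reaches their personal threshold."""
    adopted = [i < int(n_users * seed_fraction) for i in range(n_users)]
    changed = True
    while changed:
        changed = False
        share = sum(adopted) / n_users
        for i in range(n_users):
            if not adopted[i] and share >= thresholds[i]:
                adopted[i] = True
                changed = True
    return sum(adopted) / n_users

random.seed(0)
n = 1000
# Most users only join once 20-60% of everyone else already has.
thresholds = [random.uniform(0.2, 0.6) for _ in range(n)]

print(adoption_cascade(n, 0.05, thresholds))  # → 0.05 (stalls at the seed: the ghost town)
print(adoption_cascade(n, 0.25, thresholds))  # → 1.0  (past the tipping point: full cascade)
```

The asymmetry is the whole problem: a technically excellent protocol with a 5% seed dies exactly like a bad one, while a mediocre protocol that somehow clears the threshold takes everything.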
The common fixes don't work. Instead we get "infrastructural Darwinism": the winners are usually the projects with:
- the biggest marketing budget
- the best timing / hype wave
- the most aggressive growth tricks
- the strongest connections
…not the technically best ones.
What's missing is a neutral "consensus sandbox": a shared space where promising protocols are tested fairly, the best ones get proven, and then many aligned people adopt them together at once, without relying on money, hype, or manipulation.
Right now we’re stuck between cynical funders and chaotic markets that reward budget over quality.
The cost of staying stuck is huge: we keep running civilization on mediocre rules when far better ones are ready on the shelf.
Can the rationalist / EA (Effective Altruism) communities build that missing meta-layer?
P.S.:
I had the AI whip up some possible fixes (these are just a bunch of words, but perhaps they will give someone something to think about). They looked pretty decent, so I picked the best ones: