The Archipelago Model of Community Standards

by Raemon, 21st Nov 2017


Epistemic Status: My best guess. I don't know if this will work but it seems like the obvious experiment to try more of.

Epistemic Effort: Spent several months thinking casually, 25ish minutes consolidating earlier memories and concerns, and maybe 10ish minutes thinking about potential predictions. See comment.

Building off:

Claim 1 - If you are dissatisfied with the norms/standards in a vaguely defined community, a good first step is to refactor that community into sub-groups with clearly defined goals and leadership.

Claim 2 - People have different goals, and you may be wrong about what norms are important even given a certain goal. So, also consider proactively cooperating with other people forming alternate subgroups out of the same parent group, with the goal of learning from each other.

Refactoring Into Subcommunities

Building groups that accomplish anything is hard. Building groups that prioritize independent thinking to solve novel problems is harder. But when faced with a hard problem, a useful technique is to refactor it into something simpler.

In "Open Problems in Group Rationality", Conor lists several common tensions. I include them here for reference (although any combination of difficult group rationality problems would suffice to motivate this post).

  1. Buy-in and retention.
  2. Defection and discontent.
  3. Safety versus standards.
  4. Productivity versus relevance.
  5. Sovereignty versus cooperation.
  6. Moloch and the problem of distributed moral action.

These problems don't go away when you have clearly defined goals. A corporation with a clear-cut mission and strategy (i.e., maximize profit by selling widgets) still has to balance "hold employees to high standards to increase performance" against "make sure employees feel safe enough to do good work without getting wracked with anxiety" (or just quitting).

Such a corporation might make different tradeoffs in different situations - if there's a labor surplus, they might be less worried about employees quitting because they can just find more. If the job involves creative knowledge work, anxiety might have greater costs to productivity. Or maybe they're not just profit-maximizing: maybe the CEO cares about employee mental health for its own sake.

But well-defined goals, with leaders who can enforce them, at least make it possible to figure out what tradeoffs to make and actually make them.

Whereas if you live in a loosely defined community where people show up and leave whenever they want, and nobody can even precisely agree on what the community is, you'll have a lot more trouble.

People who care a lot about, say, personal sovereignty, will constantly push for norms that maximize freedom. People who care about cooperation will push for norms encouraging everyone to work harder and be more reliable, at personal freedom's expense.

Maybe one group can win - possibly by persuading everyone they are right, or simply by being more numerous. But:


A) You probably can't win every cultural battle.

B) Even if you could, you'd spend a lot of time and energy on the fight - time that might be better spent actually accomplishing whatever these norms are for.

So if you can manage to avoid infighting while still accomplishing your goals, all else being equal, that's preferable.

Considering Archipelago

Once this thought occurred to me, I was immediately reminded of Scott Alexander's Archipelago concept. A quick recap:

Imagine a bunch of factions fighting for political control over a country. They've agreed upon the strict principle of harm (no physically hurting or stealing from each other). But they still disagree on things like "does pornography harm people", "do cigarette ads harm people", "does homosexuality harm the institution of marriage which in turn harms people?", "does soda harm people", etc.

And this is bad not just because everyone wastes all this time fighting over norms, but because the nature of their disagreement incentivizes them to fight over what harm even is.

And this in turn incentivizes them to fight over both definitions of words (distracting and time-wasting) and what counts as evidence or good reasoning, through a politically motivated lens. (Which makes it harder to ever use evidence and reasoning to resolve issues, even uncontroversial ones.)


Imagine someone discovers an archipelago of empty islands. And instead of continuing to fight, the people who want to live in Sciencetopia go off to found an island-state based on ideal scientific processes, and the people who want to live in Libertopia go off and found a society based on the strict principle of harm, and the people who want to live in Christiantopia go found a fundamentalist Christian commune.

They agree on an overarching set of rules, paying some taxes to a central authority that handles things like "dumping pollutants into the oceans/air that would affect other islands" and "making sure children are well educated enough to have the opportunity to understand why they might consider moving to other islands."

Practical Applications

There's a bunch of reasons the Archipelago concept doesn't work as well in practice. There are no magical empty islands we can just take over. Leaving a place if you're unhappy is harder than it sounds. Resolving the "think of the children" issue will be very contentious.

But, we don't need perfect-idealized-archipelago to make use of the general concept. We don't even need a broad critical mass of change.

You, personally, could just do something with it, right now.

If you have an event you're running, or an online space that you control, or an organization you run, you can set the norms. Rather than opting-by-default into the generic average norms of your peers, you can say "This is a space specifically for X. If you want to participate, you will need to hold yourself to Y particular standard."

Some features and considerations:

You Can Test More Interesting Ideas. If a hundred people have to agree on something, you'll only get to try things that you can get 50+ people on board with (due to crowd inertia, regardless of whether you have a formal democracy).

But maybe you can get 10 people to try a more extreme experiment. (And if you share knowledge, both about experiments that work and ones that don't, you can build the overall body of community knowledge in your social world.)

I would rather have a world where 100 people try 10 different experiments than one where everyone converges on a single approach - even if I disagree with most of those experiments and wouldn't want to participate myself.

You Can Simplify the Problem and Isolate Experimental Variables. "Good" science tests a single variable at a time so you can learn more about what causes what.

In practice, if you're building an organization, you may not have time to do "proper science" - you may need to get a group working ASAP, and you may need to test a few ideas at once to have a chance at success.

But, all else being equal, it's still convenient to isolate factors as much as possible. One benefit of refactoring a community into smaller pieces is that you can pick more specific goals. Instead of reinventing every wheel at once, pick a few specific axes you're trying to learn about.

This will both make the problem easier to solve and easier to learn from.

You Can 'Timeshare Islands'. Maybe you don't have an entire space that you can control. But maybe you and some other friends have a shared space. (Say, a weekly meetup).

Instead of having the meetup be a generic thing catering to the average common denominator of members, you can collectively agree to use it for experiments (at least sometimes). Make it easier for one person to say 'Okay, this week I'd like to run an activity that'll require different norms than we're used to. Please come prepared for things to be a bit different.'

This comes with some complications - one of the benefits of a recurring event is people roughly know what to expect, so it may not be good to do this all the time. But generally, giving the person running a given event the authority to try some different norms out can get you some of the benefits of the Archipelago concept.

You Can Start With Just One Meetup

Viliam in the comments made a note I wanted to include here:

It is important to notice that the "island" doesn't have to be fully built from start. "Let's start a new subgroup" sounds scary; too much responsibility and possibly not enough status. "Let's have one meeting where we try the norm X and see how it works" sounds much easier; and if it works, people would be more willing to have another meeting like that, possibly leading to the creation of a new community.

Making It Through the 'Unpleasant Valley' of Group Experimentation

I think this graph was underappreciated in its original post. When people try new things (a new diet or exercise program, studying a new skill, etc), the new thing involves effort and challenges that in some ways make it seem worse than whatever their default behavior was.

Some experiments are just duds. But oftentimes it feels like it'll turn out to be a dud, when you're in the Unpleasant Valley, and in fact you just haven't stuck with it long enough for it to bear fruit.

This is hard enough for solo experiments. For group experiments, where not just one but many people must all try a thing at once and get good at it, all it takes is a little defection to spiral into a mass exodus.

Refactoring communities into smaller groups with clear subgoals can make it possible for a group to make it through the Unpleasant Valley together.

Overlapping Social Spheres

Sharing Islands and Cross-Pollination

In the end, I don't think "Islands" is quite the right metaphor here. One of the things that makes social archipelago different from the canonical example is that the islands overlap. People may be a member of multiple groups and sub-groups.

A benefit of this is cross-pollination - it's easier to share information and grow if you have people who exist in multiple subcultures (sub-subcultures?) and can translate ideas between them.

How much benefit this yields depends on how mindfully people are approaching the concept, and how much of their ideas they are sharing (making both the object-level-idea and the underlying reasons accessible to others).

This post is primarily intended as reference - I have more specific ideas on what kinds of communities I want to participate in, and thoughts on "underexplored social niches" that I think others might consider experimenting with. Some of those thoughts will be on the LessWrong front page, others on my private profile or the Meta section.

But meanwhile, I hope to see more groups of people in my filter bubble self organizing, carving out spaces to try novel concepts.