A few years ago, the rationalsphere was small, and it was hard to get funding to run even one organization. Spinning up a second one with the same focus area might have risked killing the first one.
By now, I think we have the capacity (financial, coordinational and human-talent-wise) that that's less of a risk. Meanwhile, I think there are a number of benefits to having more, better, friendly competition.
Diversity of worldviews is better.
Two research orgs might develop different schools of thought that lead to different insights. This can lead to more ideas as well as avoiding the tail risks of bias and groupthink.
When there's only one org doing A Thing, criticizing that org feels sort of like criticizing That Thing. And there may be a worry that if the org lost funding due to your criticism, That Thing wouldn't get done at all. Multiple orgs can allow people to think more freely about the situation.
Competition forces people to shape up.
If you're the only org in town doing a thing, there's just less pressure to do a good job.
"Healthy" competition enables certain kinds of integrity.
Sort of related to the previous two points. Say you think Cause X is really important, but there's only one org working on it. If you think Org A isn't being as high-integrity as you'd like, your options are limited (criticize them, publicly or privately, or start your own org, which is very hard). If you think Org A is overall net-positive, you might risk damaging Cause X by criticizing it. But if there are multiple orgs A and B working on Cause X, there are fewer downsides to criticizing one of them. (An alternate framing: maybe criticism wouldn't actually damage Cause X, but it may still feel that way to a lot of people, so having a second Org B can be beneficial.) Multiple orgs working on a topic make it easier to reward good behavior.
In particular, if you notice that you're running the only org in town, and you want to improve your own integrity, you might want to cause there to be more competition. This way, you can help set up a system that creates better incentives for yourself, ones that remain strong even if you gain power (which may be corrupting in various ways).
Some types of jobs benefit from concentration.
This suggests it'd be pro-social to:
I wrote this up a while ago as a shortform post on the EA Forum. My motivation to post it now came from an interesting conversation contrasting https://roamresearch.com with lesswrong.com.
My conversation partner was excited about Roam as a multi-purpose tool for intellectual progress and collaboration, and listed some features they were considering that were moving in directions similar to ones the LessWrong team was considering. "Roam is gonna do all the things! Collaboration! Blogposts! It could replace Google Docs as a collaborative thinking tool."
[note: this was a friend who doesn't work at Roam, I'm not sure how concrete those plans are]
And I had a few flinch reactions of "Aww, LessWrong was gonna do all the things!", followed immediately by "okay, every writing platform that tries to do all the things right off the bat fails, and probably both teams should focus a bit more". Both of these were followed by a more interesting observation:
One of LessWrong's primary focuses is being an attention allocation platform. Whether it follows the simple "show latest posts, ordered by date" or a hackernews algorithm, or things like curated and recommendations, it's fundamentally aiming to be a place where centralized conversation of some sort happens.
There's a thing in EA/rationalsphere space where people notice that a thing isn't being coordinated on, and their first impulse is to build a coordination platform to fix it. And this is usually a mistake because it's real costly to get everyone to switch to a new platform, and if you screw it up you not only waste everyone's time but make them less likely to switch to the next platform that might actually be good enough.
I think it's somewhat dangerous (albeit necessary) that LessWrong is natural-attention-monopoly shaped, which makes it hard to directly compete with. I think this gives us something of an obligation to do a good job, and to enable things like GreaterWrong, and to be careful taking on too many different domains that we won't have the capacity to be good at.
But there's something sort of nice and valuable about having other writing platforms whose primary focus is on the "singleplayer" aspects of writing/thinking/intellectual progress. (My understanding is that this is the general advice to startup founders trying to corner a market that requires network effects – start with something that doesn't require network effects.)
Right now Roam is young, and I'm not sure how serious their plans are for adding collaboration and blogpost-type features (this was mentioned to me second-hand and might have just been an "interesting idea to consider" rather than a concrete plan). But after some reflection I found it actually kind of reassuring that there were ways to build up competing platforms interacting with the same ecosystem – not by initially starting as similar products, but by starting from pretty different vantage points and then gradually adding various supporting social features.
FWIW I think Roam and LW have carved out separate parts of the space, and I would love to see a collaboration experiment where Roam is the editor for LW comments and posts, allowing for the referencing/transcluding aspects of Roam and the voting, discovery, and collaboration features of LW.
I still think this is true, and important. Honestly, I'd like to bid for it being required reading among org-founders in the rationalsphere (alongside Habryka's Integrity post).
I think healthy competition is particularly important for a (moderately small) constellation of orgs and proto-orgs to have in mind if they are trying to scale up and impact the world at large, while maintaining integrity. (i.e. the rationality/x-risk/EA ecosystem).
I think this is one of the key answers to "what safeguards do we have against evolving into a moral maze?"
Meanwhile the piece is pretty short, which makes me feel better about saying "hey guys, please actually read this."
This post does not make a comprehensive case for its claims (in part because it's aiming to be short). I would definitely appreciate someone who has differing intuitions, or who thinks I'm missing something major, doing a substantive response.
I ended up chatting with Habryka (who had originally inspired this post, and now wasn't sure he agreed with it).
One key additional point here is "gee, doing anything at all is really goddamn hard, and you might not want people to feel any additional disincentive from doing anything, including building monopolies."
There's the frustrating "consider reversing all advice you hear, because you may filter yourself to hear advice that reinforces your own biases" thingy. I think I endorse the specific phrasings used in this post (which I think were properly caveated). But I wouldn't want the takeaway to be people being overly worried about ensuring they have competitors when they themselves haven't gotten off the ground.
There's an additional confusing issue where...
And the problem is that figuring out "am I competent or not?" is one of the hardest things you can try to figure out.
Note: I don't think objectively figuring out "am I competent or not?" is that hard of a question. It's just one that the people who are incompetent will very likely get wrong in a highly predictable direction, so building norms that start with "if you think you are competent do X, if you don't do Y" are hard to make work.
This topic seems very important!
Another potential consideration: In some cases, not having any competition can expose an org to a ~50% probability of having net-negative impact, simply due to the possibility that a counterfactual org (founded by someone else) would have done a better job.
Note that many of these things can be accomplished with intra-organizational competition – you don't necessarily need a separate group entity for each set of ideas and behaviors, just an acknowledgement that there are different values and beliefs in play.
The question of how best to cooperate with partially-aligned agents, in a world with MANY agents that are less aligned than that, is important and under-modeled.
In practice, the innovation literature tends to view this as hard to achieve. Most often, radically different values or approaches are successful when spun out from the original organization.
Nominating this post as much for the main body as for Ray's top-level comment. I guess this post is somewhat downstream of me, so it's not super surprising that I like it, but I do think many parts of the world could really benefit from more healthy competition, and I've set many plans into motion that try to create more competition in ways that I think improve things quite a bit.
This seems basically right to me.
Research orgs benefit from having a number of smart people bouncing ideas around.
Probably most of (more of?) this benefit can also be unlocked by an ecosystem of multiple orgs in friendly competition who regularly talk to each other in ways that feel psychologically safe.
If you want to challenge a monopoly with a new org, there's likewise a particular burden to do a good job.
(This seems to depend on whether the job/project in question benefits from concentration.)
Agreed. The comment was meant to refer to orgs that made particular sense as a monopoly.