I'm asking because I can think of arguments going both ways.
Note: this post focuses on the generic question "what to expect from an AI lab splitting into two" rather than on the specifics of the OpenAI vs Anthropic case.
Here's the basic argument: after a split, the two groups are competing instead of cooperating, with two consequences:
However, there are some possible considerations against this frame:
I expect there is empirical literature that could help dig deeper into points 2 and 3.
Is there an existing, more thorough analysis of this, or can you offer other arguments one way or the other?