I sometimes see it said that open models contribute to AI race dynamics. My guess is that they don't, and that if anything they reduce AI race dynamics.

I will consider a simplified model that takes into account only the cost of training a model, not the cost of deploying it (which tends to be small relative to revenue anyway). Let f(x) map a training expense x to a "value per day per customer" of the trained model, under the assumption that the training makes efficient use of the cost. That is, a customer values using an AI model trained with x compute at $f(x) per day. I'll assume f is increasing: spending more on training yields a more valuable model.

I assume there are n identical customers here; of course, there are complexities where some customers value AI more than others, incentivizing price discrimination, but I'm abstracting this consideration out. (In general, variation in how much customers value a product will tend to increase consumer surplus while reducing revenue, as it makes it harder to charge customers just under the maximum amount they're willing to pay.)

I'm also assuming there is only one company that trains closed models for profit. This assumption is flawed, because in reality multiple companies train closed models and compete with each other. However, perfect-competition assumptions would, if anything, reduce the incentive to train models further. Suppose two companies have closed models of equivalent expense x. Each wants to charge slightly less than the minimum of f(x) and the competitor's price, per customer per day. If each competitor undercuts the other slightly, the price will approach 0. See the Traveler's Dilemma for a comparison. The reasons this doesn't happen in practice have to do with considerations like differences in models' performance on different tasks, e.g. some models are better for programming than others. If models are sufficiently specialized (allowing this sort of niche monopolization), each specialized type of model can be modeled independently as a monopoly. So I'll analyze the case of a closed-model monopoly, noting that translation to the real world is more complex.
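A toy sketch of the undercutting dynamic (the step size, round count, and strict alternation are illustrative assumptions of mine):

```python
def undercut(f_x: float, step: float = 0.5, rounds: int = 25) -> float:
    """Two firms with equally capable models take turns undercutting the
    going price by `step`, starting from f(x). The price ratchets toward 0."""
    price = f_x
    for _ in range(rounds):
        price = max(0.0, price - step)  # undercut the current price slightly
    return price

print(undercut(10.0))  # 0.0 -- undercutting competes away the whole margin
```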

Suppose the best open model has training cost x and a company trains a closed model with cost y > x. Since the open model can be used for free, each customer will now pay up to f(y) - f(x) per day for access to the closed model; I'll assume the company charges f(y) - f(x) and the customers purchase at that price, noting that the company could charge just below this amount to give customers a strictly positive incentive to buy. So the company's revenue over m days is nm(f(y) - f(x)). Clearly, this is decreasing in x: the better the open model, the less expected revenue there is from training a closed model.
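As a minimal sketch of this revenue model (the logarithmic shape of f and all the specific numbers are illustrative assumptions, not part of the argument):

```python
import math

def f(x: float) -> float:
    """Hypothetical value per customer per day of a model trained at
    expense x; any increasing f works, the log shape is an assumption."""
    return math.log(1 + x)

def revenue(n: int, m: int, x: float, y: float) -> float:
    # n customers over m days, each paying f(y) - f(x) per day.
    return n * m * (f(y) - f(x))

# Better open models (larger x) mean less closed-model revenue:
print(revenue(1000, 365, 1.0, 50.0))   # ~1.18e6
print(revenue(1000, 365, 10.0, 50.0))  # ~5.6e5
```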

But this is simply comparing doing nothing to training a model of a fixed cost y. So consider instead comparing expected revenue between two different model costs, y and z, both greater than x. The revenue from y is nm(f(y) - f(x)), and from z it is nm(f(z) - f(x)). The difference between the z revenue and the y revenue is nm(f(z) - f(y)). This is unaffected by x.
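A quick numerical check, reusing the illustrative f from the sketch above:

```python
import math

def f(x: float) -> float:
    return math.log(1 + x)  # same illustrative value function as above

n, m = 1000, 365
y, z = 50.0, 200.0
for x in (1.0, 10.0, 40.0):  # vary the open model's training cost
    diff = n * m * (f(z) - f(x)) - n * m * (f(y) - f(x))
    print(x, round(diff))  # same value for every x: n*m*(f(z) - f(y))
```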

This can model a case where the company has already trained a model of cost y and is considering upgrading to z. In this case, the open model doesn't affect the expected additional revenue from the upgrade.

Things get more complex when we assume there will be a future improvement to the open model. Suppose that, for k days, the open model has training cost x, and for the remaining m-k days, it has training cost x' > x.

Now suppose that the closed AI company has already trained a model of cost y, where x < y < x'. They are considering upgrading to a model of cost z, where z > x'.

Suppose they do not upgrade. Then they get nk(f(y) - f(x)) revenue from the first k days and nothing thereafter.

Suppose they do upgrade, immediately. Then they get nk(f(z) - f(x)) revenue from the first k days, and n(m-k)(f(z) - f(x')) from the remaining days.

The difference between the upgrade revenue and the no-upgrade revenue is nk(f(z) - f(y)) + n(m-k)(f(z) - f(x')), which is decreasing in x'. So the announcement of the upgrade of the open model to x' compute will reduce the incentive to race by training a closed model with z compute.
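A sketch of the two-period comparison, again under the illustrative log-shaped f (x2 stands in for x'; all numbers are assumptions):

```python
import math

def f(x: float) -> float:
    return math.log(1 + x)  # same illustrative value function as above

n, m, k = 1000, 365, 100     # customers, total days, days before the open upgrade
x, y, z = 10.0, 50.0, 500.0  # current open model, current closed model, candidate upgrade

def no_upgrade() -> float:
    # f(y) - f(x) for k days, nothing once the open model jumps past y.
    return n * k * (f(y) - f(x))

def upgrade(x2: float) -> float:
    # f(z) - f(x) for the first k days, then f(z) - f(x2) for the rest.
    return n * k * (f(z) - f(x)) + n * (m - k) * (f(z) - f(x2))

for x2 in (60.0, 150.0, 400.0):  # x2 plays the role of x'; x < y < x2 < z
    print(x2, round(upgrade(x2) - no_upgrade()))
# The incremental revenue from upgrading shrinks as the announced open model improves.
```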

So in this simplified analysis, release of better open models reduces the incentive to race, or does nothing. This is overall not surprising, as intellectual property laws are motivated by incentivizing production of intellectual property, and open content tends to reduce the value of intellectual property.

There are a number of factors that could be taken into account in other analyses, including:

  • Effects of open models on ease of training closed models

  • Substitution effects between different model niches (e.g. a model with an absolute advantage at mathematics may still be useful for writing essays)

  • Effects of uncertainty over open model releases

  • Different customers valuing the AI differently, driving price discrimination

  • Non-straightforward incentives such as prestige/recruitment from releasing models

  • Oligopoly dynamics

  • Time discounting

  • Changes in customer demand over time

It should go without saying that effects on race dynamics are not the only relevant effect of open model releases. Isolating and estimating different effects, however, will help in making an overall evaluation.

I suggest that someone who still believes that open models increase race dynamics clarify what economic assumptions they are using and how they differ from this model.

13 comments

I'm not sure I personally endorse the model I'm proposing, but imagine a slightly less spherical AGI lab that has more than one incentive driving its behavior (not just profit maximization). Maybe they care at least a little bit about not advancing the capabilities frontier as fast as possible. This can cause a preference ordering like:

  1. don't argmax capabilities, because there's no open-source competition making it impossible to profit from current-gen models
  2. argmax capabilities, since you need to stay ahead of open-source models nipping at your heels
  3. don't argmax capabilities; go bankrupt because open-source catches up to you (or gets "close enough" for enough of your customers)

ETA: But in practice most of my concerns around open-source AI development are elsewhere.

I think you are assuming something like a sublinear utility function in the difference (quality of own closed model - quality of best open model). Which would create an incentive to do just a bit better than the open model.

I think if there is a penalty term for advancing the frontier (say, proportional to the quality of one's released model minus the quality of the open model), that can be modeled as scaling the revenue down by a constant factor (since revenue was also proportional to that difference). Which shouldn't change the general conclusion.
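A minimal numerical check of this, under the same illustrative f as in the post's sketches: with revenue nm(f(y) - f(x)) and a penalty c(f(y) - f(x)), net utility is (nm - c)(f(y) - f(x)), so comparisons between candidate models are rescaled but otherwise unchanged:

```python
import math

def f(x: float) -> float:
    return math.log(1 + x)  # illustrative value function, as in the post

def net(n: int, m: int, c: float, x: float, y: float) -> float:
    # Revenue n*m*(f(y) - f(x)) minus penalty c*(f(y) - f(x)).
    return (n * m - c) * (f(y) - f(x))

n, m, c = 1000, 365, 50_000
for x in (1.0, 10.0, 40.0):
    # Incremental utility of a z=200 model over a y=50 model is still
    # independent of the open model's cost x:
    print(x, round(net(n, m, c, x, 200.0) - net(n, m, c, x, 50.0)))
```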

Yeah, there needs to be something like a nonlinearity somewhere.   (Or just preference inconsistency, which humans are known for, to say nothing of larger organizations.)

I think open source models probably reduce profit incentives to race, but can increase strategic (e.g., national security) incentives to race. Consider that if you're the Chinese government, you might think that you're too far behind in AI and can't hope to catch up, and therefore decide to spend your resources on other ways to mitigate the risk of a future transformative AI built by another country. But then an open model is released, and your AI researchers catch up to near state-of-the-art by learning from it, which may well change your (perceived) tradeoffs enough that you start spending a lot more on AI research.

It seems this is more about open models making it easier to train closed models than about nations vs corporations? Since this reasoning could also apply to a corporation that is behind.

Hmm, open models make it easier for a corporation to train closed models, but also make that activity less profitable, whereas for a government the latter consideration doesn't apply or has much less weight, so it seems much clearer that open models increase overall incentive for AI race between nations.

For corporations, I assume their revenue is proportional to f(y) - f(x), where y is the training cost of their model and x is the training cost of the open-source model. Do you think governments would have a substantially different utility function from that?

A government might model the situation as something like "the first country/coalition to open up an AI capabilities gap of size X versus everyone else wins" because it can then easily win a tech/cultural/memetic/military/economic competition against everyone else and take over the world. (Or a fuzzy version of this to take into account various uncertainties.) Seems like a very different kind of utility function.
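A minimal sketch of what such a utility function might look like (the threshold X, the sigmoid smoothing, and all names and numbers here are hypothetical, not from the comment):

```python
import math

def government_utility(own: float, best_rival: float,
                       X: float = 1.0, fuzz: float = 0.2) -> float:
    """Hypothetical 'winner-take-all' utility: close to 1 once your
    capability gap over the best rival exceeds X, close to 0 otherwise,
    smoothed with a sigmoid to reflect the various uncertainties."""
    gap = own - best_rival
    return 1 / (1 + math.exp(-(gap - X) / fuzz))

print(government_utility(2.5, 1.0))  # gap 1.5 > X: ~0.92
print(government_utility(1.5, 1.0))  # gap 0.5 < X: ~0.08
```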

I mostly agree. And I think when people say race dynamics they often actually mean speed of progress and especially "Effects of open models on ease of training closed models [and open models]," which you mention.

But here is a race-dynamics story:

Alice has the best open model. She prefers for AI progress to slow down but also prefers to have the best open model (for reasons of prestige or, if different companies' models are not interchangeable, future market share). Bob releases a great open model. This incentivizes Alice to release a new state-of-the-art model sooner.

This fails to account for one very important psychological fact: the population of startup founders who get a company off the ground is very heavily biased toward people who strongly believe in their ability to succeed. So it'll take quite a while for "it'll be hard to make money" to flow through and slow down training. And in the meantime, it'll be acceleratory, pushing companies to stay ahead.

"Suppose that, for k days, the closed model has training cost x..."

I think you meant to say "open model", not "closed model", here.

Thanks, fixed.

  • If the new Llama is comparable to GPT-5 in performance, there’s much less short-term economic incentive to train GPT-5.
  • If an open model provides some of what people would otherwise pay a closed-model developer for, there’s less incentive to be a closed-model developer.
  • People can work on frontier models without trying to get to AGI. Talent is attracted to work at a lab that releases open models, and then works on random corporate ML instead of building AGI.

But:

  • Sharing information on frontier model architectures and/or training details, which inevitably happens if you release an open-source model, gives the whole field insights that reduce the time until someone knows how to make something that will kill everyone.
  • If you know a version of Llama comparable to GPT-4 is going to be released, you want to release a model comparable to GPT-4.5 before your customers stop paying you, since they can switch to open-source.
  • People gain experience with frontier models and the talent pool for racing to AGI increases. If people want to continue working on frontier models but their workplace can’t continue to spend as much as frontier labs on training runs, they might decide to work for a frontier lab instead.
  • Not sure, but maybe some of the infrastructure powered by open models might be switchable to closed models, and this might increase profits for closed-source developers if customers become familiar with and integrate open-source models and then want to replace them with more capable systems when it’s cost-effective?
  • Mostly less direct: availability of open-source models for irresponsible use might make it harder to put in place regulation that’d reduce race dynamics (via the various destabilizing ways they can be used).