I agree that separately from its direct boost to performance at the same inference-compute, RL training also helps enable more inference scaling. I talk about that above when I say "this RL also unlocked the ability to productively use much longer chains of thought (~30x longer in this example). And these longer chains of thought contributed a much larger boost."
A key thing I'm trying to get across is that I think this is where most of the benefit from RL is coming from, i.e. that while you pay the RL scaling costs at training time, you also need to pay the inference scaling costs at deployment time in order to get the benefit. Thus, RL is not an alternative to a paradigm where cost-per-use goes to 10x, 100x, 1000x etc. in order to keep seeing benefits.
Many people have said the opposite. The reason I looked into this is that people such as Dan Hendrycks and Josh You pushed back on my earlier statements that most of the gain has been in enabling longer CoT and the scaling of inference, saying, respectively, that RL makes models much better at the same token budget, and that we're witnessing a one-off scale-up of the token budget with further gains coming from RL scaling. I think I've delivered clear evidence against those takes.
You'd probably enjoy my post on Inference scaling reshapes AI governance and the whole series on my website. I think they paint a picture where compute scaling is becoming a smaller tailwind for AI progress and where it is changing in character.
Yes, you would get an optimal allocation with non-zero amounts of each. A simple calculation suggests a 1:2 ratio of RL OOMs to inference OOMs, e.g. scaling up RL by 100x and inference by 10,000x. So it could easily lead to RL compute becoming an ever-smaller fraction of FLOPs. But there are additional complications from the fact that inference is a flow of costs that also increases with the number of users, while RL is a fixed cost.
On the simple model and with my scaling numbers, the contribution of RL to capabilities (keeping token-use fixed) would be 20% — a 1:4 ratio with inference, because RL gets half as many OOMs and half the effect per OOM.
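To make the arithmetic explicit, here is a minimal sketch using just the illustrative numbers above (2 OOMs of RL vs 4 OOMs of inference, with each inference OOM assumed worth twice an RL OOM; the unit of "effect" is arbitrary):

```python
# Illustrative only, using the example numbers from above: 2 OOMs of RL (100x)
# vs 4 OOMs of inference (10,000x), with each inference OOM assumed to be worth
# twice as much as an RL OOM.
rl_ooms, inf_ooms = 2, 4
effect_per_inf_oom = 1.0                      # arbitrary units
effect_per_rl_oom = 0.5 * effect_per_inf_oom

rl_gain = rl_ooms * effect_per_rl_oom         # 1.0
inf_gain = inf_ooms * effect_per_inf_oom      # 4.0
print(rl_gain / (rl_gain + inf_gain))         # 0.2 -> RL contributes 20%, a 1:4 ratio
```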
The main relevance of all this to me is that even if people keep doing RL, RL alone won't contribute much to benchmark performance. I think RL would need to 100,000x current total training compute to gain the equivalent of just 100x of pretraining in the early years. So if pre-training is slowing, AI companies lack any current method of effective compute scaling based solely on training compute and one-off costs.
Actually, here is a slightly simpler way to think about it. How many more training steps do you do with RL when you 100x the compute? Given the linear growth in episode length, you only do √100 = 10x the number of training steps. So if capability gain were linear in the log of the number of training steps, it would grow as log(√compute) = ½·log(compute), whereas for pretraining it would grow as log(compute). So if inference-scaling were going as well as pre-training scaling (contra the 3/2 estimate I appealed to in my piece), then the theoretical explanation from information inefficiency could exactly account for the observed scaling behaviour.
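As a quick sanity check on that square-root relationship, here is a small simulation with made-up numbers (episode length growing linearly with training step, compute per step proportional to episode length):

```python
import numpy as np

# Hypothetical numbers: episode length grows linearly with training step,
# and compute per step is proportional to episode length.
steps = np.arange(1, 100_001)
episode_len = 500 + steps                     # tokens per episode (illustrative)
cumulative_compute = np.cumsum(episode_len)   # grows ~quadratically in steps

# For large step counts, steps roughly tracks sqrt(2 * compute):
for s in (1_000, 10_000, 100_000):
    c = cumulative_compute[s - 1]
    print(s, np.sqrt(2 * c))

# So if capability gain is linear in log(steps), it grows as
# log(sqrt(compute)) = 0.5 * log(compute): half the slope of pretraining.
```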
I'm not sure this is right (there were a couple of biggish assumptions there) but it does feel closer to being able to be a larger part of the actual explanation.
Thanks Jacob. It is less of a mathematical mistake and more me trying to make a qualitative connection between the observed poor scaling of RL training and the theoretical mechanism of poor information efficiency I'd just written about, both of which look very big. I agree that the theoretical explanation doesn't seem to be quite the right shape to explain the empirical issue.
Of your potential reasons, I do think longer episodes is part of it. The R1 paper has a chart on page 8 showing that, without trying to affect episode lengths, they increased linearly from 500 tokens to ~9000 tokens over 8000 episodes, suggesting pretty much a 1-token increase per episode on average. Thus the information efficiency was declining throughout training, as the compute per episode grew linearly. It is a bit tricky to compare this with the o1 chart, whose x-axis is both logarithmic and measuring training compute rather than episode number. I think this means it should be declining as 1/episodes = 1/√compute, since training compute would be growing as the square of the number of episodes. And I think that just accounts for the power being 0.5 lower than pretraining, rather than the 3 that I'm claiming. (But it's late, and I haven't checked this through on paper.)
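Here is a rough numerical check of that 1/√compute claim, under the simplifying assumptions that the reward information per episode is fixed and the compute per episode is proportional to its token count (the growth rate is just the R1 numbers above):

```python
import numpy as np

# Rough check using the R1 numbers above: episode length growing from ~500 to
# ~9000 tokens over 8000 episodes, with (as a simplification) a fixed amount of
# reward information per episode.
episodes = np.arange(1, 8001)
tokens = 500 + (8500 / 8000) * episodes        # ~1 extra token per episode
compute = np.cumsum(tokens)                    # cumulative training compute (in tokens)

info_per_compute = 1.0 / tokens                # fixed info per episode / rising cost per episode

# Late in training this falls roughly as 1/episodes, i.e. roughly 1/sqrt(compute),
# since cumulative compute grows ~quadratically in the episode number:
for e in (2000, 4000, 8000):
    print(e, info_per_compute[e - 1], 1 / np.sqrt(2 * compute[e - 1]))
```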
I do think you are right that the information inefficiency can't explain the whole issue, but it might be able to explain part of it. i.e. a shift to the left can't explain a line with a different slope, but the slope changing part of the way there could be part of the explanation.
Thanks for this Jacob — excellent analysis.
I'm a huge fan of Bradley-Terry models. I'm quite sure they are the natural way of representing noisy contests like chess ability, and that Elo is an inferior way. The key thing with Bradley-Terry is that each competitor has a raw ability score (e.g. A and B), and that when they have a contest the odds of A beating B are just A:B. I think of it as each player putting a number of tickets of their colour into a hat, with one drawn at random to determine the winner. This is an even simpler interpretation than the one from the Hex paper and makes the 2/3 result even more intuitive.
Elo then takes the log base 10, multiplies by 400, and adds 1200 or so to make the numbers usually positive, injecting three (!) arbitrary constants into the mix in order to give an additive scale that matched the pre-Elo chess rating scale. But the natural interpretation of these contests is a multiplicative scale (the ratio of the numbers is the odds of winning), so it should have been left alone. Linear progress in Elo is really exponential progress in the raw quantity.
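To make that concrete, here is a minimal sketch of the tickets-in-a-hat reading and of how the standard Elo win-expectancy formula is just Bradley-Terry after that rescaling (the 1200 offset is the "or so" figure above; the 2:1 strengths are illustrative):

```python
import math
import random

def bt_win_prob(a, b):
    """Bradley-Terry: P(A beats B) = a / (a + b), i.e. odds of a : b."""
    return a / (a + b)

def hat_draw_win_freq(a_tickets, b_tickets, trials=100_000, seed=0):
    """Tickets-in-a-hat reading: A puts in a_tickets, B puts in b_tickets,
    one ticket is drawn uniformly at random and its owner wins."""
    rng = random.Random(seed)
    wins = sum(rng.randrange(a_tickets + b_tickets) < a_tickets for _ in range(trials))
    return wins / trials

def elo_from_strength(gamma, scale=400, offset=1200):
    """Elo as a rescaled log of the raw Bradley-Terry strength
    (the base 10, the 400 and the ~1200 are the arbitrary constants above)."""
    return scale * math.log10(gamma) + offset

def win_prob_from_elo(elo_a, elo_b):
    """Standard Elo win expectancy, which is just Bradley-Terry in disguise."""
    return 1 / (1 + 10 ** (-(elo_a - elo_b) / 400))

a, b = 2, 1   # A is twice as strong as B
print(bt_win_prob(a, b))                                              # 0.666... (the 2/3 result)
print(hat_draw_win_freq(a, b))                                        # ~0.667 from simulated draws
print(win_prob_from_elo(elo_from_strength(a), elo_from_strength(b)))  # 0.666... again
```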
I like your idea of assuming random difficulties for the different tasks (from some distribution that could be tweaked), as clearly that is part of the real underlying phenomenon. However, it is weird that you compare the highest number the agent draws from the hat to the highest number the task draws. More natural would be to have to take on the tasks one by one in a gauntlet of challenges of varying difficulty, e.g. so that the probability of success is p_1 × p_2 × … × p_n instead of my p^n, where each p_i is a random variable drawn from some natural distribution over [0,1], modified by the agent's skill, that represents the probability of succeeding at that subtask. There should be limiting cases where all the p_i are equal (my case) and where success is driven by the hardest one. But I'm not sure what distribution this creates.
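Here is the kind of thing I mean, with a Beta(skill, 1) distribution chosen purely as a placeholder for the "natural distribution over [0,1]":

```python
import numpy as np

# Sketch of the "gauntlet" idea: a task is a sequence of n subtasks, each with
# its own success probability p_i drawn from a distribution over [0, 1] shifted
# by the agent's skill. The Beta(skill, 1) choice is purely illustrative.
rng = np.random.default_rng(0)
n_subtasks, n_tasks, skill = 10, 5, 9.0        # Beta(9, 1) has mean 0.9

p = rng.beta(skill, 1.0, size=(n_tasks, n_subtasks))

equal_case = 0.9 ** n_subtasks                 # my case: every subtask at the mean probability
gauntlet = p.prod(axis=1)                      # must pass each subtask in turn
hardest_only = p.min(axis=1)                   # other limit: only the hardest subtask matters

print(equal_case)                              # ~0.35
print(gauntlet)                                # varies task to task around that value
print(hardest_only)                            # always at least as high as the product
```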
That said, I like where you are going with this and how you eliminate one of the parameters.
I definitely see my constant hazard rate model as a first-order approximation to what is going on, and not the full story. I'm surprised it works as well as it does, because the underlying phenomenon has more structure than this. So I see it just as something of a null hypothesis for other approaches to beat, and I do expect it to eventually be beaten.
Thanks — this looks promising.
One thing I noticed is that there is an interesting analogy between your model and a fairly standard model in economics where society consists of a representative agent in each time period (representing something like a generation, but without overlap) each trying to maximise its own utility. They can plan based on the utilities of subsequent generations (e.g. predicting that the next generation will undo this generation's policies on some topic) but they don't inherently value those utilities. This is then understood via the perspective of a planner who wants to maximise the (discounted) sum of future utilities, even though each agent in the model is only trying to maximise their own utility.
This framework is rich enough to exhibit various inter-generational policy challenges, such as an intergenerational prisoner's dilemma (where each generation can defect on or cooperate with the following generation), the desire of a generation to tie the hands of future generations, or even the desire to stop future generations from tying the hands of the generations that follow them.
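Here is a toy version of that setup, with made-up payoffs, just to show the structure (each generation cares only about its own utility, while a planner values the discounted sum across generations):

```python
# A toy version of the setup above (payoff numbers made up purely for illustration):
# each generation chooses to "cooperate" (pay a cost that benefits the next
# generation) or "defect", and cares only about its own utility, while a planner
# values the discounted sum of every generation's utility.
COST, BENEFIT, DISCOUNT, GENERATIONS = 1.0, 3.0, 0.9, 10

def utility(own_action, previous_action):
    """A generation's utility: it receives the benefit if the previous generation
    cooperated, and pays the cost if it cooperates itself."""
    received = BENEFIT if previous_action == "C" else 0.0
    paid = COST if own_action == "C" else 0.0
    return received - paid

def play(action_for_everyone):
    """Utilities of each generation when all generations take the same action."""
    utilities, prev = [], "D"                  # nobody precedes the first generation
    for _ in range(GENERATIONS):
        utilities.append(utility(action_for_everyone, prev))
        prev = action_for_everyone
    return utilities

def planner_value(utilities):
    return sum(u * DISCOUNT ** t for t, u in enumerate(utilities))

# Each self-interested generation defects (cooperating only costs it), yet the
# planner prefers universal cooperation: an intergenerational prisoner's dilemma.
print(planner_value(play("D")))                # 0.0
print(planner_value(play("C")))                # ~10.0: better for the discounted sum
```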
This is an interesting theorem which helps illuminate the relationship between unbounded utilities and St Petersburg gambles. I particularly appreciate that you don't make the explicit assumption that the values of gambles must be representable by real numbers, an assumption which is very common but unhelpful in a setting like this. However, I do worry a bit about the argument structure.
The St Petersburg gamble is a famously paradox-riddled case. That is, it is a very difficult case where it isn't clear what to say, and many theories seem to produce outlandish results. When this happens, it isn't so impressive to say that we can rule out an opposing theory because in that paradox-riddled situation it would lead to strange results. It strikes me as similar to saying that a rival theory leads to strange results in variable population-size cases so we can reject it (when actually, all theories do), or that it leads to strange results in infinite population cases (when again, all theories do).
Even if one had a proof that an alternative theory doesn't lead to strange conclusions in the St Petersburg gamble, I don't think this would count all that much in its favour, as it seems plausible to me that various rules of decision theory that were developed in the cleaner cases of finite possibility spaces (or well-behaved infinite spaces) need to be tweaked to account for more pathological possibility spaces. For a simple example, I'm sympathetic to the sure thing principle, but it directly implies that the St Petersburg gamble is better than itself, because an unresolved gamble is better than a resolved one, no matter how the latter was resolved. My guess is that this means the sure thing principle needs to have its scope limited to exclude gambles whose value is higher than that of any of their resolutions.
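To spell out that step (this formalisation is my own gloss on the argument, not something taken from the post):

```latex
% A gloss on the argument above, not a quotation from the post.
Let $G$ pay $2^n$ units of value with probability $2^{-n}$ for $n = 1, 2, 3, \dots$
Every resolution of $G$ is some finite payout $2^n$, while an unresolved copy of $G$
is (by the premise above) preferred to any such finite payout.

Now compare: (a) play $G$ and keep the result; (b) play $G$, discard the result, and
play a fresh independent copy of $G$. Partition the states by the outcome of the
first play. Conditional on each cell, (b) replaces a resolved payout $2^n$ with an
unresolved gamble, so (b) is preferred. The sure thing principle then delivers the
unconditional verdict that (b) is preferred to (a). But evaluated in advance, (a)
and (b) are both just the gamble $G$, so $G$ would be strictly better than itself.
```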
Regarding your question, I don't see theoretical reasons why one shouldn't be making deals like that (assuming one can and would stick to them etc). I'm not sure which decision theory to apply to them though.
The Moral Parliament idea generally has a problem regarding time. If it is thought of as making decisions for the next action (or other bounded time period), with a new distribution of votes etc. when the next choice comes up, then there are intertemporal swaps (and thus Pareto improvements according to each theory) that it won't be able to achieve. This is pretty bad, as it at least appears to be Pareto-dominated by another method. However, if it is making one decision for all time over all policies for resolving future decisions, then (1) it is even harder to apply in real life than it looked, and (2) it doesn't seem to be able to deal with cases where you learn more about ethics (i.e. update your credence function over moral theories), at least not without quite a bit of extra explanation about how that works. I suppose the best answer may well be that the policies over which the representatives are arguing include branches dealing with all the ways the credences could change, weighted by their probabilities. This is even more messy.
My guess is that of these two broad options (decide one bounded decision vs decide everything all at once) the latter is better. But either way it is a bit less intuitive than it first appears.
I do think that progress will slow down, though it's not my main claim. My main claim is that the tailwind of compute scaling will become weaker (unless some new scaling paradigm appears or a breakthrough saves this one). That is one piece in the puzzle of whether overall AI progress will accelerate or decelerate, and I'd ideally let people form their own judgments about the other pieces (e.g. whether recursive self-improvement will work, or whether funding will collapse in a market correction, taking away another tailwind of progress). But having a major boost to AI progress (compute scaling) become less of a boost is definitely some kind of an update towards lower AI progress than you were otherwise expecting.
Part of the issue with inference scaling being the main surviving form of scaling is how many more OOMs are needed. If it is 100x, there isn't so much impact. If we need to 1,000x or 1,000,000x it from here, it is more of an issue.
In that prior piece I talked about inference-scaling as a flow of costs, but it also scales with things beyond time:
This is a big deal. If you want to 100x the price of inference going into each query, how can you make that up and still be profitable? I think you need to 100x the willingness-to-pay from each user for each query. That is very hard. My guess is that WTP doesn't scale with inference compute in this way, and thus that inference can only be 10x-ed once algorithmic efficiency gains and falling chip costs have divided the cost per token by 10. So while previous rounds of training compute scaling could pay for themselves in the marketplace, I think that will stop for most users soon, and for specialist users a bit later.
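A stylised version of the arithmetic, with hypothetical numbers for the per-token cost and the willingness-to-pay:

```python
# Illustrative arithmetic for the point above (all numbers hypothetical).
tokens_per_query = 1_000
cost_per_token = 0.01            # dollars
willingness_to_pay = 20.00       # per query, assumed not to grow with inference used

def cost_per_query(tokens, token_cost):
    return tokens * token_cost

print(cost_per_query(tokens_per_query, cost_per_token))             # $10: under the WTP
print(cost_per_query(100 * tokens_per_query, cost_per_token))       # $1000: far above the WTP
# Using 10x the tokens only stays under the same WTP once efficiency gains and
# cheaper chips have cut the cost per token by 10x:
print(cost_per_query(10 * tokens_per_query, cost_per_token / 10))   # $10 again
```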
The idea here is that the changing character of scaling affects the business model, making it so that it is no longer self-propelling to keep scaling, and that this will mean the compute scaling basically stops.
PS
Thanks for pointing out that second quote "Now that RL-training…" — I think that does come across a bit stronger than I intended.