Every serious AI lab wants to automate itself. I believe this sentence holds predictive power over AI timelines and most other forecasts about the future. In particular, I believe taking the AI-lab-centric view is the right way to think about automation.
In this post, I want to present the different levels of abstraction at which AI automation can be viewed:
All views except the lab view have important drawbacks:
The AI lab is the unit with the most power over its own level of acceleration, because:
In the future, a larger and larger fraction of decisions inside the lab will get taken over by the model. This is due to two separate processes:
As we get closer to certain notions of AI self-improvement, the model will take actions to improve itself. For instance, in the end, most actions of Anthropic, internal or external, will be taken by Claude. In an efficient world where AI labs are aiming to automate AI R&D, the "researcher" view goes away, and metrics tracking the "model" and "AI lab" views start tracking the same thing.
Lots of people are trying to make AI good at AI research. How are they doing?
One way to measure this is to assume AIs are gradually doing more and more complex tasks independently, eventually growing into doing whole research projects. Something like this, but for software engineering rather than research, is captured in the METR "time horizons" benchmark.
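To make that flavor of extrapolation concrete, here is a minimal sketch, using made-up horizon numbers rather than METR's actual data, of fitting an exponential trend to time horizons and reading off when it would reach project-length tasks:

```python
import numpy as np

# Hypothetical (year, 50%-success time horizon in minutes) points --
# illustrative numbers only, not METR's measurements.
data = [(2020.5, 1.0), (2022.0, 4.0), (2023.25, 15.0), (2024.5, 60.0)]

years = np.array([y for y, _ in data])
horizons = np.array([h for _, h in data])

# Fit log(horizon) = a * year + b, i.e. an exponential trend over time.
a, b = np.polyfit(years, np.log(horizons), 1)
doubling_time_months = 12 * np.log(2) / a

# Extrapolate to a "whole research project" sized task, say ~3 months of work.
project_minutes = 3 * 30 * 8 * 60
year_reached = (np.log(project_minutes) - b) / a

print(f"Implied doubling time: {doubling_time_months:.1f} months")
print(f"Naive extrapolation reaches ~3-month tasks around {year_reached:.1f}")
```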
I think extending this line of thinking to forecasting progress in AI research is wrong. Instead, a better way to accelerate AI research for the time being is to combine AI and people doing research together, in a way that uses the complementary strengths of each, with the goal of shortening the researcher's feedback loops.
What is difficult to automate in AI research?
If you decompose a big AI research project into tasks, there's lots of "dark matter" that does not neatly fit into any category. Some examples are given in Large-Scale Projects Stress Deep Cognitive Skills, which is a much better post than mine.
But I think that the most central argument is: the research process involves taste, coming up with ideas, and various such intangibles that we don't really know how to train for.
The labs are trying to make superhuman AI researchers. We do not yet know how to do it, which means at least some of our ideas are lacking. To improve our ideas, we need either:
Measuring which ideas are good is difficult; it depends on sparse empirical outcomes that arrive long after the idea is formulated. How can we accelerate this process?
I want to make two claims:
The careful reader might ask: OK, this sounds fine in the abstract, but what exactly is the lab doing then, if not "automating AI research as a whole"? How is this different from making autonomous AI researchers directly?
Here is a list of tasks that would be extremely valuable if we wanted to make the research feedback loops faster.
I believe all of these tasks possess properties that make them attractive to attack directly.
This seems easier than automating the full research process. If labs have the goal of speeding up their own ability to do AI research, as opposed to other goals, they are probably doing these things; and measuring the ability of AIs to do research autonomously will not give a good grasp of how quickly the lab is accelerating.
Why is AI progress so much more apparent in coding than everywhere else?
Among people who have "AGI timelines", most do not set their timelines based on data, but rather update them based on their own day-to-day experiences and social signals.
As of 2025, my guess is that individual perception of AI progress correlates with how closely someone's daily activities resemble how an AI researcher spends their time. The reason users of coding agents feel a higher rate of automation in their bones, whereas people in most other occupations don't, is that automating engineering has been the focus of the industry for a while now. Despite expectations that 2025 would be the year of the AI agent, it turns out the industry is small and cannot have too many priorities; hence basically the only competent agents we have gotten in 2025 so far are coding agents.
Everyone serious about winning the AI race is trying to automate one job: AI R&D.
To a first approximation, there is no point yet in automating anything else, except to raise capital (human or investment), or to earn money. Until you are hitting diminishing returns on your rate of acceleration, unrelated capabilities are not a priority. This means that a lot of pressure is being applied to AI research tasks at all times; and that all delays in automation of AI R&D are, in a sense, real in a way that's not necessarily the case for tasks unrelated to AI R&D. It would be odd if there were easy gains to be made in accelerating the work of AI researchers on frontier models in addition to what is already being done across the industry.
I don't know whether automating AI research will be smooth all the way there; my understanding is that slow vs. fast takeoff hinges significantly on how bottlenecked we become by non-R&D factors over time. Nonetheless, the above suggests a baseline expectation: AI research automation will advance more steadily than automation of other intellectual work.
Other tasks, especially less immediately lucrative ones, will make more sense to automate quickly only after we are done automating AI research. Hence, a teacher's or a fiction writer's experience of automation will be somewhat more abrupt than a researcher's. In particular, I anticipate a period of a year or two in which publicly available models are severely underelicited on tasks unrelated to AI R&D, as top talent is increasingly incentivized to work on capabilities that compound in R&D value.
This "differential automation" view naturally separates the history of AI capabilities into three phases:
There will likely be another phase, after, say, a GPT-3 moment for RL, in which RL generalizes somewhat further and we get gains on tasks we do not directly train for; but I think the sheer amount of "unintentional" across-the-board capability increase is less likely, because the remaining capabilities are inherently more specialized and less related to each other than they were during the pretraining-scaling phase.
GPT-4o's drawings of itself as a person are remarkably consistent: it's more or less always a similar-looking white male in his late 20s with brown hair, often sporting facial hair and glasses, unless you specify otherwise. All the men it generates might as well be brothers. I reproduced this on two ChatGPT accounts with clean memory.
By contrast, its drawings of itself when it does not depict itself as a person are far more diverse: a wide range of robot designs and abstract humanoids, often featuring the OpenAI logo as a head or the word "GPT" on the chest.
I think the labs might well be rational in focusing on this sort of "handheld automation", just to enable their researchers to code experiments faster and in smaller teams.
My mental model of AI R&D is that it can be bottlenecked roughly by three things: compute, engineering time, and the "dark matter" of taste and feedback loops on messy research results. I can certainly imagine a model of lab productivity where the best way to accelerate is improving handheld automation for the entirety of 2025. Say, the core paradigm is fixed; but inside that paradigm, the research team has more promising ideas than they have time to implement and try out on smaller-scale experiments; and they really do not want to hire more people.
If you consider the AI lab as a fundamental unit that wants to increase its velocity and works on whatever makes its models faster, it's plausible that the lab is aware of how bad model performance is on research taste, and is still not making a mistake by ignoring your "dark matter" right now. They will work on it when they are faster.
N = #params, D = #data
Training compute = const * N * D
Forward pass cost = c * N, and each forward pass reveals R bits about the weights; assume R = Ω(1) on average
Now, thinking purely information-theoretically:
Model stealing compute = (fp16 * N / R) * (c * N) ~ const * c * N^2
If training is compute-optimal and α = β in the Chinchilla scaling law (so D ~ N):
Model stealing compute ~ Training compute
For significantly overtrained models (D >> N):
Model stealing compute << Training compute
Typically:
Total inference compute ~ Training compute
=> Model stealing compute << Total inference compute
Caveats:
- A prior on the weights reduces the stealing compute; the same holds if you only want to recover some information about the model (e.g. enough to create an equally capable one)
- Of course, if the model produces much less than 1 token per forward pass, then model stealing compute is very large
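As a numeric sanity check on the scaling argument above, here is a minimal sketch; the constants (6ND training FLOPs, 2N FLOPs per generated token, fp16 weights, R = 1 bit revealed per token) are my assumptions for illustration, not claims from a specific paper:

```python
def training_compute(N, D):
    # Standard approximation: ~6 FLOPs per parameter per training token.
    return 6 * N * D

def stealing_compute(N, R=1.0):
    # Need to recover ~16*N bits (fp16 weights); each generated token costs
    # ~2*N FLOPs and reveals ~R bits, so ~16*N/R tokens are required.
    return (16 * N / R) * (2 * N)

N = 70e9  # parameter count

# Chinchilla-optimal (D ~ 20*N): stealing and training both scale as N^2.
print(stealing_compute(N) / training_compute(N, D=20 * N))    # ~0.27

# Heavily overtrained (D ~ 2000*N): stealing << training.
print(stealing_compute(N) / training_compute(N, D=2000 * N))  # ~0.003
```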
The one you linked doesn't really rhyme. The meter is quite consistently decasyllabic, though.
I find it interesting that the collection has a fairly large number of songs about World War II. Seems that the "oral songwriters composing war epics" meme lived until the very end of the tradition.
With Greedy Coordinate Gradient (GCG) optimization, when trying to force argmax-generated completions, using an improved objective function dramatically increased our optimizer’s performance.
Do you have some data / plots here?
Oh, so you have prompt_loss_weight=1; got it. I'll cross out my original comment. I am now not sure what the difference is between training on {"prompt": A, "completion": B} vs {"prompt": "", "completion": AB}, and why the post emphasizes it so much.
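For reference, here is a minimal sketch of how I picture the two setups, assuming a standard causal-LM loss where label -100 means "ignore this token"; with prompt_loss_weight=1 the prompt tokens are not masked, so the two formats should yield identical losses:

```python
import torch

def build_labels(prompt_ids, completion_ids, prompt_loss_weight):
    """Next-token-prediction labels over [prompt; completion].
    Only handles weights 0 and 1; a fractional weight would need a
    per-token weight vector instead of -100 masking."""
    input_ids = prompt_ids + completion_ids
    labels = list(input_ids)
    if prompt_loss_weight == 0:
        # Mask prompt tokens: -100 is ignored by the cross-entropy loss.
        labels[: len(prompt_ids)] = [-100] * len(prompt_ids)
    return torch.tensor(input_ids), torch.tensor(labels)

# {"prompt": A, "completion": B} with prompt_loss_weight=1 ...
ids_a, labels_a = build_labels([1, 2, 3], [4, 5], prompt_loss_weight=1)
# ... vs {"prompt": "", "completion": AB}
ids_b, labels_b = build_labels([], [1, 2, 3, 4, 5], prompt_loss_weight=1)

assert torch.equal(ids_a, ids_b) and torch.equal(labels_a, labels_b)
```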
On the apocalyptic residual, or: the secret third way to save the world.
Bostrom's Vulnerable World Hypothesis says our world is vulnerable. The most commonly discussed threat model is his "Type-1 vulnerabilities": destructive technologies such as bioengineered pandemics.
There are three steps required for our world to be destroyed in this way:
If all three are satisfied at any point in time, we die.
Most work on AI safety has been focused on (1) and (2). To be precise, here is a non-exhaustive list of efforts towards mitigating the vulnerabilities the hypothesis describes, focused on biorisk:
I cannot think of a single person or paper that has recently worked on managing the apocalyptic residual (the number of people who would want to destroy the world and have enough agency to try to do so).
Bostrom calls this "preference modification" and argues it is infeasible. Here are relevant quotes from the VWH paper:
"Radically re-engineering human nature on a fully global scale" seems unrealistic until we realize we're about to do totally realistic things such as "automate all remote work" on the same timeframe; that technology is re-engineering human nature on a global scale; and that many people's lives are increasingly a side quest to their main task of being fed video content from variously sized screens. Status-disaffected men used to go to war or become cult leaders or something; soon they will all waste their lives on video games, their attention on short-form video feeds, their salaries on parlay bets on Robinhood or parasocial relationships with OnlyFans models; living their lives vicariously through the Content they consume and egregores that control their brain. If the forces of technocapital can coordinate to do anything, they can coordinate on preference modification of every monetizable eyeball they can reach, to make their lives as inane and inconsequential and as glued to the screen as possible.
Now, of course, the timelines of preference modification and dangerous technology might not match. Market forces might only drain the agency of a fraction of the relevant population by the critical period. But it's worth thinking about ways in which the impact of AGI actually expands society's toolset to fight x-risk.