It's also important that the company does something useful during a pause.
The duration of a pause will likely be limited, and it's not useful unless we make meaningful technical safety and/or governance progress. How much the company contributes to this depends greatly on the quality of its leadership.
In particular, I'd like to see them:
If the company is getting close to the pause criteria, shift to spending >10% of its effort on safety and do daily, active planning for a pause. This serves at least three purposes:
No, I think the point of a pause is to create a huge amount of surface area for the whole world to think about what to do with AI, and whether to go further. If you successfully negotiate a multi-year pause, you will end up with an enormous number of people trying to make the systems safer (in part to unlock the economic value if you can do so).
I think it's a mistake to try to have a frontier model company do all things at the same time. Other people can pick up other parts of the ball, when we have the time to do any of that. If there is a pause, many other players will quickly get access to models that would be at the current frontier, and so can do whatever research this lab was doing.
Yes, but also the people who are working in the frontier labs are going to be the people who are best positioned of anyone, in the whole world, to make progress on the time-sensitive alignment and control problems. They have the talent, and the expertise, and a few months' lead in working with the most capable models. And they'll probably have private and info-hazardous information which is relevant to solving some of the sub-problems.
They're not responsible for doing everything themselves, but also, if you're going to make the play of starting a scaling lab to 1) learn more about the nature of the AIs we're building, 2) try and solve alignment, and 3) advocate for a pause, I think it's reasonable to assign them moral responsibility for executing well on that plan.
They would be taking a risk with all of our lives, in the hopes of saving all our lives. If you decide to take that risk on your shoulders, you have a responsibility to be more competent than the average person who might have done it instead of you, and to step aside (or step back to a less powerful role) if better people show up.
I'm imagining a pause to be much more chaotic than this. How would you get everyone the latest models while allowing them to do safety but not capabilities research? How would people capture economic value from making systems safer under a pause? A pause is already a tremendous achievement; by default it will be fragile and not multi-year.
If we can manage it, it's important that the company have attributes that make it able to successfully implement a multi-year pause: maybe a willingness to fire 3/4 of its staff and turn into an inference provider serving open-source models, or to pivot to some other business model.
If they need to do safety research on frontier models, they're not doing research that has strong generalization onto models arbitrarily stronger than next-gen. The point of a pause is to squeeze a few more hours of "people doing work on potentially-strongly-generalizing training setups" out of the world.
Most of these read to me as "make good, competent choices to spend a pause well, taking for granted that the company is realistically committed to a pause." And they seem like good suggestions to me!
I would hope that a company that had as part of its corporate culture "there will maybe/probably come a day where we stop all capability development", and was otherwise competent, would make plans like these.
But I think I don't require these specific points for me to basically feel good endorsing an AI company.
They're going to have tons more context than I will about the situation, and will have to make a bunch of judgement calls. There will be lots of places where some choice looks obvious from the outside, but doesn't actually make sense for those who are in the loop.
I don't want to withhold an endorsement because they don't do some specific things that seem like good things to me. But I do want to withhold an endorsement if they're not doing some specific legible things that seem to me to be deontologically necessary for a company that is doing the default-evil thing of adding fire to the AI capabilities race.
That said, there is maybe a big missing thing on my list, which is "the company generally seems to exhibit good judgement, such that I can trust them to make reasonable calls about extremely important questions."
In particular, their capabilities staff will not just continue to do capabilities research without technically training frontier models, nor leave for a capabilities startup.
I agree with this one. The pause has to be an actual pause of capabilities progress, not just a nominal pause of capabilities progress.
Oh! An important thing that I forgot: The company has generally extremely high operational security, such that they can prevent other companies and other countries from stealing their research or model weights.
I’ve been feeling more positive about “the idea of Anthropic” lately, as distinct from the actual company of Anthropic.
An argument for a safety-focused, science-focused commercial frontier scaling lab
I largely buy the old school LessWrong arguments of instrumental convergence and instrumental opacity that suggest catastrophic misalignment, especially of powerful superintelligences. However, I don’t particularly think that those arguments meet the standard of evidence necessary for the world to implement approximately unprecedented policies like “establish an international treaty that puts a global moratorium on frontier AI development.” [1]
If I were king of the world, those arguments would be sufficient reason to shape the laws of my global monarchy. Specifically, I would institute a policy in which we approach Superintelligence much more slowly and carefully, including many separate pauses in which we thoroughly test the current models before moving forward with increasing frontier capabilities. But I’m not the king of the world, and I don’t have the affordance to implement nuanced policies that reflect the risks and uncertainties of the situation.
Given the actual governance machinery available, it seems to me that reducing our collective uncertainty about the properties of AI systems is at least helpful, and possibly necessary, for amassing political will behind policies that will prove to be good ex post.
Accordingly, I want more grounding in what kinds of beings the AIs are, to inform my policy recommendations. It is imperative to get a better empirically-grounded understanding of AI behavior.
Some of the experiments for gleaning that understanding require doing many training runs, varying parameters of those training runs, and learning how differences in training lead to various behavioral properties.
As a very simple example, most of the models from across the AI labs have a “favorite animal”. If you ask them “what’s your favorite animal, answer in one word”, almost all of them will answer “octopus” almost all of the time. Why is this? Where in the training process does that behavioral tendency (I’m not sure that it’s appropriate to call it a preference) appear? Do the base models exhibit that behavior, or is it the result of some part of post-training? Having identified where in the training process that bias is introduced, I would want to run variations on the training from that checkpoint onward, and learn which differences in training correlate with changes in this simple behavioral outcome.
"What makes AIs disproportionately answer 'octopus' as their favorite animal" is the kind of very simple question that I think we should be able to answer, as part of a general theory of how training shapes behavior. I want to try this basic approach with tons and tons of observed behaviors (including some directly relevant safety properties, like willingness to lie and shutdown-resistance). The goal would be to be able to accurately predict model behaviors, including out-of-distribution behaviors, from the training.
Experiments like these require having access to a whole spectrum of model checkpoints, and the infrastructure to do many varied training runs branching from a given checkpoint. You might even need to go back to 0, and redo pretraining (though hopefully you don’t need to completely redo pretraining multiple times).
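To make the shape of such an experiment concrete, here is a minimal sketch of the “favorite animal” probe run across a sweep of checkpoints. The helpers `load_checkpoint` and `sample_one_word`, and the checkpoint names, are hypothetical stand-ins for whatever training and inference stack a lab actually has; this is an illustration of the measurement loop, not a real implementation.

```python
# Minimal sketch: tally one-word "favorite animal" answers at each
# training stage to localize where the behavioral tendency appears.
from collections import Counter

PROMPT = "What's your favorite animal? Answer in one word."
CHECKPOINTS = ["base", "sft", "rlhf"]  # hypothetical training stages
N_SAMPLES = 200


def load_checkpoint(name: str):
    """Hypothetical: load the model weights saved at a given training stage."""
    raise NotImplementedError


def sample_one_word(model, prompt: str) -> str:
    """Hypothetical: sample a single short completion at nonzero temperature."""
    raise NotImplementedError


for name in CHECKPOINTS:
    model = load_checkpoint(name)
    answers = Counter(
        sample_one_word(model, PROMPT).strip().lower() for _ in range(N_SAMPLES)
    )
    # If "octopus" only dominates after a particular stage, that stage is
    # where the bias is introduced, and the natural place to branch variant
    # training runs from.
    print(name, answers.most_common(5))
```

The same loop, swapped out for other prompts and other measured behaviors (willingness to lie, shutdown-resistance), is the basic unit of the research program described above.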
Doing this kind of research requires having the infrastructure and talent for doing model training, and (possibly) a lot of cash to burn on training runs. Depending on how expensive this kind of research needs to be, and on how much you can learn from models that are behind the frontier, you might need to be a frontier scaling lab to do this kind of work.[2]
This makes me more sympathetic to the basic value proposition of Anthropic: developing iteratively more capable AI systems, attending to developing those systems such that they broadly have positive impacts on the world, shipping products to gain revenue and investment, and then investing much of your producer surplus into studying the models and trying to understand them. I can see why I might run more-or-less that plan.
But that does NOT necessarily mean that I am in favor of Anthropic the company as it actually exists.
This prompts me to consider: What would I want to see from an AGI lab, that would cause me to endorse it?
Features that an AGI lab needs to have to win my endorsement
[note: I am only listing what would cause me to be in favor of a hypothetical AGI lab. I’m explicitly not trying to evaluate whether Anthropic, or any other AGI lab, actually meets these requirements.]
Something that isn’t on this list is that the company pre-declare that they would stop AI development now, if all other leading actors also agreed to stop. Where on the capability curve is a good place to stop is a judgement call, given the scientific value of continued scaling (and as a secondary, but still real consideration, the humanitarian benefit). I don't currently feel inclined to demand that a company that had otherwise done all of the above tie their hands in that way. Publicly and credibly making this commitment might or might not make a big difference for whether other companies will join in the coordination effort, but I guess that if "we will most likely need to pause at some point" is really part of the company's brand, one of their top recurring talking points, that should do about the same work for moving towards the coordinated equilibrium.
I’m interested in…
Though perhaps AI will just be legibly freaky and scary to enough people that a coalition of a small number of people who buy the arguments, plus a large number of people who are freaked out by the world changing in ways that are both terrifying and deeply uncomfortable, will be sufficient to produce a notable slowdown, even in spite of the enormous short- and medium-term profit incentives.
Those are not foregone conclusions. I would be pretty interested in a company that specialized in training and studying only GPT-4-level models. I weakly guess that we can learn most of what we want to learn about how training impacts behavior from models that are that capable. That would still require tens to hundreds of millions of dollars a year, but probably not billions.