Capabilities being more jagged reduces p(doom), less jagged increases it.
Ceteris paribus, perhaps, but I think the more important factor is that more jagged capabilities imply a faster timeline. In order to be an existential threat, an AI only needs to have one superhuman ability that suffices for victory (be that superpersuasion or hacking or certain kinds of engineering etc), rather than needing to exceed human capabilities across the board.
I don't think either of these possibilities is really justified. We don't necessarily know what capabilities are required to be an existential threat, and probably don't even have a suitable taxonomy for classifying them that maps to real-world risk. What looks to us like conjunctive requirements may be more disjunctive than we think, or vice versa.
"Jagged" capabilities relative to humans are bad if the capability requirements are more disjunctive than we think, since we'll be lulled by low assessments in some areas that we think of as critical but actually aren't.
They're good if high risk requires more conjunctive capabilities than we think, especially if the AIs are jaggedly bad in actually critical areas that we don't even know we should be measuring.
Capabilities being more jagged reduces p(doom)
Do they actually reduce p(doom)? If capabilities end up more and more jagged, would companies adopt neuralese architectures faster or slower?
Consider this largely a follow-up to Friday’s post about a statement aimed at creating common knowledge around it being unwise to build superintelligence any time soon.
Mainly, there was a great question asked, so I took a few-hour shot at writing out my answer. I then close with a few other follow-ups on issues related to the statement.
A Great Question To Disentangle
There are potentially some wires crossed here, but the intent is great.
I went through three steps interpreting this (where p(doom) = probability of existential risk to humanity, either extinction, irrecoverable collapse or loss of control over the future).
All three are excellent and distinct questions, as is a fourth closely related question: the probability that we will be capable of building superintelligence, or sufficiently advanced AI, that creates 10% or more existential risk.
The 18-month timeframe seems arbitrary, but it is a good exercise to ask only within the window of ‘we are reasonably confident that we do not expect an AGI-shaped thing.’
Agus offers his answers to a mix of these different questions, in the downward direction – as in, which things would make him feel safer.
Scott Alexander Gives a Fast Answer
Scott Alexander offers his answer, and I concur that I mostly expect only small updates.
Giving full answers to these questions would require at least an entire long post, but here is what was supposed to be the five-minute version and turned into a few hours:
Question 1: What events would most shift your p(doom | ASI) in the next 18 months?
Quite a few things could move the needle somewhat, often quite a lot. This list assumes we don’t actually get close to AGI or ASI within those 18 months.
The list could go on. This is a complex test and on the margin everything counts. A lot of the frustration with discussing these questions is that different people focus on very different aspects of the problem, both in sensible ways and otherwise.
That’s a long list, so to summarize the most important points on it:
Imagine we have a distribution of ‘how wicked and impossible are the problems we would face if we build ASI, with respect to both alignment and to the dynamics we face if we handle alignment, and we need to win both’ that ranges from ‘extremely wicked but not strictly impossible’ to full Margaritaville (as in, you might as well sit back and have a margarita, cause it’s over).
At the same time as everything counts, the core reasons these problems are wicked are fundamental. Many are technical but the most important one is not. If you’re building sufficiently advanced AI that will become far more intelligent, capable and competitive than humans, by default this quickly ends poorly for the humans.
On a technical level, for largely but not entirely Yudkowsky-style reasons, the behaviors and dynamics you get prior to AGI and ASI are not that informative about what you can expect afterwards, and when they are, it is often in a non-intuitive way, or mostly informs things via your expectations for how the humans will act.
Note that from my perspective, we are here starting the conditional risk a lot higher than 10%. My conditional probability here is ‘if anyone builds it, everyone probably dies,’ as in a number (after factoring in modesty) between 60% and 90%.
My probability here is primarily different from Scott’s (AIUI) because I am much more despairing about our ability to muddle through, or to succeed with an embarrassingly poor plan on alignment and disempowerment, but it is not higher because I am not as despairing as some others (such as Soares and Yudkowsky).
If I were confident that the baseline conditional-on-ASI-soonish risk was at most 10%, then I would still be trying to mitigate that risk, and it would still be humanity’s top problem, but I would understand wanting to continue onward regardless, and I wouldn’t have signed the recent statement.
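As a toy illustration of how these numbers relate (the specific values below are placeholders chosen for arithmetic clarity, not my or Scott’s actual estimates), the conditional number above multiplies against the probability that we end up able to build the thing at all:

```python
# Toy decomposition of unconditional p(doom); all inputs are placeholders.
p_build_asi_soon = 0.5    # placeholder: we prove able to build ASI soonish and do so
p_doom_given_asi = 0.7    # placeholder within the 60%-90% conditional range above
p_doom = p_build_asi_soon * p_doom_given_asi

print(f"unconditional p(doom) with these inputs: {p_doom:.2f}")  # 0.35

# The 10% figure discussed above is a threshold on the conditional term,
# not on the unconditional product.
print(p_doom_given_asi <= 0.10)  # False with these inputs
```

Nothing here depends on these particular numbers; the point is only which term the 10% threshold applies to.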
Question 1a: What would get this risk down to acceptable levels?
In order to move me down enough to think that moving forward would be a reasonable thing to do any time soon, out of anything other than desperation because there was no other option, I would need at least:
In a sufficiently dire race condition, where all coordination efforts and alternatives have failed, of course you go with the best option you have, especially if up against an alternative that is 100% (minus epsilon) to lose.
Question 2: What would shift how much stopping us from creating superintelligence for a potentially extended period would reduce p(doom)?
Everything above will also shift this, since it gives you more or less doom that extra time can prevent. What else can shift the estimate here within 18 months?
Again, ‘everything counts in large amounts,’ but centrally we can narrow it down.
There are five core questions, I think?
One could summarize this as:
I expect to learn new information about several of these questions.
Question 3: What would shift your timelines to ASI (or to sufficiently advanced AI, or ‘to crazy’)?
(My current median time-to-crazy in this sense is roughly 2031, but with very wide uncertainty and error bars, and without the attention I would put on the question if I thought the exact estimate mattered a lot. I don’t feel I would ‘have any right to complain’ if the outcome was very far off from this in either direction, and if a next-cycle model did get there I don’t think we are entitled to be utterly shocked by this.)
This is the biggest anticipated update because it will change quite a lot. Many of the other key parts of the model are much harder to shift, but timelines are an empirical question that shifts constantly.
In the extreme, if progress looks to be stalling out and remaining at ‘AI as normal technology,’ then this would be very good news. The best way to not build superintelligence right away is if building it is actually super hard and we can’t, we don’t know how. It doesn’t strictly change the conditional in questions one and two, but it renders those questions irrelevant, and this would dissolve a lot of practical disagreements.
Signs of this would be: various scaling laws no longer providing substantial improvements, or our ability to scale them running out, especially in coding and research; the curve bending on the METR graph and other similar measures; the systematic failure to discover new innovations; extra work on agent scaffolding showing rapidly diminishing returns and seeming upper bounds; funding required for further scaling drying up due to lack of expected profits or some sort of bubble bursting (or due to a conflict), in a way that looks sustainable; or strong evidence that there are fundamental limits to our approaches, and therefore important things our AI paradigm simply cannot do. And so on.
Ordinary shifts in the distribution of time to ASI come with every new data point. Every model that disappoints moves you back, observing progress moves you forward. Funding landscape adjustments, levels of anticipated profitability and compute availability move this. China becoming AGI pilled versus fast following or foolish releases could move this. Government stances could move this. And so on.
Time passing without news lengthens timelines. Most news shortens timelines. The news item that lengthens timelines is mostly ‘we expected this new thing to be better or constitute more progress, in some form, and instead it wasn’t and it didn’t.’
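As a minimal sketch of that updating logic (a toy model with made-up numbers, not a real forecast), here is why time passing without news shifts mass toward longer timelines:

```python
# Toy Bayesian update on a discrete prior over "years until ASI",
# conditioned on having observed no decisive progress this period.
prior = {2: 0.2, 4: 0.3, 6: 0.3, 10: 0.2}            # placeholder prior
# Assume (arbitrarily) that short-timeline worlds were more likely to have
# already produced visible progress, so "no news" is less likely in them.
p_no_news_given_years = {2: 0.3, 4: 0.6, 6: 0.8, 10: 0.9}

unnorm = {y: prior[y] * p_no_news_given_years[y] for y in prior}
z = sum(unnorm.values())
posterior = {y: p / z for y, p in unnorm.items()}

for y in sorted(posterior):
    print(y, round(prior[y], 2), "->", round(posterior[y], 2))
# Probability mass shifts away from the 2-year bucket toward the longer ones,
# i.e. 'no news' lengthens the median timeline.
```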
To be clear that I am doing this on purpose: there are a few things that I didn’t make explicit, because one of the problems with such conversations is that in some ways we are not ready to have them, as many branches of the scenario tree involve trading off sacred values or making impossible choices, or they require saying various quiet parts out loud. If you know, you know.
That was less of a ‘quick and sloppy’ answer than Scott’s, but still feels very quick and sloppy versus what I’d offer after 10 hours, or 100 hours.
Bonus Question 1: Why Do We Keep Having To Point Out That Building Superintelligence At The First Possible Moment Is Not A Good Idea?
The reason we need letters explaining not to build superintelligence at the first possible moment regardless of the fact that it probably kills us is that people are advocating for building superintelligence regardless of the fact that it probably kills us.
The CEO of Palantir literally said this is an ‘arms race.’ The first rule of an arms race is you don’t loudly tell them you’re in an arms race. The second rule is you don’t win it by building superintelligence as your weapon.
Once you build superintelligence, especially if you build it explicitly as a weapon to ‘determine the rules,’ humans no longer determine the rules. Or anything else. That is the point.
Until we have common knowledge of the basic facts that goes at least as far as major CEOs not saying the opposite in public, job one is to create this common knowledge.
I also enjoyed Tyler Cowen fully Saying The Thing, this really is his position:
That’s right. If you want to say that not building superintelligence as soon as possible is a good idea, first you have to write an 80-page paper on the political economy of a particular implementation of a ban on that idea. That’s it, he doesn’t make the rules. Making a statement would otherwise be irresponsible, so until such time as a properly approved paper comes out on these particular questions, we should instead be responsible by not talking about this, and by going ahead and building superintelligence as quickly as possible.
I notice that a lot of people are saying that humanity has already lost control over the development of AI, and that there is nothing we can do about this, because the alternative to losing control over the future is even worse. In which case, perhaps that shows the urgency of the meddling kids proving them wrong?
Alternatively…
Bonus Question 2: What Would a Treaty On Prevention of Artificial Superintelligence Look Like?
How dare you try to prevent the building of superintelligence without knowing how to prevent this safely, ask the people who want us to build superintelligence without knowing how to do so safely.
Seems like a rather misplaced demand for detailed planning, if you ask me. But it’s perfectly valid and highly productive to ask how one might go about doing this. Indeed, what this would look like is one of the key inputs in the above answers.
One key question is, are you going to need some sort of omnipowerful international regulator with sole authority that we all need to be terrified about, or can we build this out of normal (relatively) lightweight international treaties and verification that we can evolve gradually over time if we start planning now?
The default method one would actually implement is an international treaty, and indeed MIRI’s TechGov team wrote one such draft treaty, although not also an 80-page paper on its political economy. There is also a Financial Times article suggesting we could draw upon our experience with nuclear arms control treaties, which were easier coordination problems but of a similar type.
Will Marshall points out that in order to accomplish this, we would need extensive track-two processes between thinkers over an extended period to get it right. Which is indeed exactly why you can offer templates and ideas, but to get serious you need to first agree to the principle and then work on the details.
Tyler John makes a similar argument that multilateral agreements could work. The argument that ‘everyone would have incentive to cheat’ is indeed the main difficulty, but it is also not a new problem.
What was done academically prior to the nuclear arms control treaties? Claude points me to Schelling & Halperin’s "Strategy and Arms Control" (1961), Schelling’s "The Strategy of Conflict" (1960) and "Arms and Influence" (1966), and Boulding’s "Conflict and Defense" (1962). So the analysis did not get that detailed even then, with a much clearer game board, but certainly there is some work that needs to be done.