I’ve been thinking about a class of AI-takeoff scenarios where a very large number of people can build dangerous, unsafe AGI before anyone can build safe AGI. This seems particularly likely if:

  • It is considerably more difficult to build safe AGI than it is to build unsafe AGI.
  • AI progress is software-constrained rather than compute-constrained.
  • Compute available to individuals grows quickly and unsafe AGI turns out to be more of a straightforward extension of existing techniques than safe AGI is.
  • Organizations are bad at keeping software secret for a long time, i.e. it’s hard to get a considerable lead in developing anything.
    • This may be because information security is bad, or because actors are willing to go to extreme measures (e.g. extortion) to get information out of researchers.

Another related scenario is one where safe AGI is built first, but isn’t defensively advantaged enough to protect against harms by unsafe AGI created soon afterward.

The intuition behind this class of scenarios comes from an extrapolation of what machine learning progress looks like now. It seems like large organizations make the majority of progress on the frontier, but smaller teams are close behind and able to reproduce impressive results with dramatically fewer resources. I don’t think the large organizations making AI progress are (currently) well-equipped to keep software secret if motivated and well-resourced actors put effort into acquiring it. There are strong openness norms in the ML community as a whole, which means knowledge spreads quickly. I worry that there are strong incentives for progress to continue to be very open, since decreased openness can hamper an organization’s ability to recruit talent. If compute available to individuals increases a lot, and building unsafe AGI is much easier than building safe AGI, we could suddenly find ourselves in a vulnerable world.

I’m not sure if this is a meaningfully distinct or underemphasized class of scenarios within the AI risk space. My intuition is that there is more attention on incentive failures within a small number of actors, e.g. via arms races. I’m curious for feedback about whether many-people-can-build-AGI is a class of scenarios we should take seriously and, if so, what society could do to make them less likely, e.g. invest in high-effort info-security and secrecy work. AGI development seems much more likely to go existentially badly if more than a small number of well-resourced actors are able to create AGI.

By Asya Bergal

21 comments

I think many-people-can-build-AGI scenarios are unlikely because before they happen, we'll be in a situation where a-couple-people-can-build-AGI, and probably someone will build one at that point. And once there is at least one AGI running around, things will either get a lot worse or a lot better very quickly.

I think many-people-can-build-AGI scenarios are still likely enough to be worth thinking about, though, because they could happen if there is a huge amount of hardware overhang (and insufficient secrecy about AGI-building techniques) or if there is a successful-for-some-time policy effort to ban or restrict AGI research.

I think the second scenario you bring up is also interesting. It's sorta a rejection of my "things will either get a lot worse or a lot better very quickly" claim above. I think it is also plausible enough to think more about.

Hmm, I find it plausible that, on average, p(build unaligned AGI | can build unaligned AGI) is about 0.01, which implies that unaligned AGI gets built around the time there are ~100 actors that can build AGI, which seems to fit many-people-can-build-AGI.

The 0.01 probability could happen because of regulations / laws, as you mention, but also if the world has sufficient common knowledge of the risks of unaligned AGI (which seems not implausible to me, perhaps because of warning shots, or because of our research, or because of natural human risk aversion).
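
A back-of-the-envelope sketch of the arithmetic above, under the assumption that each capable actor decides independently with the same per-actor probability; the 0.01 is the commenter's illustrative figure, not a measured one:

```python
# Rough reconstruction of the "~100 actors" arithmetic: expected number of
# unaligned-AGI builders as the pool of capable actors grows, assuming an
# independent per-actor probability p. Both p and the independence assumption
# are illustrative.
p = 0.01  # hypothetical per-actor probability of building unaligned AGI, given the capability

for n_actors in [1, 10, 100, 1000]:
    expected_builders = n_actors * p
    prob_at_least_one = 1 - (1 - p) ** n_actors
    print(f"actors={n_actors:>4}  E[builders]={expected_builders:6.2f}  "
          f"P(at least one)={prob_at_least_one:.2f}")
```

At roughly 100 capable actors the expected number of builders reaches one (and the chance of at least one builder is about 63%), which is where the ~100-actors figure comes from.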

I guess you are more optimistic than me about humanity. :) I hope you are right!

Good point about the warning shots leading to common knowledge thing. I am pessimistic that mere argumentation and awareness-raising will be able to achieve an effect that large, but combined with a warning shot it might.

But I am skeptical that we'll get sufficiently severe warning shots. I think that by the time AGI gets smart enough to cause serious damage, it'll also be smart enough to guess that humans would punish it for doing so, and that it would be better off biding its time.

> I guess you are more optimistic than me about humanity. :) I hope you are right!

Out of the two people I've talked to who considered building AGI an important goal of theirs, one said "It's morally good for AGI to increase complexity in the universe," and the other said, "Trust me, I'm prepared to walk over bodies to build this thing."

Probably those weren't representative, but this "2 in 2" experience does make me skeptical about the "1 in 100" figure.

(And those strange motivations I encountered weren't even factoring in doing the wrong thing by accident – which seems even more common/likely to me.) 

I think some people are temperamentally incapable of being appropriately cynical about the way things are, so I find it hard to decide if non-pessimistic AGI researchers (of which there are admittedly many within EA) happen to be like that, or whether they accurately judge that people at the frontier of AGI research are unusually sane and cautious.

[anonymous]
> And once there is at least one AGI running around, things will either get a lot worse or a lot better very quickly.

I don't expect the first AGI to have that much influence (assuming gradual progress). Here's an example of what fits my model: there is one giant-research-project AGI that costs $10b to deploy (and maybe $100b to R&D), 100 slightly worse pre-AGIs that cost perhaps $100m each to deploy, and 1m again-slightly-worse pre-AGIs that cost $10k per copy. So at any point in time we have a lot of AI systems that, together, are more powerful than the small number of most impressive systems.
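
A minimal sketch of the tiered model above, using the counts and per-copy deployment costs from the comment; the tier labels are mine, and the script only totals deployment spend per tier (no capability numbers are assumed):

```python
# Total deployment spend per tier in the example: one frontier system,
# 100 slightly worse systems, and 1m much cheaper copies.
tiers = [
    # (label, number of deployed systems, deployment cost per system in dollars)
    ("frontier AGI",      1,         10_000_000_000),
    ("near-frontier",     100,       100_000_000),
    ("commodity pre-AGI", 1_000_000, 10_000),
]

for label, count, cost in tiers:
    print(f"{label:<17}  count={count:>9,}  total deployment spend=${count * cost:,}")
```

Each tier totals $10b of deployed systems in this example, which is one way to see the claim that the many cheaper systems, taken together, plausibly matter as much as the single most impressive one.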

[anonymous]

This reasoning can break if deployment turns out to be very cheap (i.e. low marginal cost compared to fixed cost); then there will be lots of copies of the most impressive system. Then it matters a lot who uses the copies. Are they kept secret and only deployed for internal use? Or are they sold in some form? (E.g. the supplier sells access to its system so customers can fine-tune e.g. to do financial trading.)

I think AGIs which are copies of each other -- even AGIs which are built using the same training method -- are likely to coordinate very well with each other even if they are not given information about each other's existence. Basically, they'll act like one agent, as far as deception and treacherous turns and decisive strategic advantage are concerned.

EDIT: Also, I suspect this coordination might extend further, to AGIs with different architectures also. Thus even the third-tier $10K AGIs might effectively act as co-conspirators with the latest model, and/or vice versa.

[anonymous]
> Also, I suspect this coordination might extend further, to AGIs with different architectures also.

Why would you suppose that? The design space of AI is incredibly large and humans are clear counter-examples, so the question one ought to ask is: Is there any fundamental reason an AGI that refuses to coordinate will inevitably fall off the AI risk landscape?

[anonymous]

I agree that coordination between mutually aligned AIs is plausible.

I think such coordination is less likely in our example because we can probably anticipate and avoid it for human-level AGI.

I also think there are strong commercial incentives to avoid building mutually aligned AGIs. You can't sell (access to) a system if there is no reason to believe the system will help your customer. Rather, I expect systems to be fine-tuned for each task, as in the current paradigm. (The systems may successfully resist fine-tuning once they become sufficiently advanced.)

I'll also add that two copies of the same system are not necessarily mutually aligned. See for example debate and other self-play algorithms.

I agree about the strong commercial incentives, but I don't think we will be in a context where people will follow their incentives. After all, there are incredibly strong incentives not to make AGI at all until you can be very confident it is perfectly safe -- strong enough that it's probably not a good idea to pursue AI research at all until AI safety research is much more well-established than it is today -- and yet here we are.

Basically, people won't recognize their incentives, because people won't realize how much danger they are in.

[anonymous]

Hmm, in my model most of the x-risk is gone if there is no incentive to deploy. But I expect actors will deploy systems because their system is aligned with a proxy. At least this leads to short-term gains. Maybe the crux is that you expect these actors to suffer a large private harm (death) and I expect a small private harm (for each system, a marginal distributed harm to all of society)?

[anonymous]

It makes no difference if the marginal distributed harm to all of society is so overwhelmingly large that your share of it is still death.

[anonymous]

I'm using the colloquial meaning of 'marginal' = 'not large'.

I put non-trivial probability mass (>10%) on a relativistically expanding bubble of Xonium (computronium, hedonium, etc.) within 1 second of AGI.

While big jumps are rarer than small jumps, they cover more distance, so it is quite possible we go from a world like this one, except with self-driving cars and a few other narrow AI applications, to something smart enough to bootstrap very fast.

One second is preposterous! It'd take at least a minute to get up to relativistic speeds; keep in mind it'll have to build infrastructure as it goes along, and it'll start off using human-built tools which aren't capable of such speeds. No way it can build such powerful tools with human tools in the space of a second.

I'd be surprised if it managed to convert the surface of the planet in less than 10 minutes, to be honest. It might get to the moon in an hour, and have crippled our ability to fight back within 20 seconds, but it's just intelligent; not magical. Getting to relativistic speeds still requires energy, and Xonium still needs to be made of something.

Ye cannae change the laws of physics, Jim.
[anonymous]

The actual bootstrapping takes months, years or even decades, but it might only take 1 second for the fate of the universe to be locked in.

> It seems like large organizations make the majority of progress on the frontier, but smaller teams are close behind and able to reproduce impressive results with dramatically fewer resources.

I'd be surprised if that latter part continued for several more years. At least for ImageNet, compute cost in dollars has not been a significant constraint (I expect the cost of researcher time far dominates it, even for the non-optimized implementations), so it's not that surprising that researchers don't put in the work needed to make it as fast and cheap as possible. Presumably there will be more effort along these axes as compute costs overtake researcher time costs.

I think the more important hypothesis is:

If something is just barely possible today with massive compute, maybe it will be possible with much much less compute very soon (e.g. <1 year).

I don't think it really matters for the argument whether it's a "small team" that improves the compute-efficiency, or the original team, or a different big team, or whatever. Just that it happens.

Anyway, is the hypothesis true? I would say, it's very likely if we're talking about a pioneering new algorithm, because with pioneering new algorithms, we don't yet have best practices for parallelization, GPU-acceleration, clever shortcuts, etc. etc. On the other hand, if a known, widely-used algorithm is just barely able to do something on the world's biggest GPU cluster, then it might take longer before it becomes really easy and cheap for anyone to do that thing. Like, maybe it will take a couple years, instead of <1 year :-P

[anonymous]

Small teams can also get cheap access to impressive results by buying it from large teams. The large team should set a low price if it has competitors who also sell to many customers.

Agreed, and this also happens "for free" with openness norms, as the post suggests. I'm not strongly disagreeing with the overall thesis of the post, just the specific point that small teams can reproduce impressive results with far fewer resources.