It seems like large organizations make the majority of progress on the frontier, but smaller teams are close behind and able to reproduce impressive results with dramatically fewer resources.

I'd be surprised if that latter part continued for several more years. At least for ImageNet, compute cost in dollars has not been a significant constraint (I expect the cost of researcher time far dominates it, even for the non-optimized implementations), so it's not that surprising that researchers don't put in the work needed to make it as fast and cheap as possible.

I think the more important hypothesis is:

If something is just barely possible today with massive compute, maybe it will be possible with much much less compute very soon (e.g. <1 year).

I don't think it really matters for the argument whether it's a "small team" that improves the compute-efficiency, or the original team, or a different big team, or whatever. Just that it happens.

Anyway, is the hypothesis true? I would say it's very likely if we're talking about a pioneering new algorithm, because with pioneering new algorithms, we don't yet have best practices.

SoerenMind: Small teams can also get cheap access to impressive results by buying them from large teams. A large team should set a low price if it has competitors who also sell to many customers.
rohinmshah: Agreed, and this also happens "for free" with openness norms, as the post suggests. I'm not strongly disagreeing with the overall thesis of the post, just the specific point that small teams can reproduce impressive results with far fewer resources.

AGI in a vulnerable world

by abergal (AI Impacts) · 26th Mar 2020



I’ve been thinking about a class of AI-takeoff scenarios where a very large number of people can build dangerous, unsafe AGI before anyone can build safe AGI. This seems particularly likely if:

  • It is considerably more difficult to build safe AGI than it is to build unsafe AGI.
  • AI progress is software-constrained rather than compute-constrained.
  • Compute available to individuals grows quickly and unsafe AGI turns out to be more of a straightforward extension of existing techniques than safe AGI is.
  • Organizations are bad at keeping software secret for a long time, i.e. it’s hard to get a considerable lead in developing anything.
    • This may be because information security is bad, or because actors are willing to go to extreme measures (e.g. extortion) to get information out of researchers.

Another related scenario is one where safe AGI is built first, but isn’t defensively advantaged enough to protect against harms by unsafe AGI created soon afterward.

The intuition behind this class of scenarios comes from an extrapolation of what machine learning progress looks like now. It seems like large organizations make the majority of progress on the frontier, but smaller teams are close behind and able to reproduce impressive results with dramatically fewer resources. I don’t think the large organizations making AI progress are (currently) well-equipped to keep software secret if motivated and well-resourced actors put effort into acquiring it. There are strong openness norms in the ML community as a whole, which means knowledge spreads quickly. I worry that there are strong incentives for progress to continue to be very open, since decreased openness can hamper an organization’s ability to recruit talent. If compute available to individuals increases a lot, and building unsafe AGI is much easier than building safe AGI, we could suddenly find ourselves in a vulnerable world.

I’m not sure if this is a meaningfully distinct or underemphasized class of scenarios within the AI risk space. My intuition is that there is more attention on incentive failures among a small number of actors, e.g. via arms races. I’m curious for feedback on whether many-people-can-build-AGI is a class of scenarios we should take seriously and, if so, what things society could do to make them less likely, e.g. invest in high-effort info-security and secrecy work. AGI development seems much more likely to go existentially badly if more than a small number of well-resourced actors are able to create AGI.

By Asya Bergal
