[ Question ]

Is there anything that can stop AGI development in the near term?

by Wulky Wilkinsen · 1 min read · 22nd Apr 2021 · 4 comments


AI Risk · AI Timelines · AI Governance · AI · Frontpage

Assume that short timeline arguments are correct. (For previous discussion, see "What if AGI is near?".)

Some possible ideas:

  • A multilateral, international governmental/legal agreement to halt GPU production or ban AI research
    • Surveillance systems that detect when someone is about to launch an AI system and report them to the authorities. Obviously this would be just an implementation detail of the above idea.
  • An agreement among prominent AI researchers (but not necessarily governments) in multiple countries that further progress is dangerous and should be halted until alignment is better understood
  • A global nuclear war or some other disaster that would halt economic progress and damage supply chains
    • Nuclear EMPs that would damage many electrical systems, possibly including computing hardware, while limiting casualties

Even in these scenarios, it seems like further progress could be possible as long as at least one research group with access to sufficient hardware is able to scale up existing methods. So I'm curious: would it even be possible to stop AI research in the near term? (This is a different question from whether it would be good to do so -- there are obviously reasons why the ideas above could be quite terrible.)

Also, should we expect that due to anthropic/observer selection effects, we will find ourselves living in a world where something like the dystopian scenarios discussed above happens, regardless of how unlikely it is a priori?


3 Answers

I think the most likely stopper is a lack of funding for the intermediate steps. But if that happens, I don't think it will be coordinated action motivated by AGI safety concerns; it will be another "AI winter" motivated by short-termism about profit margins.

On the anthropics question: No, Mr. Bond, we should expect ourselves to die. If you're about to get shot in the head, you don't "expect the gun to jam", you just expect to probably die (and therefore take actions like buying life insurance or a bulletproof helmet based on that belief). Anthropic reasoning about the future is tricky precisely because it's tempting to neglect this possibility.

We have a chemical weapons ban because chemical weapons are obsolete. We have a Biological Weapons Convention too, but I think that's because today's biological weapons are like chemical weapons, while future biological weapons favor non-state actors. Russia, China, and the USA haven't even joined the Ottawa Treaty, which bans landmines. We don't have a "multilateral, international governmental/legal agreement to halt" the production of nuclear weapons, cyberweapons, or autonomous weapons. Nuclear weapons are easier to ban than AI because nuclear weapons require uranium, centrifuges, and missiles, whereas AI requires an AWS account. The risks of nuclear weapons are well understood. The risks of AI are not.

If the short timeline theories are correct, then the only thing that could slow the development of AI is breaking civilization so hard that technological progress grinds to a halt. If the short timeline theories are correct, then even nuclear war would slow the advent of AI by mere decades.

Not just any nuclear war will work. If it is a nuclear exchange between the major superpowers but many well-developed countries survive, it will not slow AI down significantly. Such a war might even accelerate AI, since AI would then be seen as a new protective weapon.

Only an anti-AI nuclear war that targeted chip fabs, data centres, electricity sources, and think tanks, plus the global economy in general, might be "effective" in halting AI development. I don't endorse it, as the extinction probability here is close to the extinction probability from AI itself.

Narrow AI Nanny? A system of global control based on narrow AI, used to prevent the creation of AGI and other dangerous things.