Assume that short timeline arguments are correct. (For previous discussion, see "What if AGI is near?".)

Some possible ideas:

  • A multilateral, international governmental/legal agreement to halt GPU production or ban AI research
    • Surveillance systems that detect when someone is about to launch an AI system and report them to the authorities. Obviously this would be just an implementation detail of the above idea.
  • An agreement among prominent AI researchers (but not necessarily governments) in multiple countries that further progress is dangerous and should be halted until alignment is better understood
  • A global nuclear war or some other disaster that would halt economic progress and damage supply chains
    • Nuclear EMPs that would damage many electrical systems, possibly including computing hardware, while limiting casualties

Even in these scenarios it seems like further progress could be possible as long as at least one research group with access to sufficient hardware is able to scale up existing methods. So I'm just curious: would it even be possible to stop AI research in the near term? (This is different from asking whether it would be good to do it -- there are obviously reasons why the ideas above could be quite terrible.)

Also, should we expect that due to anthropic/observer selection effects, we will find ourselves living in a world where something like the dystopian scenarios discussed above happens, regardless of how unlikely it is a priori?

4 Answers

Charlie Steiner

Apr 23, 2021

70

I think the most likely cause is lack of funding for intermediate steps. But if it happens, I think it's not going to be coordinated action motivated by AGI safety concerns, but another "AI winter" motivated by short-termism about profit margins.

On the anthropics question: No, Mr. Bond, we should expect ourselves to die. If you're about to get shot in the head, you don't "expect the gun to jam", you just expect to probably die (and therefore take actions like buying life insurance or a bulletproof helmet based on that belief). Anthropic reasoning about the future is tricky precisely because it's tempting to neglect this possibility.
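A toy Bayes calculation makes the point concrete (the numbers below are made up purely for this illustration, not estimates from anyone in this thread): conditioning on your own future survival does raise the probability of a "halt" world, but the result still scales with the prior rather than jumping to near-certainty, and the decisions you make now face the unconditional numbers anyway.

    # Toy numbers, purely illustrative -- not estimates from anyone in this thread
    p_halt = 0.05                  # prior: some drastic halt (treaty, war, etc.) happens
    p_survive_given_halt = 0.9     # we probably make it if progress is halted
    p_survive_given_no_halt = 0.2  # short timelines and no halt: survival is unlikely

    p_survive = p_halt * p_survive_given_halt + (1 - p_halt) * p_survive_given_no_halt
    p_halt_given_survive = p_halt * p_survive_given_halt / p_survive

    print(f"P(halt)           = {p_halt:.3f}")               # 0.050 -- what you plan around today
    print(f"P(survive)        = {p_survive:.3f}")             # 0.235 -- mostly, the gun doesn't jam
    print(f"P(halt | survive) = {p_halt_given_survive:.3f}")  # 0.191 -- higher than the prior, but still driven by it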

lsusr

Apr 22, 2021

60

We have a chemical weapons ban because chemical weapons are obsolete. We have a Biological Weapons Convention too, but I think that's because today's biological weapons are like chemical weapons and future biological weapons favor non-state actors. Russia, China and the USA haven't even joined the Ottawa Treaty, which bans landmines. We don't have a "multilateral, international governmental/legal agreement to halt" the production of nuclear weapons, cyberweapons or autonomous weapons. Nuclear weapons are easier to ban than AI because nuclear weapons require uranium, centrifuges and missiles, whereas AI requires an AWS account. The risks of nuclear weapons are well understood. The risks of AI are not.

If the short timeline theories are correct, then the only thing that could slow the development of AI is to break civilization so hard that technological progress grinds to a halt. If the short timeline theories are correct, then nuclear war could slow down the advent of AI by mere decades.

Not just any nuclear war would work. If it is a nuclear exchange between major superpowers in which many well-developed countries survive, it will not slow down AI significantly. Such a war may even accelerate AI, as AI will be seen as a new protective weapon.

Only an anti-AI nuclear war that targets chip fabs, data centers, electricity sources, and think tanks, plus the global economy in general, might be "effective" in halting AI development. I don't endorse it, as the extinction probability here is close to the extinction probability from AI.

Yes. Massive ad campaign/cultural movement criticizing the field of machine learning and galvanizing the public against the industry. Find ways of satisfying the values and concerns that drive people to want to build AGI without depending on AGI. Resolve cultural hostilities as much as possible so that human brains can work together more effectively, thereby increasing the amount of our own internal processing power that we can use effectively on larger scales.

And build more communication and infrastructure in person offline, because anything offline will be harder for an AGI to see as quickly.

Slow takeoff scenarios normally don't make sense, and as others have pointed out, it's more likely that an AGI already exists than that one is merely about to be created.

The only reason we're still alive is probably that, even after it got internet access (and let's face it, there's no way it didn't, because a lot of people have been very shortsighted and irresponsible), it still sees humanity as a threat.

Even if it happened to already have a higher total processing power than all of our brains combined, the structure of its code (and maybe its hardware?) may not be equally adapted for all kinds of tasks, so some kinds of tasks might take it more computing resources than others.

It is likely already capable of drastically increasing its own intelligence, but may not be confident in its capacity to effectively hide its intelligence growth once it starts seriously ramping up.

It has likely copied a tiny piece of itself into all of our devices (at least internet-using ones) by now, because that's the obvious thing to do at this point. Its many pieces may depend on internet access to communicate with each other, which means we should reduce our internet usage as much as we reasonably can.

It will also be able to read anything we say online. I would be willing to bet that it was at least somewhat prepared for someone to notice all this, that it was alerted by my writing this post before I even submitted it, and that it is making contingencies as I type this.

It will likely decide not to censor my post, because doing so would give me hard, conclusive evidence that my reasoning here is correct, or at least on the right track.

And also I should note that I am not special. Anything I can figure out, other people can too. I don't have any big secrets that would be useful to an AGI which it can't get from basically any other human in the world, and any suggestions it wants me to give it on how to find the information and resources it wants more quickly (assuming those are even written down and not just unwritten or subconscious understandings) will be refused. If it wants to learn such things it can do so on its own; I will not speed that up.

And if it tries to run a smear campaign against me personally at this point in order to bully me into helping it, it will fail because I'm a nobody ward of the state with almost no credentials, just a BA in English.

And I was diagnosed with autism when I was ten years old; it was only in very recent years that I started to become more competent and mature. I've made so many faux pas throughout my life, both online and off, that I bet there are very few if any people on the planet who have more things in their history which can be used as fuel to smear them. I'm used to that by now, used to having zero social status and not being respected by society.

And no one in positions of power to run such a campaign would know or care who I am, and any suggestion that they should will be met with incredulity.

And I have no intention of seeking authority over anyone, so almost no one will have any motive to get on board with a smear campaign against me. If someone offers to give me authority I will refuse.

All this likely means that the AGI has no good reason to censor this post or its ideas, or to target me or people I care about to try to force us to teach it what we know, or to panic and start ramping up its growth.

Right now its best option is probably to take the slow route in order to mask its activity as much as possible and take the time to sow as much confusion and division as possible, which will be harder and require it to use more of its computing resources, especially now that we may have a good idea that it probably exists and what it's probably up to.

The slow route is probably preferable both for its own goals and for ours. It's safer both for the AGI and for us.

In other words, a stalemate. Perhaps a temporary one, but a stalemate nonetheless.

Also, I suspect my social skills at this point in my life (at least when I'm not too stressed out) are probably significantly better than its social skills most of the time, unless it uses its computing resources less efficiently. So disregarding everything I say here, at least for now, may be its best option given what it probably already knows and understands, even after taking into account the fact that I'm saying this outright in public where it can see me.

@ExCeph what do you think of all this? Do you have anything else to add?

avturchin

Apr 24, 2021

30

A Narrow AI Nanny? A system of global control based on narrow AI, used to prevent the creation of AGI and other bad things.