Comments

keti · 3y · 10

> yet my probability of success would be absolutely tiny – like 0.01% even if I tried my absolute hardest. That's what I mean when I say that most people would have a near-zero chance. There are maybe a few hundred (??) people in the world who we even need to consider

Could you explain how you come to this conclusion? What do you think your fundamental roadblock would be? Getting the code for AGI or beating everyone else to superintelligence?

 

How many people on the planet do you think meet the following conditions?

  1. Have a > 1% chance of obtaining AGI.
  2. Have malevolent intent.

It's important to remember that there may be quite a few people who would act somewhat maliciously if they took control of AGI, but I bet the vast majority of these people would never even consider trying to take control of the world. Trying to control AGI would just be far too much work and risk for the vast majority of people who want to cause suffering.

However, there may still be a few people who want to harm the world enough to justify trying. They would need to be extremely motivated to cause damage. It's a big world, though, so I wouldn't be surprised if there were a few people like this.

I think that a typical, highly motivated malicious actor would have much higher than 1% probability of succeeding. (If mainstream AI research starts taking security against malicious actors super seriously, the probability of the malicious actors' success would be very low, but I'm not sure it will be taken seriously enough.)

> I disagree. Theft and extortion are the only two (sort of) easy ones on the list imo. Most people can't hack or build botnets at all, and only certain people are in the right place to eavesdrop.

A person might not know how to hack, build botnets, or eavesdrop, but they could learn. I think a motivated, reasonably capable individual could become proficient in all of those things. And they could potentially have decades of training before they would need to use those skills.

keti · 3y · 10

I'm not sure most people would have a near-zero chance of getting anywhere.  

If AGI researchers took physical security super seriously, I bet this would make a malicious actor quite unlikely to succeed. But it doesn't seem like they're doing this right now, and I'm not sure they will start.

Theft, extortion, hacking, eavesdropping, and building botnets are things a normal person could do, so I don't see why they wouldn't have a fighting chance. I've been thinking about how someone could acquire private code from Google or some other organization currently working on AI, and it sounds pretty plausible to me. I'm a little reluctant to go into details here due to information hazards.

What do you think the difficulties are that would make most people have a near-zero chance of getting anywhere? Is it the difficulty of acquiring the code for the AGI? Or of getting a mass of hacked computers big enough to compete with AGI researchers? Both seem pretty possible to me for a dedicated individual.

keti · 3y · 10

Has this been discussed in detail elsewhere? I only saw one other article relating to this.

I'm not sure if a regular psychopath would do anything particularly horrible if they controlled AGI. Psychopaths tend to be selfish, but I haven't heard of them being malicious. At least, I don't think a horrible torture outcome would occur. I'm more worried about people who are actually sadistic.

Could you explain what the 1% chance refers to when talking about a corrupt businessman? Is it the probability that a given businessman could cause a catastrophe? I think the chance would be a lot higher if the businessman actually tried. Such a person could potentially just hire some criminals to do the theft, extortion, or hacking. Do you think such criminals would also be very unlikely to succeed? After all, attackers just need to find a single opening, while a non-malicious organization would need to defend against many.

keti · 3y · 00

Even if there's just one such person, I think that one person still has a significant chance of succeeding.

However, more importantly, I don't see how we could rule out that there are people who want to cause widespread destruction and are willing to sacrifice things for it, even if they wouldn't be interested in being a serial killer or mass shooter.

I mean, I don't see how we have any data. For almost all of history, there has been little opportunity for a single individual to cause world-level destruction. Maybe around the time of the Cold War someone could have managed to trick the USSR and the USA into starting a nuclear war. Other than that, I can't think of many other opportunities.

There are eight billion people in the world, and potentially all it would take is one, with sufficient motivation, to bring about a really bad outcome. Given that the claim needs to hold across all eight billion, I think it would be hard to show that there is no such person.

So I'm still quite concerned about malicious non-state actors.

And I think there are some reasonably doable, reasonably low-cost things someone could do about this. Just having a very thorough security clearance process before allowing someone to work on AGI-related stuff could make a big difference. Increasing the physical security of AGI organizations could also be helpful. But currently, I don't think people at Google and other AI organizations are worrying about this. We could at least tell them about it.

keti · 3y · 10

This is a good point. I didn't know this. I really should have researched things more.

keti · 3y · 10

I'm not worried about the sort of person who would become a terrorist. Usually, they just have a goal like political change and are willing to kill for it. Instead, I'm worried about the sort of person who becomes a mass shooter or serial killer.

I'm worried about people who value hurting others for its own sake. If a terrorist group took control of AGI, things might not be too bad. I think most terrorists don't want to damage the world; they just want their political change. So they could just use their AGI to enact whatever political or other changes they want, and after that not be evil. But if someone who terminally values harming others, like a mass shooter, took over the world, things would probably be much worse.

Could you clarify what you're thinking of when saying "so any prospective murderer who was 'malicious [and] willing to incur large personal costs to cause large amounts of suffering' would already have far better options than a mass shooting"? What other, better options would they have that they aren't already pursuing?