People like Eliezer are annoyed when people suggest their rhetoric hints at violence against AI labs and researchers. But even if Eliezer & co don't advocate violence, it does seem like violence is the logical conclusion of their worldview - so why is it a taboo?

(By violence I don't necessarily mean physical violence - I mean more general efforts to disrupt AI progress, including e.g. with coordinated cyber attacks.)


15 Answers

jimrandomh

May 26, 2023


I think there is some value in exploring the philosophical foundations of ethics, and LessWrong culture is often up for that sort of thing. But, it's worth saying explicitly: the taboo against violence is correct, and has strong arguments for it from a wide variety of angles. People who think their case is an exception are nearly always wrong, and nearly always make things worse.

(This does not include things that could be construed as violence but only if you stretch the definition, like supporting regulation through normal legal channels, or aggressive criticism, or lawsuits. I think those things are not taboo and would support some of them.)

ArisC

Here's my objection to this: unless ethics are founded on belief in a deity, they must stem from humanity. So an action that can wipe out humanity makes any discussion of ethics moot; the point is, if you don't sanction violence to prevent human extinction, when do you ever sanction it? (And I don't think it's stretching the definition to suggest that law requires violence.)

(This does not include things that could be construed as violence but only if you stretch the definition, like supporting regulation through normal legal channels, or aggressive criticism, or lawsuits. I think those things are not taboo and would support some of them.)

Can you clarify this? I think most people would agree that lawsuits do count as explicitly sanctioned violence beyond some low threshold, especially in a loser-pays jurisdiction. That's the intended purpose of the idea: to let the victor rely on the state's monopoly on violence instead of their private means.

Timothy Underwood
You don't become generally viewed by society as a defector when you file a lawsuit. Private violence defines you in that way, and thus marks you as an enemy of ethical cooperators, which is unlikely to be a good long term strategy.
M. Y. Zuo
Someone, or some group, whose moral/ethical/social positions are seen as objectionable by a substantial fraction of society can nonetheless win lawsuits and rely on the state's violent means to enforce the awards. E.g. a major oil company winning a lawsuit against activists, forcing some degree of environmental degradation; or, vice versa, nudist lifestyle and porn activists winning lawsuits against widely supported restrictions on virtual child porn, forcing a huge expansion in its effective grey area. The losing side being punished may even be the more efficient, effective, engaged, etc. 'ethical cooperators' in relative comparison, and yet they nonetheless receive the violence without any noticeable change in public sentiment regarding the judiciary.

lmaowell

May 26, 2023


It's not that violence against AI labs is a taboo... it's that violence is a taboo.

This is a commonly cited failure of deontology and in particular classical liberalism. Whether physical violence is morally justified, whether it's justified by local law, whether it's justified by international rules of war, whether it's effective, and whether it's a mechanistically understandable response from victims of harm from a behaviorist perspective, are all different questions. I typically answer that most violence is ineffective, and yet that the motivations can be mechanistically understood as arising from locally reasonable mechanisms of thought; mo...

ArisC

So, you would have advocated against war with Nazi Germany?

lmaowell
I'm sorry if my point wasn't made clearly. Things are taboo because of social customs and contexts; my point wasn't meant to be normative — just to point out that the taboo isn't against violence against AI labs, it's against violence more broadly.
ArisC
Yes but what I'm saying is that this isn't true - few people are absolute pacifists. So violence in general isn't taboo - I doubt most people object to things like laws (which ultimately rely on the threat of violence). So why is it that violence in this specific context is taboo?
Waldvogel
Because it's illegal.
ArisC
This is a pedantic comment. So the idea is you should obey the law even when the law is unjust?
Waldvogel
You asked why this sort of violence is taboo, not whether we should break that taboo or not. I'm merely answering your question ("Why is violence in this specific context taboo?"). The answer is because it's illegal. Everyone understands, either implicitly or explicitly, that the state has a monopoly on violence. Therefore all extralegal violence is taboo. This is a separate issue from whether that violence is moral, just, necessary, etc.
M. Y. Zuo
Not true.  For example, many organizations in Mexico do not recognize that the Mexican state has a monopoly on violence. And they actively bring violence upon those who try to claim it on behalf of the state, sometimes successfully.

Jayson_Virissimo

May 26, 2023


Consider the following rhetorical question:

Ethical vegans are annoyed when people suggest their rhetoric hints at violence against factory farms and farmers. But even if ethical vegans don't advocate violence, it does seem like violence is the logical conclusion of their worldview - so why is it a taboo?

Do we expect the answer to this to be any different for vegans than for AI-risk worriers?

ArisC

Er, yes. AI risk worriers think AI will cause human extinction. Unless they believe in God, surely all morality stems from humanity, so the extinction of the species must be the ultimate harm - and preventing it surely justifies violence (if it doesn't, then what does?)

simon
If you hypothetically have a situation where it's 100% clear that the human race will go extinct unless a violent act is committed, and it seems likely that the violent act would prevent human extinction, then, in that hypothetical case, that would be a strong consideration in favour of committing the violent act. In reality though, this clarity is extremely unlikely, and unilateral actions are likely to have negative side effects. Moreover, even if you think you have such clarity, it's likely that you are mistaken, and the negative side effects still apply no matter how well justified you personally thought your actions were, if others don't agree.
ArisC
OK, so then AI doomers admit it's likely they're mistaken? (Re side effects, no matter how negative they are, they're better than the alternative; and it doesn't even have to be likely that violence would work: if doomers really believe P(doom) is 1, then any action with a non-zero probability of success is worth pursuing.)
Raemon

You're assuming "the violence might or might not stop extinction, but then there will be some side-effects (that are unrelated to extinction)". But, my concrete belief is that most acts of violence you could try to commit would probably make extinction more likely, not less, because a) they wouldn't work, and b) they destroy the trust and coordination mechanisms necessary for the world to actually deal with the problem.

To spell out a concrete example: someone tries bombing an AI lab. Maybe they succeed, maybe they don't. Either way, they didn't actually stop the development of AI because other labs will still continue the work. But now, when people are considering who to listen to about AI safety, the "AI risk is high" people get lumped in with crazy terrorists and sidelined.

ArisC
But when you say extinction will be more likely, you must believe that the probability of extinction is not 1.
the gears to ascension
Well... Yeah? Would any of us care to build knowledge that improves our odds if our odds were immovably terrible?
ArisC
I don't know! I've certainly seen people say P(doom) is 1, or extremely close. And anyway, bombing an AI lab wouldn't stop progress, but would slow it down - and if you think there is a chance alignment will be solved, the more time you buy the better.
Timothy Underwood
If you think P(doom) is 1, you probably don't believe that terrorist bombing of anything will do enough damage to be useful. That is probably one of EY's cruxes on violence.
simon
I am not an extreme doomer, but part of that is that I expect that people will face things more realistically over time - something that violence, introducing partisanship and division, would set back considerably. But even for an actual doomer, the "make things better through violence" option is not an especially real option. You may have a fantasy of choosing between these options:
* doom
* heroically struggle against the doom through glorious violence
But you are actually choosing between:
* a dynamic that's likely by default to lead to doom at some indefinite time in the future by some pathway we can't predict the details of until it's too late
* making the situation even messier through violence, stirring up negative attitudes towards your cause, especially among AI researchers but also among the public, making it harder to achieve any collective solution later, sealing the fate of humanity even more thoroughly
Let me put it this way. To the extent that you have p(doom) = 1 - epsilon, where is epsilon coming from? If it's coming from "terrorist attacks successfully stop capability research" then I guess violence might make sense from that perspective, but I would question your sanity. If relatively more of that epsilon is coming from things like "international agreements to stop AI capabilities" or "AI companies start taking x-risk more seriously", which I would think would be more realistic, then don't ruin the chances of that through violence.
ArisC
Except that violence doesn't have to stop the AI labs, it just has to slow them down: if you think that international agreements yada yada have a chance of success, and given this takes time, then things like cyber attacks that disrupt AI research can help, no?
simon
I think you are overestimating the efficacy and underestimating the side effects of such things. How much do you expect a cyber attack to slow things down? Maybe a week if it's very successful? Meanwhile it still stirs up opposition and division, and puts diplomatic efforts back years. As the gears to ascension notes, non-injurious acts of aggression share many game-theoretic properties with physical violence. I would express the key issue here as legitimacy; if you don't have legitimacy, acting unilaterally puts you in conflict with the rest of humanity and doesn't get you legitimacy. But once you do have legitimacy, you don't need to act unilaterally: you can get a ritual done that causes words to be written on a piece of paper, whereupon people with badges and guns will come to shut down labs that do things forbidden by those words. Cool, huh? But if someone just goes ahead and takes illegitimate unilateral action, or appears to be too willing to do so, that puts them into a conflict position where they and people associated with them won't get to do the legitimate thing.
the gears to ascension
Everyone has been replying as though you mean physical violence; non-injurious acts of aggression don't unambiguously qualify as violence, but they share many of the same game-theoretic properties. If classical liberal coordination can be achieved even temporarily, it's likely to be much more effective at preventing doom.
the gears to ascension
Even in a crowd of AI doomers, no one person speaks for AI doomers. But plenty think it likely they're mistaken somehow. I personally just think the big labs aren't disproportionately likely to be the cause of an extinction-strength AI, so violence is overdeterminedly off the table as an effective strategy, before even considering whether it's justified, legal, or understandable. The only way we solve this is by constructing the better world.
ArisC
If it's true AI labs aren't likely to be the cause of extinction, why is everyone upset at the arms race they've begun? You can't have it both ways: either the progress these labs are making is scary - in which case anything that disrupts them (and hence slows them down even if it doesn't stop them) is good - or they're on the wrong track, in which case we're all fine.
the gears to ascension
I refer back to the first sentence of the message you're replying to. I'm not having it both ways; you're confusing different people's opinions. My view is that the only thing remarkable about the labs is that they get there slightly sooner by having bigger computers; even killing everyone at every big lab wouldn't undo how much compute there is in the world, so it at most buys a year, at an intense cost to rule morality and to knowledge of how to stop disaster. If you disagree with an argument someone else made, lay it out, please. I probably simply never agreed with the other person's doom model anyway.

David Hornbein

May 27, 2023


You can imagine an argument that goes "Violence against AI labs is justified in spite of the direct harm it does, because it would prevent progress towards AGI." I have only ever heard people say that someone else's views imply this argument, and never heard anyone actually advance it sincerely; nevertheless, the hypothetical argument is at least coherent.

Yudkowsky's position is that the argument above is incorrect because he denies the premise that using violence in this way would actually prevent progress towards AGI.  See e.g. here and the following dialogue. (I assume he also believes in the normal reasons why clever one-time exceptions to the taboo against violence are unpersuasive.)

ArisC

Well, it's clearly not true that violence would not prevent progress. Either you believe AI labs are making progress towards AGI, in which case every day they're not working on it (because their servers have been shut down or, more horrifically, because some of their researchers have been incapacitated) is a day that progress is not being made; or you think they're not making progress anyway, in which case why are you worried?

Steven Byrnes
I strongly disagree with "clearly not true" because there are indirect effects too. It is often the case that indirect effects of violence are much more impactful than direct effects, e.g. compare 9/11 with the resulting wars in Afghanistan & Iraq.

shminux

May 26, 2023


I addressed a general question like that in https://www.lesswrong.com/posts/p2Qq4WWQnEokgjimy/respect-chesterton-schelling-fences 

Basically, guardrails exist for a reason, and you are generally not smart enough to predict the consequences of removing them. This applies to most suggestions of the form "why don't we just <do some violent thing> to make the world better". There are narrow exceptions where breaking a guardrail has actual rather than imaginary benefits, but finding them requires a lot of careful analysis and modeling.

ArisC

Isn't the prevention of the human race one of those exceptions?

shminux
You don't know enough to accurately decide whether there is a high risk of extinction. You don't know enough to accurately decide whether a specific measure you advocate would increase or decrease it. Use epistemic modesty to guide your actions. Being sure of something you cannot derive from first principles, but only from parroting select other people's arguments, is a good sign that you are not qualified. One classic example is the environmentalist movement accelerating anthropogenic global climate change by being anti-nuclear energy. If you think you are smarter now about AI dangers than they were back then about climate, it is a red flag.
ArisC
But  AI doomers do think there is a high risk of extinction. I am not saying a call to violence is right: I am saying that not discussing it seems inconsistent with their worldview.
shminux
Eliezer discussed it multiple times, quite recently on Twitter and on various podcasts. Other people did, too. 
the gears to ascension
I think you accidentally humanity
Waldvogel
If you have perfect foresight and you know that action X is the only thing that will prevent the human race from going extinct, then maybe action X is justified. But none of those conditions apply.
ArisC
That's not true  - we don't make decisions based on perfect knowledge. If you believe the probability of doom is 1, or even not 1 but incredibly high, then any actions that prevent it or slow it down are worth pursuing - it's a matter of expected value.
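A minimal sketch of that expected-value claim, using illustrative symbols rather than anyone's actual estimates: write p for the probability of doom without the intervention, q for the probability of doom given the intervention, L for the disvalue assigned to extinction, and c for the intervention's other costs.

$\mathbb{E}[\text{loss} \mid \text{act}] = qL + c, \qquad \mathbb{E}[\text{loss} \mid \text{don't act}] = pL,$
$\text{so acting comes out ahead exactly when } (p - q)\,L > c.$

If p is taken to be 1 (or very close) and L is treated as effectively unbounded, any q < p makes the left side dominate, which is the argument being made here; the replies above (e.g. Raemon's) dispute the sign of p − q, holding that violence makes doom more likely rather than less.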

Dagon

May 26, 2023


Umm, because individual non-government-sanctioned violence is horrific, and generally results in severe punishment which prevents longer-term action. Oh, wait, that's why it's not used, not why it's taboo to even discuss.

It's taboo for discussion because serious planning for violence is a direct crime (conspiracy) itself.  Don't do that.  Open advocacy of violence also signals that, by your rules, it's OK for others to target you for violence if they disagree strongly enough.  I don't recommend that, either (especially if you think your opponents are better at violence than you are).

ArisC

Is all non-government-sanctioned violence horrific? Would you say that objectors and resistance fighters against Nazi regimes were horrific?

the gears to ascension
Do you consider this comparison a good specific exemplar for the AI case, such that you'd suggest they should have the same answer, or do you bring it up simply to check calibration? I do agree that it's a valid calibration to check, but I'm curious whether you're claiming capabilities research is horrific to the same order of magnitude.
ArisC
I am bringing it up for calibration. As to whether it's the same magnitude of horrific: in some ways, it's higher magnitude, no? Even Nazis weren't going to cause human extinction - of course, the difference is that the Nazis were intentionally doing horrific things, whereas AI researchers, if they cause doom, will do it by accident; but is that a good excuse? You wouldn't easily forgive a drunk driver who runs over a child...
the gears to ascension
No, but intentional malice is much harder to dissuade nonviolently.

Christopher King

May 26, 2023


Because it's anti-social (in most cases; things like law enforcement are usually fine), and the only good timelines (by any metric) are pro-social.

Consider if it became like the Irish Troubles. Do you think alignment gets solved in this environment? No. What you get is people creating AI war machines. And they don't bother with alignment because they are trying to get an advantage over the enemy, not benefit everyone. Everyone is incentivised to push capabilities as far as they can, except past the singularity threshold. And there's not even a disincentive for going past it, you're just neutral on it. So the dangerous bit isn't even that the AIs are war machines, it's that they are unaligned.

It's a general principle that anti-social acts tend to harm utility overall due to second-order effects that wash out the short-sighted first-order effects. Alignment is an explicitly pro-social endeavor!

Roko

Nov 24, 2023


I think violence helps unaligned AI more than it helps aligned AI.

If the research all goes underground it will slow it down but it will also make it basically guaranteed that there's a competitive, uncoordinated transition to superintelligence.

May 26, 2023


When Eliezer proposes "turn all the GPUs to Rubik's cubes", this pivotal act, I think, IS outright violence. Nanotechnology doesn't work that way (something something local forces dominate). What DOES work is having nearly unlimited drones because they were manufactured by robots that made themselves exponentially, making ASI-equipped parties have more industrial resources than the entire world's capacity right now.

Whoever has "nearly unlimited drones" is a State, and is committing State Sponsored Violence which is OK. (By the international law of "whatcha gonna do about it")

So the winners of an AI race, with their "aligned" allied superintelligence, will have manufactured enough automated weapons to destroy everyone else's AI labs and to place the surviving human population under arrest.

That's how an AI war actually ends. If this is how it goes (and remember, this is a future where humans "won"), this is what happens.

The amount of violence before the outcome depends on the relative resources of the warring sides.

ASI singleton case: nobody has to be killed; billions of drones using advanced technology attack everywhere on the planet at once. Decision makers are bloodlessly placed under arrest, guards are tranquilized, and the drones have perfect aim, so guns are shot out of hands and engines on military machines are hit with small shaped charges. The only violence where humans die is in the assaults on nuclear weapons facilities, since math.

Some nukes may be fired on the territory of the nation hosting the ASI, this kills a few million tops, "depending on the breaks".

Two warring parties case, where one party's ASI or industrial resources are significantly weaker: nuclear war and a prolonged, endless series of battles between drones. Millions or billions of humans killed as collateral damage, battlefields littered with nuclear blast craters and destroyed hardware. A "minor inconvenience" for the winning side: since they have exponentially built robotics, the cleanup is rapid.

Free-for-all case, where everyone gets ASI and it's not actually all that strong in utility terms: outcomes range from a world of international treaties similar to now and a stable equilibrium, to a world war that consumes the earth in which most humans don't survive. Again, it's a minor inconvenience for the winners. No digital data is lost, and exponentially replicated robotics mean the only long-term cost is a few years to clean up.

I'd suggest reading DeepMind's recent inter-org paper on model evaluation for extreme risks. I agree that what you describe as the success case is necessary for success, but without sufficient alignment of each person's personal ASI to actually guarantee it will in fact defend against malicious and aggressive misuse of AI by others, you're just describing filling the world with loose gunpowder.

Rika

May 26, 2023


If someone thinks that violence against AI labs is bad, then they will make it a taboo because they think it is bad, and they don't want violent ideas to spread. 
There are a lot of interesting discussions to be had on why one believes this category of violence to be bad, and you can argue against these perspectives in a fairly neutral-sounding, non-stressful way, quite easily, if you know how to phrase yourself well. 
A lot of people (although not all) are fairly open to this.

 

If someone thinks that violence against AI labs is good, then they probably really wouldn't want you talking about it on a publicly accessible, fairly well-known website. It's a very bad strategy from most pro-violence perspectives.

 

I'm going to quite strongly suggest, regardless of anyone's perspectives on this topic, that you probably shouldn't discuss it here - there are very few angles from which this could be imagined to be a good thing for any rationalism-associated person/movement. Or at least that you put a lot of thought into how you talk about it. Optics are a real and valuable thing, as annoying as that is.
Even certain styles of discussing anti-violence can come across as optically weird if you phrase yourself in certain ways.

Chinese Room

May 26, 2023


Perhaps they prefer not to be held responsible when it happens.

11 comments

I try to adhere to the principle that "there are no stupid questions", but this question, if not necessarily stupid, is definitely annoying. 

Do you ask the same question of opponents of climate change? Opponents of open borders? Opponents of abortion? Opponents of gun violence? 

The world is full of things which are terrible, or which someone believes to be terrible. If someone, whether through action or inaction, is enabling a process that you think might kill you or cripple you or otherwise harm you, or people you care about - et cetera - then yes, violence naturally comes to mind. 

But there are obvious reasons to be cautious about it, and to be cautious about talking about it. If you do it, you may end up dead or in jail. Despite your emotions, your reason may tell you that a single act of violence won't actually make any difference. You may be afraid of unleashing something that goes in a completely different direction - violence, once unleashed, has a way of doing that. 

On top of that, if you're a civilized person, you don't ever want to resort to violence in the first place. 

... OK, with that off my chest: if I do try to empathize with the spirit in which this question might have been asked, I imagine it as a young man's question, someone for whom the world is still their oyster, and someone who, while not an aggressive thug, is governed more by their private ethical code and their private sense of what is right and wrong, than by fear of the law or fear of social judgment or fear of unintended consequences. Willing to consider anything, and trusting their own discernment. 

And then they stumble into this interesting milieu where people are really worked up about something. And the questioner, while remaining agnostic about the topic, is willing to think about it. But they notice that in all the discussion about this supposedly world-threatening matter, no one is talking about just killing the people who are the root of the problem, or blowing up their data centers, or whatever. And so the questioner says, hey guys, if this thing is really such a great danger, why aren't you brainstorming how to carry out these kind of direct actions too? 

I've already provided a few reasons why one might not go down that path. But the other side of the coin is, if there are people on that path, they won't be talking about it in public. We'll just wake up one day, and the "unthinkable" will have happened, the same way that we all woke up one day and Russia had invaded Ukraine, or the ex PM of Japan had been assassinated. 

ArisC

Do you ask the same question of opponents of climate change? Opponents of open borders? Opponents of abortion? Opponents of gun violence? 

They're not the same. None of these are extinction events; if preventing the extinction of the human race doesn't legitimise violence, what does? (And if your answer is "nothing", does that mean you don't believe in the enforcement of laws?)

Basically, I can't see a coherent argument against violence that's not predicated either on a God, or on humanity's quest for 'truth' or ideal ethics; and the latter is obviously cut short if humans go extinct, so it wouldn't ban violence to prevent this outcome.
 

niplav

Some people definitely say they believe climate change will kill all humans.

OK, well, if people want to discuss sabotage and other illegal or violent methods of slowing the advance of AI, they now know to contact you. 

As does law enforcement.

You write:

it does seem like violence is the logical conclusion of their worldview

It's not expected to be effective; as has been repeatedly pointed out, it's not a valid conclusion. Only state-backed law/treaty enforcement has the staying power to coerce history. The question of why it's taboo is separate, but before that, there is an issue with the premise.

ArisC

The assassination of Archduke Ferdinand certainly coerced history, and it wasn't state-backed. So did that of Julius Caesar, as would have Hitler's, had it been accomplished.

This site and community generally operate on classical liberal principles, in particular a heavy focus on norms about individual acts which are forbidden. Whether that's good is up for debate; folks here are very consequentialist within some constraints. There are also consequentialist arguments for nonviolence I've heard; in particular, check out Critch's recent post.

https://www.lesswrong.com/posts/gZkYvA6suQJthvj4E/my-may-2023-priorities-for-ai-x-safety-more-empathy-more

Because surviving worlds don't look like someone cyberattacking AI labs until AI alignment has been solved; they look like someone solving AI alignment in time, before the world has been destroyed.

ArisC

Successful attacks would buy more time, though.

Related: I've talked with multiple rats who, after some convincing, basically admitted, "Yeah, assuming it would actually work, I suppose I actually would push the nuclear button, but I would never admit it, because saying so would have various negative effects."