"AI safety" and "AI alignment" now often mean "making AI not say racist/illegal/etc things". The term "AI notkilleveryoneism" is now sometimes being used as a replacement that refers specifically to ASI-related risks.

I think "notkilleveryoneism" is the wrong term to use, for reasons including:

  • The basic structure - simple words mashed together with a suffix - is associated with fantasy, media for kids, and signalling a lack of effort.
  • The "ism" suffix evokes associations with religions. You don't want to say "we are the tribe of people whose thing is opposition to X" - you just want to say "we're opposed to X".
  • It's awkward to say, not catchy.
  • There are broader concerns than AI literally killing everyone, and you probably want as big an alliance as possible.

What, then, might be better? Perhaps an analogy to biological or nuclear safety would be good, e.g.:

  • Stop AI Gain-of-Function Research
  • Stop AI Proliferation

Another angle is the "enjoy the AI summer" framing. The ability of GPT-4 to fill out forms, the art generated by the various LoRA models, even just the ability to do good sentence embeddings and fast vector search - all these things will have big impacts on businesses, but it will take some time for people to figure out how to use them most effectively. You could say things like:

  • Enjoy the AI summer.
  • Taste the fruit before planting more.

Yet another angle is the "figure out how to distribute the gains equitably before moving on" and "AI power dynamics" framing. For example:

  • Stop (Chinese-style) AI surveillance.
  • Don't let them make you obsolete. (like horses)

There are a number of possibilities. I'm not proposing a single thing in particular; I'm just saying that people should use phrases other than "AI notkilleveryoneism".

I thought of the following terms for technical discussion:

  • U-al = user-alignment
  • O-al = owner-alignment
  • S-al = society-alignment
  • H-al = humanity-alignment
  • I-al = intelligence-alignment

But for public-facing statements, such abbreviations are obviously unsuitable.

Comments
Raemon:

So, on one hand, yes, it totally sounds dumb. But this seems to be missing the point of calling it "AI notkilleveryoneism", which is to draw attention to the fact that the last few times people tried naming this thing, people shifted to using it in a more generic way that didn't engage with the primary cruxes of the original namers*.

One of the key proposed mechanisms here is that the word is both specific enough and low-status-sounding enough that you can't plausibly redefine it in a vague, applause-lighty way that ends up getting Safetywashed.

And, sure, there should also be a name that is, like, prestigious and reasonable-sounding and rolls off the tongue. But most of the obvious words are kind of long and a mouthful and are likely to have syllables dropped for convenience (e.g. AI Existential Safety is harder to say than AI Safety). One of the points is to have a name that actively leans into the outrageousness of its length.

Another part of the point here is to deliberately puncture people's business-as-usual attitude, via outrageousness/humor.

And, also sure, you can disagree with all of this and think it's not a useful goal, or think that, as a joke-name, things went overboard and it's getting used more often than it should. But if you're actually trying to get the people using the word to stop, you need to engage more with the actual motivation.

*FWIW I do think "AI Safety" and "AI Alignment" aren't sufficiently specific names, and I think you really can't complain when those names end up getting used to mean things other than existential safety, and this was predictable in advance.

the last few times people tried naming this thing, people shifted to using it in a more generic way that didn't engage with the primary cruxes of the original namers

Yes, but, that's because:

"AI Safety" and "AI Alignment" aren't sufficiently specific names, and I think you really can't complain when those names end up getting used to mean things other than existential safety

(Which I agree with you about.)


the word is both specific enough and sounds low-status-enough that you can't possibly try to redefine it in a vague applause-lighty way that people will end up Safetywashing

OK, but now it's being used on (eg) Twitter as an applause light for people who already agree with Eliezer, and the net effect of that is negative. Either it's used internally in places like LessWrong, where it's unnecessary, or it's used in public discourse, where it sounds dumb, which makes it counterproductive.

And, sure, there should also be a name that is also, like, prestigious and reasonable sounding and rolls off the tongue. But most of the obvious words are kind a long and a mouthful and are likely to have syllables dropped for convenience

Yes, that's what I'm trying to make a start on.

as a joke-name, things went overboard and it's getting used more often than it should

Yes, that is what I think. Here's a meme account on Twitter. Here's Zvi using it. These are interfaces to people who largely think it sounds dumb.

I agree it's getting used publicly. And, to be clear, I don't have that strong an opinion on this; I'm not defending the phrase super hard. But, from my perspective, you haven't actually justified that a bad thing is definitely happening.

Some people on the internet think a thing sounds dumb, sure. The thing is that pushing an Overton window basically always has people laughing at you and thinking you're dumb, regardless. People say AI concerns are a weird silly outlandish doomer cult no matter how everything is phrased.

The goal here (on the part of the people saying the phrase) is not "build the biggest tent", nor is it "minimize sounding dumb". It's "speak plainly and actually convey a particular really bad thing that is likely to happen. Ensure that enough of / the right people notice that an actual really bad thing is likely to happen, and that they don't gloss over or minimize it."

Your post presumes "we're trying to build a big tent movement, and it should include things other than AI killing everyone." But in fact we spent several years where most of the public messaging was big-tent-ish, and it seemed like this did not actually succeed strategically.

Put another way – I agree that maybe it's correct to not sound dumb here. But I absolutely think you need to be willing to sound dumb, if that turns out to be the correct strategy. When I see posts like this I think they are often driven by a generator that is not actually about optimizing for winning at a strategic goal, but about avoiding social stigma (which is a very scary thing).

(I think there are counter-problems within the LW sphere of being too willing to be contrarian and edgy. But you currently haven't done any work to justify that the problem here is being too edgy rather than not edgy enough.)

(Meanwhile, I super endorse trying to come up with non-dumb-sounding things that actually achieve the various goals. But note that the people saying "AI notkilleveryoneism" are specifically NOT optimizing for "build the biggest tent".)

People say AI concerns are a weird silly outlandish doomer cult no matter how everything is phrased.

No, you're dead wrong here. Polls show widespread popular concern about AI developments. You should not give up on "not seeming like a weird silly outlandish doomer cult". If you want to actually get things done, you cannot give up on that.

Hmm. So I do agree that the recent polls - which showed general worry about AI and support for the Pause open letter - are an important strategic consideration here. I do think it's fairly reasonable to argue "look man, you actually have the public support, please don't fuck it up."

So, thank you for bringing that up.

It still feels like it's not actually a counterargument to the particular point I was making – I do think there are (many) people who respond to taking AI extinction risk seriously with ridicule, no matter how carefully it's phrased. So if you're just running the check of "did anyone respond negatively to this?" the check will basically always return "yes", and it takes a more careful look at the situation to figure out what kind of communications strategy actually works.

I think we're on the same page here. Sorry if I was overly aggressive there, I just have strong opinions on that particular subtopic.

gilch:

Poor Faulkner. Does he really think big emotions come from big words? He thinks I don't know the ten-dollar words. I know them all right. But there are older and simpler and better words, and those are the ones I use.
--Ernest Hemingway

"Notkilleveryonism" is apt. Sounding "dumb" might actually help it catch on. Surprising, outrageous, and controversial things tend to spread more on social media. Weirder things are more memorable. It's why many commercials are weird on purpose.

There are broader concerns than AI literally killing everyone, and you probably want as big an alliance as possible.

I think this is specifically what the AI notkilleveryoneism term is trying to distinguish, though.

There are other concerns with other terms, but people wanted a term specifically for the concern that we're all going to be disassembled into tiny molecular squiggles, or other variations of the "unconscious meh" outcome.

Maybe a better term for this would be "squiggle safety"? "avert the squiggle outcome"? "anti squiggleism"? "stop the squiggle"?!

I think "squiggle" is the wrong word here since the whole point is to just be clear-at-a-glance what you're talking about.

It also implies a very particular worldview, even more narrow than AI notkilleveryoneism, which is kind of the opposite of what the OP was asking for. But I think it's even more un-co-optable and unambiguous, to people familiar with the jargon.

And I couldn't resist sharing "stop the squiggle" :)

A slightly more serious idea for capturing the above worldview in a semi-comprehensible-at-a-glance phrase: "molecular disassembly safety"? "molecular AI safety"? "Stop AI atomics"? ¯\_(ツ)_/¯

I’ve personally been saying “AI Doom” as the topic identifier since it’s clear and catchy and won’t be confused with smaller issues.

How 'bout "non-omnicidality"?

That sounds like a good option for the fancy version to use in academic papers. Not so useful for wide audience public communication though.

Antiomnicidality? Antiomnicidism?

How about existential alignment/existential safety, or x-alignment/x-safety?

'x' was already kinda taken by XAI meaning 'explainable AI'. https://en.wikipedia.org/wiki/Explainable_artificial_intelligence 

I think AI x-safety is probably distinguishable enough from XAI that there would not be much confusion. It also does not seem very susceptible to safetywashing, is easy to say, and has the counterpart of AI x-risk, which is already in common use.

I used "AI notkilleveryoneism" a few times in public comments. Since I was never attached to the term, I am willing to switch to to "AI extinction risk".

I'm happy that the term doesn't seem to be catching on in the circles I frequent - I've seen it mentioned here a few times, mostly in a negative or questioning stance (like this post), but I haven't seen it used in a way that's direct and non-aware-of-oddity.

I suspect that's because it's intentionally chosen to be awkward and childish, so it's unlikely to get co-opted or misinterpreted.  

A few other possible terms to add to the brainstorm:

  • AI massive catastrophic risks
  • AI global catastrophic risks
  • AI catastrophic misalignment risks
  • AI catastrophic accident risks (paired with "AI catastrophic misuse risks")
  • AI weapons of mass destruction (WMDs) - Pro: a well-known term, Con: strongly connotes misuse so may be useful for that category but probably confusing to try and use for misalignment risks

Also, I can't resist a little meme reference here... https://knowyourmeme.com/memes/thats-the-joke 

I sort of think of this movement as "anti-extinctionism".

I like the wording "AI global safety" which means that AI will not cause global catastrophes. 

"notkilleveryoneism" may be technically true in the world where only 5 people survive.

may be technically true in the world where only 5 people survive

Like Harlan Ellison's short story, "I Have No Mouth, And I Must Scream".

Exactly 

Another issue with "AI notkilleveryoneism" is that it is most easily accomplished by never building AI.

Maximizing strictly against that utility function means that we are guaranteed to never build AI, because however low the risk from building AI may be, the AI risk involved in not building AI is lower still. (And at least some people around here have said things to the effect of "yeah, our world just shouldn't ever build AI, we cannot handle it.")

If you think that a world where AI is never built sucks compared to one where it is built and makes the world better, and that the latter is possible, it would make sense to object to the terminology for that reason -- it would make sense to be reluctant to join a movement beneath that banner.

utopia notkilleveryoneism

there I fixed it