[Epistemic Status: I'm confident that the individual facts I lay out support the main claim, but I'm not fully confident it's enough evidence to make a true or useful framework for understanding the world.]

I'm going to give seven pieces of evidence to support this claim[1]:

AI Doomerism helps accelerate AI capabilities, and AI capabilities in turn proliferate the AI Doomerism meme. 

If these dynamics exist, they'd be not unlike the Toxoplasma of Rage. Here's my evidence:

  1. Sam Altman claims in a tweet that Eliezer "has IMO done more to accelerate AGI than anyone else".
  2. Technical talent who hear about AI doom might decide capabilities are technically sweet, or a race, or inevitable, and decide to work on it for those reasons (doomer -> capabilities transmission).
  3. Funders and executives who hear about AI doom might decide capabilities are a huge opportunity, or disruptive, or inevitable, and decide to fund it for those reasons (doomer -> capabilities transmission).
  4. Capabilities progress amplifies the memetic relevance of doomerism (capabilities -> doomer transmission).
  5. AI Doomerism says we should closely follow capabilities updates, discuss them, etc.
  6. Capabilities and doomerism gain and lose social status together - Eliezer Yudkowsky has been writing about doom for a long time, but got a Time article and TED talk only after significant capabilities advances.
  7. Memes generally benefit from conflict, and doomerism and capabilities can serve as adversaries for this purpose.

I've been trying to talk about "AI doomerism" here as a separate meme from "AI safety", respectively something like "p(doom) is very large" and "we need to invest heavily into AI safety work", though these are obviously related and often co-occur. One could no doubt make a similar case for AI safety and capabilities supporting each other, but I think the evidence I listed above applies mostly to AI doom claims (if one uses Eliezer as synecdoche for AI doomerism, which I think is reasonable).

I hope with this post I'm highlighting something that is a combination of true and useful. Please keep in mind that the truth values of "AI doom is in a toxoplasma relationship with AI capabilities" and "AI doom is right" are independent.

  1. ^

    This post was inspired by one striking line in Jan_Kulveit's helpful Talking publicly about AI risk:

    - the AGI doom memeplex has, to some extent, a symbiotic relationship with the race toward AGI memeplex

12 comments

My immediate thought is that the cat is already out of the bag, that whatever risk there was of AI safety people accelerating capabilities is nowadays far outweighed by capabilities hype and, in general, much larger incentives, and that the most we can do is to continue to build awareness of AI risk. Something about this line of reasoning strikes me as uncritical, though.

I think something pretty close to this is true, and I'm worried about it. (I don't know that I buy Sam's implied background story in that tweet, but I do think OpenAI is at least somewhat downstream of Superintelligence being published.) I also have a recollection of Shane Legg from DeepMind being influenced by early LessWrong, although I don't remember the details.

I'm a bit confused about how to think about the current-day situation (i.e. I agree with DirectedEvolution there is just a lot of profit motive now), but I think it's been at least a relevant gear historically.

My model is that even if SamA is right about EY's important role in catalyzing OpenAI, it's not clear to me that OpenAI's work was pivotal for capabilities (relative to whatever would have happened if OpenAI hadn't been created), or that doomerism will be an important force for moving people into AGI capabilities research going forward. Now that AGI is suddenly well within the Overton window, people will just pursue capabilities because of the profit, power and status motives.

On the one hand I see the argument. On the other hand, do you think the outcomes would be better in the worlds where EY never wrote about AGI risk, Bostrom never wrote Superintelligence, etc.? That instead of writing publicly he could have become the Hari Seldon of the story and ensured future history worked out well?

I think that whatever the arguments made here, my initial rejection does boil down to something like, "You don't get to blame Einstein-1905 and the publishing of special relativity for Hiroshima, Nagasaki, and the cold war.  You don't even get to blame the letter to FDR for that without carefully considering the wide scope of alternative timelines, like how maybe without centralization and military involvement you might get Szilard and Teller not stopping Fermi from publishing on graphite reactors."

I mean, I guess it could be... but on the other hand, if talking about how thing X may doom us all leads people to think "oh, X sounds sweet! Gonna work to build it faster!", the fuck are we supposed to do, exactly? That's Duck Season, Rabbit Season levels of reverse psychology. It's not like not talking about thing X dooming us all makes people not build thing X; they still will, maybe just a bit slower, and since no one talked about the risk of doom, it'll doom us with 100% certainty. Like, if this mechanic is indeed how it works, and at no point does the freaking-out factor surpass the enhancement one enough to actually produce some decent reaction, then we're constitutionally unable to escape doom or even preserve a shred of dignity in the face of it.

if talking about how thing X may doom us all leads people to think "oh, X sounds sweet! Gonna work to build it faster!", the fuck are we supposed to do, exactly?

That's like the situation with Roko's Basilisk. We didn't find a good solution there either; everything seems to make things worse, including doing nothing.

EDIT:

I meant: just like the natural human reaction to hearing about the basilisk is "cool, let's tell everyone", the natural human reaction to hearing that AI could kill us all is "cool, let's build it".

I don't really consider that a big deal, as it's indeed a very tiny subset of all possible AI futures, and IMO doesn't make a lot of sense in its current form (Yud said the same, if I remember right, just that he'd avoid discussion of it to avoid someone actually making it work, which I'm happy to go along with).

But in this case it's a much broader class of problems. If we're going to make things better in any way, we need to communicate the problem. That could mean someone thinks instead that AI sounds cool and they want to contribute to it. If the resulting rate of improvement of capabilities outstrips the rate at which people then decide to do something to crack down on that, the problem was unsolvable to begin with. You may try to refine your communication strategy, but there is no way out of this in which you simply, magically achieve it without talking about the problem. The only thing resembling that would be "downplay AIs' abilities and treat them as empty hype to discourage people from investing in them", and that's such a transparent lie it wouldn't stick for a second. Many artists seem to be walking this weird "AI is a threat to our jobs but is also laughably incompetent" line and it's getting them absolutely nowhere.

This passes my vibe check, but I don't know if I'll agree with it after thinking about it. Right now, my rough thoughts are: 

0. What do we even mean by "Doomerism"? Just that P(everyone will die or worse in a few decades) is very high? If so, that's consistent with apocalypse cults of all sorts, not just our own.[1] The people we call "Doomers" have a pretty different space of outcomes they're worried about, and models generating that space, than e.g. a UFO abductee. Or a climate-change "believer" who thinks we're dead in a couple of decades.

1. Doomerism doesn't say we should discuss capabilities updates per se: it says we should look for signs of a miracle. That miracle may be located in capabilities advances, or in an unexpected chance for co-operation, or in alignment advances, which are also capabilities advances. There will be some discussion of the big capabilities breakthroughs. But IMO, there is much less focus on capabilities here than on e.g. an ML forum. Still, the amount of such discussion has greatly increased lately, leading to the next point.

2. Unfortunately, I am unsure whether Doomerism as-it-exists in this community will be communicated accurately to the public. That the resulting memes will develop a symbiotic relationship with capabilities advances remains worryingly plausible. Though I am unsure of how exactly things will develop, I do think they'll likely go badly. Look at the Covid rhetoric: did anyone anticipate it would evolve as it did? How many even anticipated that it would be as bad as it was? I certainly didn't. Most of the capabilities advances seem driven by people with bad models of Doomerism.

3. What's the actual evidence that Doomerism is driving capabilities advances? There's the existence of OpenAI, Anthropic, and TruthAI, which made co-ordination harder. I think OpenAI's actions counterfactually sped up capabilities. Is there anything else on that scale?

 

  1. ^

    ;) 


I believe you are missing the point. If AGI is possible, it will happen sooner or later due to market forces.

So the question is, "did you want AGI to happen without anyone warning about the end of humanity, or did you want someone to warn us even if the side effect was to accelerate it?"

If what Sam Altman says is true, then I would agree that the more it's talked about, the more it actually pushes capabilities forward and enhances interest in advancing them.

In that sense it seems like the real world second order effects are the opposite of the expressed intentions of notable personalities like Eliezer. It's ironic, but matches my sense of how common unintended effects are.

"A machine smarter than humans could kill us all!"

"Are you saying 'a machine smarter than humans'? That actually sounds like a business plan! If it is strong enough to kill us all, it is certainly also strong enough to make us billionaires!"

"But... what about the 'killing us' part?"

"Meh, if we don't build it, someone else will. Think about the money, and hope for the best!"

My post In favor of accelerating problems you're trying to solve suggests we should try to exploit this phenomenon, rather than just be passive bystanders to it.