Software engineer at the Nucleic Acid Observatory in Boston. Speaking for myself unless I say otherwise.

Just to be clear and pull all of the Esvelt stuff together, are you saying he thinks that...

I can't speak for him, but I'm pretty sure he'd agree, yes.

When he gets into a room with people with similar qualifications, how do they react to those ideas? Have you talked it over with epidemiologists?

I don't know, sorry! My guess is that they're generally much less concerned than he is, primarily because they've spent their careers thinking about natural risks rather than human ones, and haven't spent much time thinking about how someone might intentionally cause large-scale harm (not that I think they should!).

If one disease provokes widespread paranoia and/or an organized quarantine, that affects all of them. Same if the population gets so sparse that it's hard for any of them to spread.

Sorry, I was thinking about 'independence' in the sense of not everyone being susceptible to the same illnesses, because I've mostly been thinking about the stealth scenario where you don't know to react until it's too late. You're right that in a wildfire scenario reactions to one disease can restrict the spread of another (recently: covid lockdowns in 2020 cutting the spread of almost everything else).

Anybody with limited resources is going to dislike the idea of having the work multiplied.

Probably depends a lot on how the work scales with more pathogens?

The two don't seem incompatible, really. You could imagine something that played along asymptomatically (while spreading like crazy), then pulled out the aces when the time was right (syphilis).

I don't think they're incompatible; I wasn't trying to give an exclusive "or".

Which is not to say that you could actually create it. I don't know about that (and tend to doubt it). I also don't know how long you could avoid surveillance even if you were asymptomatic, or how much risk you'd run of allowing rapid countermeasure development, or how closely you'd have to synchronize the "aces" part. ... Doesn't that mean that every case has to "come out of incubation" at relatively close to the same time, so that the first deaths don't tip people off? That seems really hard to engineer.

I think this is all pretty hard to get into without bringing up infohazards, unfortunately.

It kind of depends on what kind of resources you need to pull off something really dramatic. If you need to be a significant institution working toward an official purpose, then the supply of omnicidal actors may be nil. If you need to have at least a small group and be generally organized and functional and on-task, I'd guess it'd be pretty small, but not zero. If any random nut can do it on a whim, then we have a problem.

If we continue not doing anything then I think we do get to where one smart and reasonably dedicated person can do it; perhaps another Kaczynski?

Full disclosure: Bostromian species potential ideas don't work for me anyhow. I think killing everybody alive is roughly twice as bad as killing half of them, not roughly infinity times as bad. I don't think that matters much; we all agree that killing any number is bad.

While full-scale astronomical waste arguments don't work for a lot of people, it sounds like your views are almost as extreme in the other direction? If you're up for getting into this, is it that you don't think we should consider people who don't exist yet in our decisions?

The idea that lay people using LLMs are worth worrying more about than people with expertise and access to top laboratories seems wrong to me.

I agree it's definitely wrong today. I'm concerned it may stop being wrong in the future if we don't get our act together, because biology is currently democratizing quickly while the number of people at top labs is relatively constant.

I think efforts to reduce insider risk are also really valuable, but these look less like the kind of technical work I've been focusing on and more like better policies at labs and not engaging in particular kinds of risky research. I'm excited for other people to work on these!

(Also, the second half of my list and Esvelt's "Detect" and "Defend" apply regardless of where the attack originates.)

I'm not assuming that the only person with Llama 5 is the one intent on causing harm. Instead, I unfortunately think the landscape of biological attack and defense is, at least currently, much more favorable to attackers than to defenders.

If the biology-as-a-service orgs are following your suggestion to screen for pathogens

I'm not sure we get to assume that? Screening is far from universal today, and not mandatory.

their Llama-5 is going to say, ah, this is exactly what a terrorist would ask for if they were trying to trick us into making a pathogen

This only works if the screener has enough of the genome at once that Llama 5 can figure out what it does, but this is easy to work around.

In general, beyond just bioattack scenarios, any argument purporting to demonstrate dangers of open source LLMs must assume that the defenders also have access


If that's the case, it argues for an approach similar to delayed disclosure policies in computer security: if a new model enables attacks against some existing services, give them early access and time to fix it, then proceed with wide release.

I don't actually disagree with this! The problem is that the current state of biosecurity is so bad that we need to fix quite a few things first. Once we do have biology-as-a-service KYC, good synthesis screening, restricted access to biological design tools, metagenomic surveillance, much better PPE, etc., then I don't see Llama 5 as making us appreciably less safe from bioattacks. But that's much more than 90 days of work! I go deeper into this in Biosecurity Culture, Computer Security Culture.

Which does make me wonder whether we were correct that shellac is what was on there before! Since that did seem sturdy.

  1. You start out talking about "large scale" attacks, then segue into the question of killing everyone, as though it were the same thing. Most of the post seems to be about universal fatality.

The attacks I'm trying to talk about are ones aimed at human extinction or at otherwise severely limiting human potential (ex: preventing off-world spread). Either directly, through infecting and killing nearly everyone, or indirectly, through causing global civilizational collapse. You're right that I'm slightly sloppy in calling this "extinction", but the alternatives are verbosity or jargon.

  2. You haven't supported the idea that a recognizably biological pathogen that can kill everyone can actually exist. To do that, it has to ...

I agree the post does not argue for this, and it's not trying to. Making the full case is really hard to do without making us less safe through information hazards, but:

it has to have a 100 percent fatality rate

Instead of a single 100%-fatal pathogen you could combine several, each with a lower but roughly independent fatality rate.
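As a toy illustration of that arithmetic (made-up numbers; this treats the fatality rates as fully independent, which is the "~independent" assumption above):

```python
# Probability of dying given exposure to several independent pathogens:
# you survive only if you survive each one.
def combined_fatality(rates):
    survival = 1.0
    for rate in rates:
        survival *= 1 - rate
    return 1 - survival

# Three hypothetical pathogens, each "only" 80% fatal, jointly
# approach the lethality of a single 100%-fatal one.
print(round(combined_fatality([0.8, 0.8, 0.8]), 3))  # 0.992
```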

keep the host alive long enough to spread to multiple other hosts

See Securing Civilisation Against Catastrophic Pandemics for the idea of "wildfire" and "stealth" pandemics. The idea is that to be a danger to civilization, a pathogen would likely need either to be so infectious that we are not able to contain it (consider a worse measles) or to have a long enough incubation period that by the time we learn about it, it's already too late (consider a worse HIV).

have modes of spread that work over long distances and aren't easily interrupted

In the wildfire scenario, one possibility is an extremely infectious airborne pathogen. In the stealth scenario, this is not required because the spread happens before people know there is something to interrupt.

probably be able to linger in the environment to catch isolated stragglers

This depends a lot on how much you think a tiny number of isolated stragglers would be able to survive and restart civilization.

either be immune to therapy or vaccination, or move fast enough to obviate them

In the wildfire scenario, this is your second one: moving very fast. In the stealth scenario, we don't know that we need therapy/vaccination until it's too late.

be genetically stable enough that the "kill everybody" variant, as opposed to mutants, is the one that actually spreads

I think this is probably not possible to answer without getting into information hazards. I think the best I can do here is to say that I'm pretty sure Kevin Esvelt (MIT professor, biologist, CRISPR gene drive inventor, etc) doesn't see this as a blocker.

(for the threat actor you posit) leave off-target species alone

This doesn't seem like much of a barrier to me?

  3. If it can exist, you haven't supported the idea that it can be created by intentional design.

This is another one where, for information-hazard reasons, the best I can currently offer is that Esvelt thinks it can.

  4. If it can be created by intentional design, you haven't supported the idea that it can be created confidently without large-scale experimentation


  5. you haven't supported the idea that it can be manufactured or delivered without large resources, in such a way that it will be able to do its job without dying out or falling to countermeasures

This is the scary thing about a pandemic: once it is well seeded it spreads on its own through normal human interaction. For most other ways you might try to cause similar harm you would need to set up a massive distribution network, but not here.

  6. It isn't easy to come up with plausible threat actors who want to kill everybody.

In an LW context I think the easiest actors to imagine are suffering-focused ones. Consider someone who thinks that suffering matters far more than anything else, enough that they'd strongly prefer ending humanity to spreading life beyond earth.

why isn't 95 percent fatality bad enough to worry about? Or even 5 percent?

I also think those are quite bad, and worth working to prevent! And, note that everything I've proposed at the end of the post is the kind of thing that you would also do if you were trying to reduce the risk of something that kills 5%.

But the point I am arguing in the post is that something that might kill everyone, or close enough to end global civilization, is much more likely than you would get from extrapolating historical attacks by small groups.

Bioweapons in general are actually kind of lousy for non-movie-villains at most scales, including large scales, because they're so unpredictable, so poorly controllable, and so poorly targetable.

I don't think those apply for the kind of omnicidal actors I'm covering here?

It would be kind of sidetracking things to get into the reasons why, but just to put it on the record, I have serious doubts about your countermeasures, too.

Happy to get into these too if you like!

Overall, I do think folks who are skeptical of experts who won't share their full reasoning, or who trust different experts who don't think this is practical, should end up with a much more skeptical view than mine. I think we can make some progress as we get a clearer idea of which concepts are too dangerous to share, but probably not enough.

The leadership team–Mira, Brad, Jason, Che, Hannah, Diane, Anna, Bob, Srinivas, Matt, Lilian, Miles, Jan, Wojciech, John, Jonathan, Pat, and many more–is clearly ready to run the company without me. They say one way to evaluate a CEO is how you pick and train your potential successors; on that metric I am doing far better than I realized. It’s clear to me that the company is in great hands, and I hope this is abundantly clear to everyone. Thank you all.

I read this as saying to investors: "don't worry: even if the new board's review process does determine I should no longer be CEO and I leave, the company will continue to run very well". I don't think he thinks that's a likely outcome of the review, but what matters is what investors might expect.

I think people mostly aren't imagining it, though you do see the metaphor in icons and hear it in explanations.

I have CMD+=" -S $1", so if $1 contains any spaces the word splitting will come out wrong.

Now, I know this about my script and will be careful not to do that, but it's still a risk.
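To illustrate the failure mode, a minimal sketch (count_args is a hypothetical stand-in, since the original script isn't shown; bash arrays are the usual fix):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the real command: just reports how many
# arguments it received.
count_args() { echo "$#"; }

arg="file with spaces.txt"

# String version, as in the comment: the unquoted $CMD expansion
# word-splits, so the single filename becomes three arguments.
CMD="count_args"
CMD+=" -S $arg"
$CMD    # prints 4 (-S plus three fragments of the filename)

# Array version: each element stays one word, spaces and all.
cmd=(count_args -S "$arg")
"${cmd[@]}"    # prints 2 (-S plus the intact filename)
```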
