The main goal of my work these days is trying to reduce the chances of individuals or small groups causing large-scale harm through engineered pandemics, potentially civilizational collapse or extinction. One question in figuring out whether this is worth working on, or funding, is: how large is the risk?

One estimation approach would be to look at historical attacks, but while they've been terrible they haven't actually killed very many people. The deadliest was the September 11 attacks, at ~3k deaths. This is much smaller in scale than the most severe instances of other disasters, like dam failure (25k-250k dead after 1975's Typhoon Nina) or pandemics (75M-200M dead in the Black Death). If you tighten your reference class even further to include only historical biological attacks by individuals or small groups, the one with the most deaths is just five, in the 2001 anthrax attacks.

Put that way, I'm making a pretty strong claim: while the deadliest small-group bio attack ever only killed five people, we're on track for a future where one could kill everyone. Why do I think the future might be so unlike the past?

Short version: I expect a technological change which expands which actors would try to cause harm.

The technological change is the continuing decrease in the knowledge, talent, motivation, and resources necessary to create a globally catastrophic pandemic. Consider someone asking the open source de-censored equivalent of GPT-6 how to create a humanity-ending pandemic. I expect it would read virology papers, figure out what sort of engineered pathogen might be appropriate, walk you through all the steps in duping multiple biology-as-a-service organizations into creating it for you, and give you advice on how to release it for maximum harm. And even without LLMs, the number of graduate students who would be capable of doing this has been increasing quickly as technological progress and biological infrastructure decrease the difficulty.

The other component is a shift in which actors we're talking about. Instead of terrorists, using terror as a political tool, consider people who believe the planet would be better off without humans. This isn't a common belief, but it's also not that rare. Consider someone who cares deeply about animals, ecosystems, and the natural world, or is primarily focused on averting suffering: they could believe that while the deaths of all living people would be massively tragic, it would still give us a much better world on balance. Note that they probably wouldn't be interested in smaller-scale attacks: if it doesn't have a decent chance of wiping out humanity then they'd just be causing suffering and chaos without making progress towards their goals; they're not movie villains! Once a sufficiently motivated person or small group could potentially kill everyone, we have a new kind of risk from people who would have seen smaller-scale death as negative.

Now, these people are not common. There's a trope where, for example, opponents of environmentalism claim that human extinction is the goal, even when most radical environmentalists would see human extinction as a disaster. But what makes me seriously concerned is that as the bar for causing extinction continues to lower, the chances that someone with these views does have the motivation and drive to succeed get dangerously high. And since these views are disproportionately common among serious engineering-minded folks, willing to trust the moral math, I think some will be the kind of highly capable and careful people who could work in secret for years sustained by a clear conviction that they were doing the right thing.

Fortunately, I think this is a risk we can seriously lower. For example, we should:

  • Require biology-as-a-service companies to screen orders for pathogen sequences and apply anti-money-laundering-style "know your customer" (KYC) screening.

  • Ensure LLMs do not help people kill everyone.

  • Verify companies releasing open source LLMs have built them in a way where their safeguards can't be trivially removed.

  • Detect stealth pathogens, in a way that would give us warning while there is still time to do something about it (what I'm working on).

  • Develop much better and cheaper PPE so that once we detect a pandemic we can keep the core functions of society running.

  • Improve our ability to evaluate new vaccines and other medicines much more quickly so we could potentially roll out a countermeasure in time to stop an in-progress pandemic.
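The first item in the list above, screening synthesis orders, can be illustrated with a deliberately toy sketch: flag any order that shares a k-mer with a list of sequences of concern. The hazard entry, 20-base window, and exact-match rule here are all illustrative assumptions; real screening systems and real hazard databases are far more sophisticated than this.

```python
# Toy sketch of synthesis-order screening, purely illustrative.
# The "hazard" entry below is a made-up placeholder, not a real sequence.
HAZARD_SEQUENCES = {
    "EXAMPLE_FLAGGED_FRAGMENT": "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG",
}

K = 20  # window size; real screeners match far more robustly than exact k-mers


def kmers(seq, k=K):
    """All length-k windows of a sequence, as a set."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}


def screen_order(order_seq):
    """Return the names of hazard entries sharing any k-mer with the order."""
    order_kmers = kmers(order_seq.upper())
    return [name for name, hazard in HAZARD_SEQUENCES.items()
            if order_kmers & kmers(hazard)]


# A benign order matches nothing; an order embedding the flagged fragment
# is caught even with extra flanking sequence on each side.
print(screen_order("ATGAAACCCGGGTTTAAACCCGGGTTTATG"))
print(screen_order("GCGC" + HAZARD_SEQUENCES["EXAMPLE_FLAGGED_FRAGMENT"] + "GCGC"))
```

The point of the sketch is only that the mechanics of flagging are cheap; the hard parts are maintaining the hazard list, resisting order-splitting across providers, and the KYC side.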

If you want to read more in this direction I'd recommend Kevin Esvelt's 80,000 Hours podcast appearance (transcript) and his Delay, Detect, Defend paper.


If you tighten your reference class even further to include only historical biological attacks by individuals or small groups, the one with the most deaths is just five, in the 2001 anthrax attacks.

It's worth noting that the attacks were carried out either by Bruce Edwards Ivins, who was paid out of funds meant for defending against bioattacks, or by someone in his vicinity.

It seems strange to me that the recommendations you make don't take that into account. 

The idea that lay people using LLMs are worth worrying more about than people with expertise and access to top laboratories seems wrong to me. It's just an easy position to hold because it's not inconvenient for people with power.

The idea that lay people using LLMs are worth worrying more about than people with expertise and access to top laboratories seems wrong to me.

I agree it's definitely wrong today. I'm concerned it may stop being wrong in the future if we don't get our act together, because biology is currently democratizing quickly while the number of people at top labs is relatively constant.

I think efforts to reduce insider risk are also really valuable, but these look less like the kind of technical work I've been focusing on and more like better policies at labs and not engaging in particular kinds of risky research. I'm excited for other people to work on these!

(Also, the second half of my list and Esvelt's "Detect" and "Defend" apply regardless of where the attack originates.)


I think efforts to reduce insider risk are also really valuable, but these look less like the kind of technical work I've been focusing on and more like better policies at labs and not engaging in particular kinds of risky research.

Of your proposals, it seems to me that the LLM question is a policy question. Faster evaluation of vaccines is also largely about policy.

In general, that sentiment sounds a bit like "It's easy to search for the keys under the lamppost, so that's what I will do".

Esvelt doesn't have in his threat model "people who work on vaccines release the pathogen for their own gain" the way Bruce Edwards Ivins did according to the FBI.

Esvelt does say dangerous things like "Only after intense discussions at the famous Asilomar conference of 1975 did they correctly conclude that recombinant DNA within carefully chosen laboratory-adapted constructs posed no risk of spreading on its own."

While you might argue that the amount of risk is acceptable, pretending that it's zero means Kevin Esvelt doesn't have that much credibility when it comes to the actual act of reducing risk. He lists a bunch of interventions that EA funders can spend their money on so that they can feel like they are taking effective action on biorisk while not addressing the center of the risk.

jbash:

Strong downvoted, because it assumes and perpetuates a deeply distorted threat picture which would be pretty much guaranteed to misdirect resources, but which also seems to be good at grabbing minds on Less Wrong.

Basically it's full of dangerous memes that could be bad for biosecurity, or security in general.

  1. You start out talking about "large scale" attacks, then segue into the question of killing everyone, as though it were the same thing. Most of the post seems to be about universal fatality.
  2. You haven't supported the idea that a recognizably biological pathogen that can kill everyone can actually exist. To do that, it has to have a 100 percent fatality rate; and still keep the host alive long enough to spread to multiple other hosts; and have modes of spread that work over long distances and aren't easily interrupted; and probably be able to linger in the environment to catch isolated stragglers; and either be immune to therapy or vaccination, or move fast enough to obviate them; and be genetically stable enough that the "kill everybody" variant, as opposed to mutants, is the one that actually spreads; and (for the threat actor you posit) leave off-target species alone.
  3. If it can exist, you haven't supported the idea that it can be created by intentional design.
  4. If it can be created by intentional design, you haven't supported the idea that it can be created confidently without large-scale experimentation, regardless of how intelligent you are. This means that the barriers to entry do not get lower in the ways you describe. This objection also applies to creating pretty much any novel pathogen short of universal lethality. You truly can't just think them into being.
  5. If it can be created with low barriers to entry, you haven't supported the idea that it can be manufactured or delivered without large resources, in such a way that it will be able to do its job without dying out or falling to countermeasures. This one actually applies more to attacks that want to be large-scale but sub-universally lethal, since the pathogens for those would presumably have to be limited somehow.
  6. It isn't easy to come up with plausible threat actors who want to kill everybody. You end up telling an uncritical just-so story. For example, you ignore the fact that your hypothetical environmentalists would probably be very worried about evolution and blowback into other species. You also skip steps to assume that anybody who has the environmental concerns you describe would "probably" be unsatisfied with less than 100 percent human fatality.

Any time you find yourself talking about 100 percent fatality, or about anybody trying to achieve 100 percent fatality, I think it's a good idea to sit back and check your thought processes for dramatic bias. I mean, why isn't 95 percent fatality bad enough to worry about? Or even 5 percent?

Bioweapons in general are actually kind of lousy for non-movie-villains at most scales, including large scales, because they're so unpredictable, so poorly controllable, and so poorly targetable. Not to say that there aren't a few applications, or even that there aren't a few actual movie villains out there. But there are even more damned fools, and they might be a better place to concentrate your concerns.

It would be kind of sidetracking things to get into the reasons why, but just to put it on the record, I have serious doubts about your countermeasures, too.

  1. You start out talking about "large scale" attacks, then segue into the question of killing everyone, as though it were the same thing. Most of the post seems to be about universal fatality.

The scale of the attacks I'm trying to talk about are ones aimed at human extinction or otherwise severely limiting human potential (ex: preventing off-world spread). Either directly, through infecting and killing nearly everyone, or indirectly through causing global civilizational collapse. You're right that I'm slightly sloppy in calling this "extinction", but the alternatives are verbosity or jargon.

  2. You haven't supported the idea that a recognizably biological pathogen that can kill everyone can actually exist. To do that, it has to ...

I agree the post does not argue for this, and it's not trying to. Making the full case is really hard to do without making us less safe through information hazards, but:

it has to have a 100 percent fatality rate

Instead of one 100% fatal pathogen you could combine several, each with an ~independent lower rate.
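To spell out the arithmetic: under the strong (and by no means guaranteed) assumption that susceptibility to each pathogen is independent, combining n pathogens that each kill a fraction p of the population gives an overall fatality rate of 1 - (1 - p)^n. A minimal sketch with illustrative numbers:

```python
# Combined fatality of n independent pathogens, each with fatality rate p.
# Someone survives only if they survive all n, i.e. with probability (1 - p)**n.
def combined_fatality(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(combined_fatality(0.5, 1))  # 0.5
print(combined_fatality(0.5, 4))  # 0.9375
```

The independence assumption is doing all the work here, which is exactly what the reply below pushes on.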

keep the host alive long enough to spread to multiple other hosts

See Securing Civilisation Against Catastrophic Pandemics for the idea of "wildfire" and "stealth" pandemics. The idea is that to be a danger to civilization a pathogen would likely either need to be so infectious that we are not able to contain it (consider a worse measles) or have a long enough incubation period that by the time we learn about it it's already too late (consider a worse HIV).

have modes of spread that work over long distances and aren't easily interrupted

In the wildfire scenario, one possibility is an extremely infectious airborne pathogen. In the stealth scenario, this is not required because the spread happens before people know there is something to interrupt.

probably be able to linger in the environment to catch isolated stragglers

This depends a lot on how much you think a tiny number of isolated stragglers would be able to survive and restart civilization.

either be immune to therapy or vaccination, or move fast enough to obviate them

In the wildfire scenario, this is your second one: moving very fast. In the stealth scenario, we don't know that we need therapy/vaccination until it's too late.

be genetically stable enough that the "kill everybody" variant, as opposed to mutants, is the one that actually spreads

I think this is probably not possible to answer without getting into information hazards. I think the best I can do here is to say that I'm pretty sure Kevin Esvelt (MIT professor, biologist, CRISPR gene drive inventor, etc) doesn't see this as a blocker.

(for the threat actor you posit) leave off-target species alone

This doesn't seem like much of a barrier to me?

  3. If it can exist, you haven't supported the idea that it can be created by intentional design.

This is another one where right now for information hazards reasons the best I can offer is that Esvelt thinks it can.

  4. If it can be created by intentional design, you haven't supported the idea that it can be created confidently without large-scale experimentation

Ditto

  5. you haven't supported the idea that it can be manufactured or delivered without large resources, in such a way that it will be able to do its job without dying out or falling to countermeasures

This is the scary thing about a pandemic: once it is well seated it spreads on its own through normal human interaction. For most other ways you might cause similar harm you would need to set up a massive distribution network, but not here.

  6. It isn't easy to come up with plausible threat actors who want to kill everybody.

In an LW context I think the easiest actors to imagine are suffering-focused ones. Consider someone who thinks that suffering matters far more than anything else, enough that they'd strongly prefer ending humanity to spreading life beyond earth.

why isn't 95 percent fatality bad enough to worry about? Or even 5 percent?

I also think those are quite bad, and worth working to prevent! And, note that everything I've proposed at the end of the post is the kind of thing that you would also do if you were trying to reduce the risk of something that kills 5%.

But the point I am arguing in the post is that something that might kill everyone, or close enough to end global civilization, is much more likely than you would get from extrapolating historical attacks by small groups.

Bioweapons in general are actually kind of lousy for non-movie-villains at most scales, including large scales, because they're so unpredictable, so poorly controllable, and so poorly targetable.

I don't think those apply for the kind of omnicidal actors I'm covering here?

It would be kind of sidetracking things to get into the reasons why, but just to put it on the record, I have serious doubts about your countermeasures, too.

Happy to get into these too if you like!

Overall, I do think folks who are skeptical of experts who won't share their full reasons, or who trust different experts who don't think this is practical, should end up with a much more skeptical view than I have. I think we can make some progress as we get a clearer idea of which concepts are too dangerous to share, but probably not enough.

jbash:

Pulling this to the top, because it seems, um, cruxish...

I think the best I can do here is to say that Kevin Esvelt (MIT professor, biologist, CRISPR gene drive inventor, etc) doesn't see this as a blocker.

In this sort of case, I think appeal to authority is appropriate, and that's a lot better authority than I have.

Just to be clear and pull all of the Esvelt stuff together, are you saying he thinks that...

  1. Given his own knowledge and/or what's available or may soon be available to the public,
  2. plus a "reasonable" lab that might be accessible to a small "outsider" group or maybe a slightly wealthy individual,
  3. and maybe a handful of friends,
  4. plus at least some access to the existing biology-as-a-service infrastructure,
  5. he could design and build a pathogen, as opposed to evolving one using large scale in vivo work,
  6. and without having to passage it through a bunch of live hosts,
  7. that he'd believe would have a "high" probability of either working on the first try, or
    1. failing stealthily enough that he could try again,
    2. including not killing him when he released it,
    3. and working within a few tries,
  8. to kill enough humans to be either an extinction risk or a civilization-collapsing risk,
  9. and that a relatively sophisticated person with "lesser" qualifications, perhaps a BS in microbiology, could
    1. learn to do the same from the literature, or
    2. be coached to do it by an LLM in the near future.

Is that close to correct? Are any of those wrong, incomplete, or missing the point?

When he gets into a room with people with similar qualifications, how do they react to those ideas? Have you talked it over with epidemiologists?

The scale of the attacks I'm trying to talk about are ones aimed at human extinction or otherwise severely limiting human potential (ex: preventing off-world spread). Either directly, through infecting and killing nearly everyone, or indirectly through causing global civilizational collapse. You're right that I'm slightly sloppy in calling this "extinction", but the alternatives are verbosity or jargon.

I think that, even if stragglers die on their own, killing literally everyone is qualitatively harder than killing an "almost everyone" number like 95 percent. And killing "almost everyone" is qualitatively harder than killing (or disabling) enough people to cause a collapse of civilization.

I also doubt that a simple collapse of civilization[1] would be the kind of permanent limiting event you describe[2].

I think there's a significant class of likely-competent actors who might be risk-tolerant enough to skate the edge of "collapsing civilization" scale, but wouldn't want to cause extinction or even get close to that, and certainly would never put in extra effort to get extinction. Many such actors probably have vastly more resources than anybody who wants extinction. So they're a big danger for sub-extinction events, and probably not a big danger for extinction events. I tend to worry more about those actors than about omnicidal maniacs.

So I think it's really important to keep the various levels distinct.

Instead of one 100% fatal pathogen you could combine several, each with an ~independent lower rate.

How do you make them independent? If one disease provokes widespread paranoia and/or an organized quarantine, that affects all of them. Same if the population gets so sparse that it's hard for any of them to spread.

Also, how does that affect the threat model? Coming up with a bunch of independent pathogens presumably takes a better-resourced, better-organized threat than coming up with just one. Usually when you see some weird death cult or whatever, they seem to do a one-shot thing, or at most one thing they've really concentrated on and one or two low effort add-ons. Anybody with limited resources is going to dislike the idea of having the work multiplied.

The idea is that to be a danger to civilization a pathogen would likely either need to be so infectious that we are not able to contain it (consider a worse measles) or have a long enough incubation period that by the time we learn about it it's already too late (consider a worse HIV).

The two don't seem incompatible, really. You could imagine something that played along asymptomatically (while spreading like crazy), then pulled out the aces when the time was right (syphilis).

Which is not to say that you could actually create it. I don't know about that (and tend to doubt it). I also don't know how long you could avoid surveillance even if you were asymptomatic, or how much risk you'd run of allowing rapid countermeasure development, or how closely you'd have to synchronize the "aces" part.

This depends a lot on how much you think a tiny number of isolated stragglers would be able to survive and restart civilization.

True indeed. I think there's obviously some level of isolation where they all just die off, but there's probably some lower level of isolation where they find each other enough to form some kind of sustainable group... after the pathogen has died out. Humans are pretty long-lived.

You might even have a sustainable straggler group survive all together. Andaman islanders or the like.

By the way, I don't think "sustainable group" is the same as "restart civilization". As long as they can maintain a population in hunter-gatherer or primitive pastoralist mode, restarting civilization can wait for thousands of years if it has to.

In the stealth scenario, we don't know that we need therapy/vaccination until it's too late.

Doesn't that mean that every case has to "come out of incubation" at relatively close to the same time, so that the first deaths don't tip people off? That seems really hard to engineer.

Bioweapons in general are actually kind of lousy for non-movie-villains at most scales, including large scales, because they're so unpredictable, so poorly controllable, and so poorly targetable.

I don't think those apply for the kind of omnicidal actors I'm covering here?

Well, yes, but what I was trying to get at was that omnicidal actors don't seem to me like the most plausible people to be doing very naughty things.

It kind of depends on what kind of resources you need to pull off something really dramatic. If you need to be a significant institution working toward an official purpose, then the supply of omnicidal actors may be nil. If you need to have at least a small group and be generally organized and functional and on-task, I'd guess it'd be pretty small, but not zero. If any random nut can do it on a whim, then we have a problem.

I was writing on the assumption that reality is closer to the beginning of that list.

Happy to get into these too if you like!

I might like, all right, but at the moment I'm not sure I can or should commit the time. I'll see how things look tomorrow.


  1. ... depleted fossil resources or no... ↩︎

  2. Full disclosure: Bostromian species potential ideas don't work for me anyhow. I think killing everybody alive is roughly twice as bad as killing half of them, not roughly infinity times as bad. I don't think that matters much; we all agree that killing any number is bad. ↩︎

Just to be clear and pull all of the Esvelt stuff together, are you saying he thinks that...

I can't speak for him, but I'm pretty sure he'd agree, yes.

When he gets into a room with people with similar qualifications, how do they react to those ideas? Have you talked it over with epidemiologists?

I don't know, sorry! My guess is that they are generally much less concerned than he is, primarily because they've spent their careers thinking about natural risks instead of human ones and haven't (not that I think they should!) spent a lot of time thinking about how someone might cause large-scale harm.

If one disease provokes widespread paranoia and/or an organized quarantine, that affects all of them. Same if the population gets so sparse that it's hard for any of them to spread.

Sorry, I was thinking about 'independence' in the sense of not everyone being susceptible to the same illnesses, because I've mostly been thinking about the stealth scenario where you don't know to react until it's too late. You're right that in a wildfire scenario reactions to one disease can restrict the spread of another (recently: covid lockdowns in 2020 cutting the spread of almost everything else).

Anybody with limited resources is going to dislike the idea of having the work multiplied.

Probably depends a lot on how the work scales with more pathogens?

The two don't seem incompatible, really. You could imagine something that played along asymptomatically (while spreading like crazy), then pulled out the aces when the time was right (syphilis).

I don't think they're incompatible; I wasn't trying to give an exclusive "or".

Which is not to say that you could actually create it. I don't know about that (and tend to doubt it). I also don't know how long you could avoid surveillance even if you were asymptomatic, or how much risk you'd run of allowing rapid countermeasure development, or how closely you'd have to synchronize the "aces" part. ... Doesn't that mean that every case has to "come out of incubation" at relatively close to the same time, so that the first deaths don't tip people off? That seems really hard to engineer.

I think this is all pretty hard to get into without bringing up infohazards, unfortunately.

It kind of depends on what kind of resources you need to pull off something really dramatic. If you need to be a significant institution working toward an official purpose, then the supply of omnicidal actors may be nil. If you need to have at least a small group and be generally organized and functional and on-task, I'd guess it'd be pretty small, but not zero. If any random nut can do it on a whim, then we have a problem.

If we continue not doing anything then I think we do get to where one smart and reasonably dedicated person can do it; perhaps another Kaczynski?

Full disclosure: Bostromian species potential ideas don't work for me anyhow. I think killing everybody alive is roughly twice as bad as killing half of them, not roughly infinity times as bad. I don't think that matters much; we all agree that killing any number is bad.

While full-scale astronomical waste arguments don't work for a lot of people, it sounds like your views are almost as extreme in the other direction? If you're up for getting into this, is it that you don't think we should consider people who don't exist yet in our decisions?

I can't speak for him, but I'm pretty sure he'd agree, yes.

Hrm. That modifies my view in an unfortunate direction.

I still don't fully believe it, because I've seen a strong regularity that everything looks easy until you try it, no matter how much of an expert you are... and in this case actually making viruses is only one part of the necessary expertise. But it makes me more nervous.

I don't know, sorry! My guess is that they are generally much less concerned than he is, primarily because they've spent their careers thinking about natural risks instead of human ones and haven't (not that I think they should!) spent a lot of time thinking about how someone might cause large-scale harm.

Just for the record, I've spent a lot of my life thinking about humans trying to cause large scale harm (or at least doing things that could have large scale harm as an effect). Yes, in a different area, but nonetheless it's led me to believe that people tend to overestimate risks. And you're talking about a scale of efficacy that I don't think I could get with a computer program, which is a much more predictable thing working in a much more predictable environment.

If you're up for getting into this, is it that you don't think we should consider people who don't exist yet in our decisions?

I've written a lot about it on Less Wrong. But, yes, your one-sentence summary is basically right. The only quibble is that "yet" is cheating. They don't exist, period. Even if you take a "timeless" view, they still don't exist, anywhere in spacetime, if they never actually come into being.

Any time you find yourself talking about 100 percent fatality, or about anybody trying to achieve 100 percent fatality, I think it's a good idea to sit back and check your thought processes for dramatic bias. I mean, why isn't 95 percent fatality bad enough to worry about? Or even 5 percent?

I agree that drama bias is a serious issue here, exacerbated by how much EAs/LWers place importance on pivotal acts for x-risk which is a massive red flag for drama/story biases.

On the other hand, this is focusing on existential risk, and with some exceptions, 5% lethality probably matters little to that question, though I agree that lower fatality percentages matter here, especially since exponential population growth is not a safe assumption anymore.

It isn't easy to come up with plausible threat actors who want to kill everybody.

While I kind of agree with this, I think this is unfortunately the easiest element of the threat model, compared to your other points, so I'd not rely on it.

Unfortunately, I feel like a lot of your comment is asking for things that are likely to be info hazardous, and I'd like to see an explanation for why the burden of proof should shift to the people who are warning us.

Unfortunately, I feel like a lot of your comment is asking for things that are likely to be info hazardous, and

Well, actually it's more like pointing out that those things don't exist. I think (1) through (4) are in fact false/impossible.

But if I'm wrong, it could still be possible to support them without giving instructions.

I'd like to see an explanation for why to shift the burden of proof to the people that are warning us.

Well, I think one applicable "rationalist" concept tag would be "Pascal's Mugging".

But there are other issues.

If you go in talking about mad environmentalists or whoever trying to kill all humans, it's going to be a hard sell. If you try to get people to buy into it, you may instead bring all security concerns about synthetic biology into disrepute.

To whatever degree you get past that and gain influence, if you're fixated on "absolutely everybody dies in the plague" scenarios (which again are probably impossible), then you start to think in terms of threat actors who, well, want absolutely everybody to die. Whatever hypotheticals you come up with there, they're going to involve very small groups, possibly even individuals, and they're going to be "outsiders". And deranged in a focused, methodical, and actually very unusual way.

Thinking about outsiders leads you to at least deemphasize the probably greater risks from "insiders". A large institution is far more likely to kill millions, either accidentally or on purpose, than a small subversive cell. But it almost certainly won't try to kill everybody.

... and because you're thinking about outsiders, you can start to overemphasize limiting factors that tend to affect outsiders, but not insiders. For example, information and expertise may be bottlenecks for some random cult, but they're not remotely as serious bottlenecks for major governments. That can easily lead you to misdirect your countermeasures. For example all of the LLM suggestions in the original post.

Similarly, thinking only about deranged fanatics can lead you to go looking for deranged fanatics... whereas relatively normal people behaving in what seem to them like relatively normal ways are perhaps a greater threat. You may even miss opportunities to deal with people who are deranged, but not focused, or who are just plain dumb.

In the end, by spending time on an extremely improbable scenario where eight billion people die, you can seriously misdirect your resources and end up failing to prevent, or mitigate, less improbable cases where 400 million die. Or even a bunch of cases where a few hundred die.

>And even without LLMs, the number of graduate students who would be capable of doing this has been increasing quickly as technological progress and biological infrastructure decrease the difficulty.

Grad student mental health support might be the next big EA cause area.

> Consider someone asking the open source de-censored equivalent of GPT-6 how to create a humanity-ending pandemic. I expect it would read virology papers, figure out what sort of engineered pathogen might be appropriate, walk you through all the steps in duping multiple biology-as-a-service organizations into creating it for you, and give you advice on how to release it for maximum harm.

This commits a common error in these scenarios: implicitly assuming that the only person in the entire world who has access to the LLM is a terrorist, and everyone else is basically on 2023 technology. Stated explicitly, it's absurd, right? (We'll call the open source de-censored equivalent of GPT-6 Llama-5, for brevity.)

If the terrorist has Llama-5, so do the biology-as-a-service orgs, so do law-enforcement agencies, etc. If the biology-as-a-service orgs are following your suggestion to screen for pathogens (which is sensible), their Llama-5 is going to say, ah, this is exactly what a terrorist would ask for if they were trying to trick us into making a pathogen. Notably, the defenders need a version that can describe the threat scenario, i.e. an uncensored version of the model!

In general, beyond just bioattack scenarios, any argument purporting to demonstrate dangers of open source LLMs must assume that the defenders also have access. Everyone having access is part of the point of open source, after all.

Edit: I might as well state my own intuition here that:

  • In the long run, equally increasing the intelligence of attacker and defender favors the defender.
  • In the short run, new attacks can be made faster than defense can be hardened against them.

If that's the case, it argues for an approach similar to delayed disclosure policies in computer security: if a new model enables attacks against some existing services, give them early access and time to fix it, then proceed with wide release.

I'm not assuming that the only person with Llama 5 is the one intent on causing harm. Instead, I unfortunately think the sphere of biological attacks is, at least currently, much more favorable to attackers than defenders.

> If the biology-as-a-service orgs are following your suggestion to screen for pathogens

I'm not sure we get to assume that? Screening is far from universal today, and not mandatory.

> their Llama-5 is going to say, ah, this is exactly what a terrorist would ask for if they were trying to trick us into making a pathogen

This only works if the screener has enough of the genome at once that Llama 5 can figure out what it does, but this is easy to work around.
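As a toy sketch of that limitation (everything here is hypothetical and simplified: the 50-base match window, the placeholder sequence, and the screening rule are all assumptions for illustration, not any real provider's system), a screener that checks each order independently against a database of sequences of concern never sees a match once a sequence is split into fragments shorter than its match window:

```python
# Toy model: flag an order if it shares any WINDOW-length substring
# with a known sequence of concern. (WINDOW and all sequences are
# made-up illustration values, not a real screening protocol.)

WINDOW = 50  # assumed minimum match length the screener checks

def order_is_flagged(order_seq, flagged_seq, window=WINDOW):
    """True if order_seq shares any window-length substring with flagged_seq."""
    flagged_kmers = {flagged_seq[i:i + window]
                     for i in range(len(flagged_seq) - window + 1)}
    return any(order_seq[i:i + window] in flagged_kmers
               for i in range(len(order_seq) - window + 1))

flagged = "ACGT" * 50  # stand-in for a 200-base sequence of concern

# A single order containing the whole sequence is caught...
assert order_is_flagged(flagged, flagged)

# ...but the same sequence split into 40-base fragments, submitted as
# separate orders (possibly to different providers), never triggers a
# match, because no single order contains a full 50-base window.
fragments = [flagged[i:i + 40] for i in range(0, len(flagged), 40)]
assert not any(order_is_flagged(frag, flagged) for frag in fragments)
```

The point of the sketch is only the structural one already made in the comment: screening that looks at each order in isolation, over a fixed window, can be defeated by splitting, so the screener needs either cross-order aggregation or enough of the genome at once to recognize what it encodes.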

> In general, beyond just bioattack scenarios, any argument purporting to demonstrate dangers of open source LLMs must assume that the defenders also have access

Sure!

> If that's the case, it argues for an approach similar to delayed disclosure policies in computer security: if a new model enables attacks against some existing services, give them early access and time to fix it, then proceed with wide release.

I don't actually disagree with this! The problem is that the current state of biosecurity is so bad that we need to fix quite a few things first. Once we do have biology as a service KYC, good synthesis screening, restricted access to biological design tools, metagenomic surveillance, much better PPE, etc, then I don't see Llama 5 as making us appreciably less safe from bioattacks. But that's much more than 90d! I get deeper into this in Biosecurity Culture, Computer Security Culture.

Strong upvoted. I'm really glad that people like you are thinking about this.

Something that people often miss with bioattacks is the economic dimension. After the 2008 financial crisis, economic failure/collapse became perhaps the #1 goalpost of the US-China conflict.

It's even debatable whether the 2008 financial crisis was the cause of the entire US-China conflict (e.g. lots of people in DC and Beijing would put the odds at >60% that >50% of the current US-China conflict was caused by the 2008 recession alone, in contrast to other variables like the emergence of unpredictable changes in cybersecurity).

Unlike conventional war e.g. over Taiwan and cyberattacks, economic downturns have massive and clear effects on the balance of power between the US and China, with very little risk of a pyrrhic victory (I don't currently know how this compares to things like cognitive warfare which also yield high-stakes victories and defeats that are hard to distinguish from natural causes).

Notably, the imperative to cause massive economic damage, rather than destroy the country itself, allows attackers to ratchet down the lethality as far as they want, so long as it's enough to cause lockdowns which cause economic damage (maybe mass IQ reduction or other brain effects could achieve this instead). 

GOF research is filled with people who spent >5 years deeply immersed in a medical perspective, e.g. virology, so it seems fairly likely to me that GOF researchers will think about the wider variety of capabilities of bioattacks, rather than inflexibly sticking to the bodycount-maximizing mindset of the Cold War.

I think that due to disorganization and compartmentalization within intelligence agencies, as well as unclear patterns of emergence and decay of groups of competent people, it's actually more likely that easier-access biological attacks would first be carried out by radicals with privileged access within state agencies or state-adjacent organizations (like Booz Allen Hamilton, or the Internet Research Agency, which was accused of interfering with the 2016 election on behalf of the Russian government).

These radicals might incorrectly (or even correctly) predict that their country is a sinking ship and that the only way out is to personally change the balance of power; theoretically, they could even correctly predict that they are the only ones left competent enough to do this before it's too late.

Did you see my old post with a biorisk map?
https://www.lesswrong.com/posts/9Ep7bNRQZhh5QNKft/the-map-of-global-catastrophic-risks-connected-with