Authors: Megan Crawford, Finan Adamson, Jeffrey Ladish
Special Thanks to Georgia Ray for Editing
Most in the effective altruism community are aware of a possible existential threat from biological technology but not much beyond that. The form biological threats could take is unclear. Is the primary threat from state bioweapon programs? Or superorganisms accidentally released from synthetic biology labs? Or something else entirely?
If you’re not already an expert, you’re encouraged to stay away from this topic. You’re told that speculating about powerful biological weapons might inspire terrorists or rogue states, and that simply articulating these threats won’t make us any safer. The cry of “Info hazard!” shuts down discussion by fiat, and the reasons cannot be explained, since the reasons might themselves be info hazards. If concerned, intelligent people cannot articulate their reasons for censorship, and cannot coordinate around principles of information management, then that itself is cause for concern. Discussions may simply move to unregulated forums, and dangerous ideas will propagate through well-intentioned ignorance.
We believe that well reasoned principles and heuristics can help solve this coordination problem. The goal of this post is to carve up the information landscape into areas of relative danger and safety: to illuminate some of the islands in the mire that contain more treasures than traps, and to help you judge where you’re likely to find discussion more destructive than constructive.
Useful things to know already if you’re reading this post:
Much of the material in this post also overlaps with Gregory Lewis’ Information Hazards in Biotechnology article, which we recommend.
We’ve divided this paper into two broad categories: risks from information sharing, and risks from secrecy. First we will go over the ways in which sharing information can cause harm, and then how keeping information secret can cause harm.
We believe considering both is important for determining whether or not to share a particular thought or paper. To keep things relatively targeted and concrete, we provide illustrative toy examples, or sometimes even real examples.
This section categorizes ways that sharing information in the biological sciences can be risky.
A topic covered in other Information Hazard posts that we chose not to focus on here is that different audiences can present substantially different risk profiles for the same idea.
With some ideas, you can achieve almost all of the benefits and de-risking associated with sharing by only mentioning your idea to one key researcher, or sharing findings in a journal associated with some obscure subfield, while simultaneously dodging most of the risk of these ideas finding their way to a foolish or bad actor.
If you’re interested in that topic, Gregory Lewis’ paper Information Hazards in Biotechnology covers it well.
A bad actor gets an idea they did not previously have
Some ways this could manifest:
Why might this be important?
State or non-state actors may have trouble developing ideas on their own. Model generation can be quite difficult, so generating or sharing clever new models can be risky. In particular, we are concerned about the possibility of ideas moving from biology researchers to bioterrorists or state actors. Biosecurity researchers are often better-educated and/or more creative than most bad actors. There are also probably many more researchers than people interested in bioterrorism, and this difference in numbers could matter even more: a larger pool of researchers is likely to come up with many more ideas.
A careless actor gets an idea they did not previously have
Some careless actors may have a low chance of thinking of a given interesting idea on their own, but have the inclination and ability to implement an idea if they hear about it from someone else. One reason this might be true is that biosecurity researchers could specifically be looking for interesting possible threats, so the “interesting idea” space they explore will focus more heavily on risky ideas.
Toy example 1: Biosecurity researcher publishes a report about vulnerabilities in the water supply of Exemplandia and a biological agent, Sickmaniasis, that could be used to terrorize Exemplandia. Another researcher writes a paper that explores specific possible implementations of Sickmaniasis, including sequence information and lab procedures for generating Sickmaniasis. In this case of the Unilateralist’s Curse, both security researchers were motivated by the desire to prevent some kind of harm, but the first researcher was specifically more careful about publishing methods.
Toy example 2: Researcher publishes report on how to use a gene drive to drive an insect species extinct. A careless researcher uses this report to create a gene drive in a lab on a test population of an insect species. Some insects escape from the lab, and the wild insect species population crashes. Even though the original researcher’s lab was very careful with test implementations of their gene drives, the information they produced led to a careless lab crashing the population of a whole species.
Real Example: In 1997, rabbit hemorrhagic disease (RHD) began to spread through New Zealand. Authorities believe that New Zealand farmers smuggled the disease into the country and released it intentionally as an animal control measure. RHD was used in Australia as a biocontrol tool, and organizations had attempted to get the New Zealand government to approve it for the same use there. The virus began to spread after the New Zealand government denied their application. This is a case where the authorities who reviewed a biological tool decided it was a bad idea; despite their disapproval, someone released it anyway. This wasn’t a human pathogen, but the demonstrated potential for a unilateral actor to decide to release a banned disease agent, and succeed, is troubling all the same. We’d like to reiterate that unsanctioned pest control using disease is A BAD IDEA!
A bad actor gains access to details (but not an original idea) on how to create a harmful biological agent
The bad actor would not have been able to easily generate the instructions to create the harmful agent without the new source of information. As DNA synthesis & lab automation technology improves, the bottleneck to the creation of a harmful agent is increasingly knowledge & information rather than applied skill. Technical knowledge and precise implementation details have historically been a bottleneck for bioweapons programs, particularly terrorist or poorly-funded programs (see Barriers to Bioweapons by Sonia Ben Ouagrham-Gormley).
A little knowledge is a dangerous thing
Many new technologies (especially in biology) may have unintended side effects. Microscopic organisms can proliferate, and that can get out of hand if procedures are not followed carefully. Sometimes a tentative plan, which might or might not be a good idea, is perceived as a great plan by someone less familiar with its risks. That more careless actor may then take steps to implement the plan without considering the externalities.
As advanced lab equipment becomes cheaper and more accessible, and as more non-academic labs open up without the highly-cautious pro-safety incentives of academia, we might expect to see more experimenters who neglect to practice appropriate safety procedures. We might even see more experimenters who fell through the cracks, and never learned these procedures in the first place. How bad a development this is depends on precisely what those labs are working on, and the quality of their self-supervision.
Second-degree variant: Dangerous implementation knowledge is given to someone who is likely to distribute it, which might later result in a convergence of intent and means in a single individual, either a careless or malicious actor, who produces a dangerous biological product. Some examples of possible distributors might be a person whose job rewards the dissemination of information, or a person who chronically underestimates risks.
This risk means it is important to keep in mind what incentives people have to share information, and whether that might incline them to share information hazards.
Information that is not currently dangerous becomes dangerous
Technological progress can be difficult to predict. Sometimes there are major advances in technology that allow for new capabilities, such as rapidly sequencing and copying genomes. Could the information you share be dangerous in 5 years? 10? 100? How does this weigh against how useful the information is, or how likely it is to become public soon anyway?
Presenting an idea causes people to dismiss risks
Trying to change norms can backfire. If the first people presenting a measure to reduce the publication of risky research are too low-prestige to be taken seriously, no effect might actually be the best-case scenario. An idea that is associated with disreputable people or hard-to-swallow arguments may itself start being treated as disreputable, and face much higher skepticism and hostility than if better, proven arguments had been presented first.
This is almost the inverse of the Streisand effect, which appears to derive from similar psychological principles. In the case of the Streisand Effect, attempts to remove information are what catapult it into public consciousness. In the case of idea inoculation, attempts to publicize an idea ensure that the concept is ignored or dismissed out-of-hand, with no further consideration given to it.
It also connects in interesting ways with Bostrom’s Schema.
This list is not exhaustive, and we chose to lean concrete rather than abstract.
There were a few important-but-abstract risk categories that we didn’t think we could easily do justice to while keeping them succinct and concrete. We felt that several were already implied in a more concrete way by the categories we did keep, but that they encompass some edge-cases our schemas don’t capture. They at least warrant a mention and description.
One is the “Risk of Increased Attention,” what Bostrom calls “Attention Hazard.” This is naturally implied by the four “ideas/actors” categories, but in fact covers a broader set of cases. One area we focused on less is the circumstances in which even useful ideas, combined with smart actors, can eventually lead to unintuitive but catastrophic consequences if given enough attention and funding. This is best exemplified in the fears about the rate of development and investment in AI. It’s also partially exemplified in “Information vulnerable to future advances.”
The other is “Information Several Inferential Distances Out Is Hazardous.” This is a superset of “Information vulnerable to future advances,” but it also encompasses cases where it’s merely a matter of extending an idea out a few further logical steps, not just technological ones.
For both, we felt they partially overlapped with the examples already given, and leaned a bit too abstract and hard-to-model for this post’s focus on concrete examples. However, we think there’s still a lot of value in these important, abstract, and complete (but harder-to-use) schemas.
We’ve talked above about many of the risks involved in sharing information hazards. We take those risks seriously, and think others should as well. But it has been our observation that, in the Effective Altruist community, people often neglect the flipside.
Conversations about risks from biology get shut down and turn into discussions of infohazards, even when the information being shared is already available. There is something to be said for not spreading information further, but shutting down the discussion of people looking for solutions also has downsides.
Leaving it to the experts is not enough when there may not be a group of experts thinking about the problem and coming up with solutions. We encourage people who want to work on biorisks to think about both the value and the risks of sharing potentially dangerous information. Below, we go through the risks and the lost value that come from not sharing information.
A holistic model of information sharing weighs both the risks and the benefits. A decision should be made only after considering both how the information might be used by bad or careless actors AND how valuable it is for good actors trying to further research or coordinate to solve a problem.
Closed research culture stifles innovation
Good actors need information to develop useful countermeasures. A world where researchers cannot communicate their ideas to each other makes model generation more difficult and reduces the field’s ability to build up good defensive systems.
Information is not shared, so risky work is not stopped
Some fields of research are dangerous, or may eventually become dangerous. It is much harder to prevent a class of research if the dangers posed by that research cannot be discussed publicly.
Informal social checks on the standards or behavior of others seem to serve an important, and often underestimated, function as a monitoring and reporting system against unethical or unsafe behaviors. It can be easy to underestimate how much the objections of a friend can shift the way you view the safety of your research; they may bring up a concern you didn’t even think to ask about.
There are also entities with a mandate to do formal checks, and it is dangerous if they are left in the dark. Work environments, labs, or even entire fields can develop their own unusual work cultures. Sometimes, these cultures systematically undervalue a type of risk because of its disproportionate benefits to them, even if the general populace would have objections. Law enforcement, lawmakers, public discussion, reporting, and entities like ethical review boards are intended to intervene in these sorts of cases, but have no way to do so if they never hear about a problem.
Each of these entities has its strengths and weaknesses, but a world without whistleblowers, or one where no one can reach anyone capable of changing these environments, is likely to be a more dangerous world.
Siloing information leaves individual workers blind to the overall goal accomplished
Lab work is increasingly being automated, or outsourced piecemeal. At the same time, the biotechnology industry has an incentive to be secretive about any pre-patent information it uncovers. Without additional precautions, secretive assembly-line-esque offerings increase the likelihood that someone could order a series of steps that look harmless in isolation but create something dangerous when combined.
Talented people don’t go into seemingly empty or underfunded fields
While many researchers and policy makers work in biosecurity, there is a shortage of talent applied to longer-term and more extreme biosecurity problems. There have been only limited efforts to attract top talent to this nascent field.
This may be changing. The Open Philanthropy Project has begun funding projects focused on Global Catastrophic Biorisk, and has provided funding for many individuals beginning their careers in the field of biosecurity.
Policies that require heavy oversight, or that add procedures increasing the cost of doing research, leave fewer opportunities for people who want to make a positive difference.
Suppressing information can cause it to spread
The Streisand effect is named after an incident in which attempts to have photographs taken down led to a media spotlight and widespread discussion of those same photos. The photos had previously been posted in a context where only one or two people had taken enough of an interest to access them.
Something analogous could very easily happen with a paper outlining something hazardous in a research journal, or with an online discussion. The audience may originally have been quite small and targeted, simply due to the nicheness or obscurity of the original context. But an attempt at calling for intervention leads to a public discussion, which spreads the original information. This could be viewed as one of the possible negative outcomes of poorly-targeted whistleblowing.
As mentioned in the section on idea inoculation, this effect is functionally idea inoculation’s inverse and is based on similar principles.
Overall, we think biosecurity in the context of catastrophic risks has been underfunded and underdiscussed. There have been positive developments in the time since we started this paper: the Open Philanthropy Project is aware of the funding problems in this realm and has been funding a variety of projects to make progress on biosecurity.
It can be difficult to know where to start helping in biosecurity. In the EA community, we have the desire to weigh the costs and benefits of philanthropic actions, but that is made more difficult in biosecurity by the need for secrecy.
We hope we’ve given you a place to start and factors to weigh when deciding to share or not share a particular piece of information in the realm of biosecurity. We think the EA community has sometimes erred too much on the side of shutting down discussions of biology by turning them into discussions about infohazards. It’s possible EA is being left out of conversations and decision making processes that could benefit from an EA perspective. We’d like to see collaborative discussion aimed towards possible actions or improvements in biosecurity with risks and benefits of the information considered, but not the central point of the conversation.
It’s a big world with many problems to focus on. If you prefer to focus your efforts elsewhere, feel free to do so. But if you do choose to engage with biosecurity, we hope you can weigh risks appropriately and choose the conversations that will lead to many talented collaborators and a world safer from biological risks.
By the way, the authors are part of the organizing team for the Catalyst Biosecurity Summit. It will bring together synthetic biologists and policymakers, academics and biohackers, and a broad range of professionals invested in biosecurity for a day of collaborative problem-solving. It will be on February 22, 2020. You can sign up for updates here.
Connecting “Risk of Idea Inoculation” with Bostrom’s Schema: this could be seen as a subset of Attention Hazard and a distant cousin of Knowing-Too-Much Hazard. Attention Hazard encompasses any situation where drawing too much attention to a set of known facts increases risk, and the link is obvious. In Knowing-Too-Much Hazard, the presence of knowledge makes certain people a target of dislike. In Idea Inoculation, however, people’s dislike for your incomplete version of the idea rubs off onto the idea itself. ↩︎
Now that we've gone over some of the considerations, here's some of the concrete topics I see as generally high or low hazard for open discussion.
These things may be worth specialists discussing among themselves, but are likely to do more harm than good in an open thread.
Here's a simplification of my current assessment heuristic...
Biorisk - well, wouldn’t it be nice if we’d all been familiar with the main principles of biorisk before 2020? I certainly regretted sticking my head in the sand.
> If concerned, intelligent people cannot articulate their reasons for censorship, cannot coordinate around principles of information management, then that itself is a cause for concern. Discussions may simply move to unregulated forums, and dangerous ideas will propagate through well intentioned ignorance.
Well. It certainly sounds prescient in hindsight, doesn't it?
Infohazards in particular cross my mind: so many people operate on extremely bad information right now. Conspiracy theories abound, and I imagine the legitimate coordination for secrecy surrounding the topic does not help in the least. What would help? Exactly this essay. A clear model of *what* we should expect well-intentioned secrecy to cover, so we can reason sanely about when it’s obviously not.
Y'all done good. This taxonomy clarifies risk profiles better than Gregory Lewis' article, though I think his includes a few vivid-er examples.
I opened a document to experiment tweaking away a little dryness from the academic tone. I hope you don't take offense. Your writing represents massive improvements in readability in its examples and taxonomy, and you make solid, straightforward choices in phrasing. No hopelessly convoluted sentence trees. I don't want to discount that. Seriously! Good job.
As I read, I had a few ideas spark on things that could likely get done at a layman level, in line with spiracular's comment. That comment could use some expansion, especially in the direction of "Prefer to discuss this over that, or discuss in *this way* over *that way*" for bad topics. Very relevantly, I think basic facts should get added to some of the good discussion topics, since they represent information it's better to disseminate (EDIT: see comments).
"Basic facts" as "safe discussion topics": Ooh, I disagree! I think this heuristic doesn't always hold, especially for people writing on a large platform.
For basic information, it is sometimes a good idea to think twice about whether a fact might be heavily skewed toward benefiting harmful actions over protective ones. If you have a big platform, it is especially important to do so.
(It might actually be more important for someone to do this for basic facts, than sophisticated ones? They're the ones a larger audience of amateurs can grasp.)
If something is already widely known, that does somewhat reduce the extent of your "fault" for disseminating it. That rule is more likely to hold for basic facts.
But if there is a net risk to a piece of information, and you are spreading it to people who wouldn't otherwise know? Then larger audiences are a risk-multiplier. So, sometimes spreading a low-risk basic thing widely could be more dangerous, overall, than spreading a high-risk but obscure and specialist thing.
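The risk-multiplier point above can be sketched as a toy expected-harm calculation. Every number here is invented purely for illustration (they are not estimates of anything real); the only claim is structural, that audience size enters multiplicatively:

```python
# Toy model: expected harm from sharing a piece of information.
# All numbers below are invented illustrations, not empirical estimates.

def expected_harm(audience, p_bad_actor, harm_if_enabled):
    """Expected harm ~= (expected bad actors reached) * (harm each can newly cause)."""
    return audience * p_bad_actor * harm_if_enabled

# A "low-risk" basic fact, broadcast to a huge general audience:
broad = expected_harm(audience=10_000_000, p_bad_actor=1e-7, harm_if_enabled=1.0)

# A "high-risk" specialist detail, shared with a small niche audience:
niche = expected_harm(audience=100, p_bad_actor=1e-3, harm_if_enabled=5.0)

print(broad > niche)  # under these toy numbers, the broad "basic" share dominates
```

Because the audience term multiplies everything else, a "safe" fact times ten million readers can outweigh a dangerous fact times a hundred specialists.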
It was easy for me to think of at least 2 cases where spreading an obvious, easy-to-grasp fact could disproportionately increase the hazard of bad actors relative to good ones, in at least some petty ways. Here's one.
Ex: A member of the Rajneeshee cult once deliberately gave a bunch of people food poisoning, then got arrested. This is a pretty basic fact. But I wouldn't want to press a button that would disseminate this fact to 10 million random people? People knowing about this isn't actually particularly protective against food poisoning, and I'd bet that there is at least 1 nasty human in 10 million people. If I don't have an anticipated benefit to sharing, I would prefer not to risk inspiring that person.
On the other hand, passing around the fact that a particular virus needs mucus membranes to enter cells seems... net-helpful? It's easier for people to use that to inform their protective measures, and it's unlikely to help a rare bad actor in the razor's-edge case where they would have infected someone IF ONLY they had known to aim for the mucus membranes, AND where they only knew about that because you told them.
(And then you have complicated intermediate cases. Off the top of my head, WHO's somewhat-dishonest attempt to convince people that masks don't work, in a bid to save them for the medical professionals? I don't think I like what they did (they basically set institutional trust on fire), but the situation they were in does tug at some edge-cases around trying to influence actions vs beliefs. The fact that masks did work, but had a limited supply, meant that throwing information in any direction was going to benefit some and harm others. It also highlights that, paradoxically, it can be common for "basic" knowledge to be flat-out wrong, if your source is being untrustworthy and you aren't being careful...)
Edit: Just separating this for coherence's sake
Lab Safety Procedures/PPE/Sanitation: I think I have some ideas for where I could start on that? BSL is probably a good place to start.
I'd feel pretty weird posting about that on LessWrong, tbh? (I still might, though.)
I don't currently feel like writing this. But, I'll keep it in mind as a possibility.
Summary of orgs, positions, room-for-funding: I do not have the means, access, clearance, or credentials to do this. (I don't care about me lacking some of those credentials, but other people have made it clear that they do.)
I really would like this to exist! I get the sense that better people than me have tried, and were usually only able to get part-way, but I haven't tracked it recently. This has led me to assume that this task is more difficult than you'd expect. I have seen a nice copy of a biosecurity-relevant-orgs spreadsheet circulating at one point, though (which I think could get partial-credit).
The closest thing I probably could output are some thoughts on what broad-projects or areas of research seem likely to be valuable and/or underfunded. But I would expect it to be lower-resolution, and less valuable to people.
Thanks for the proposed edits! I'll look them over.
"Careful, clear, and dry" was basically the tone that I intended. I will try to incorporate the places where your wording was clearer than mine, and I have found several places where it was.
Thank you for your reply! I'm very pleased.
In hindsight, I see I wrote very unclearly. It sounds like I recommended "basic facts" as a separate category of Open Discussion topics. You correctly point out the serious issues with assuming "basic" means "safe". It does not. It really, really does not!
Certainly not what I meant to say. I meant we (the lesswrong community) should actively discuss basic facts *within* the good discussion topics.
Ah! Thanks for the clarification.
I've actually had several people say they liked the Concrete Examples section, but that they wish I'd said more that would help them recreate the thought-process.
Unfortunately, these were old thoughts for me. The logic behind a lot of them feels... "self-evident" or "obvious" to me, or something? Which makes me a worse teacher, because I'm a little blind to what it is about them that isn't landing.
I'd need to understand what people were seeing or missing, to be able to offer helpful guidance around it. And... nobody commented.
(My rant on basic knowledge was a partial-attempt on my part, to crack open my logic for one of these.)
Edit: I added my core heuristic to the Concrete Examples thread
I think the general subject of how to manage infohazards is quite important. I hadn't seen a writeup concretely summarizing the risks of secrecy before (although I've now looked over the Gregory Lewis piece linked near the top of this post). I appreciated the care and nuance that Megan, Finan and Jeffrey demonstrated in expanding the conversation here.
I found this useful both for bio-related infohazards, as well as infohazards in other domains.
I also appreciated a writeup that acted as a sort of hook into biosecurity. I'm not sure that biosecurity should be much more high-profile in EA circles (my impression is that, unlike with AI, the rest of civilization has been doing an okay-ish job, and it seems like much of the help that EAs could contribute requires much more specialization). But it seems useful to have at least a bit more explicit discussion of it.
I'd be interested in a followup post that delved more deeply into heuristics of what sort of open discussion is net-positive. (The OP seems more like a taxonomy than a guide. Spiracular's comment is helpful, but doesn't go into many details, or provide much of a generator for how to decide whether a novel topic is helpful or harmful to talk about publicly)
(I think the present pandemic was a "warning shot" highlighting the importance of knowledge about biosecurity, and this post therefore becomes important in retrospect.)
(You can find a list of all 2019 Review poll questions here.)
Heh. Damn, did this post end up in the right Everett branch.
Surprised about the answers to the second question. In conversations in EA circles I've had about biorisk, infohazards have never been brought up.
Perhaps there is some anchoring going on here?
Nominating for the same reasons I curated.