Authors: Megan Crawford, Finan Adamson, Jeffrey Ladish

Special Thanks to Georgia Ray for Editing


Most people in the effective altruism community are aware that biological technology poses a possible existential threat, but know little beyond that. The form biological threats could take is unclear. Is the primary threat from state bioweapon programs? From superorganisms accidentally released from synthetic biology labs? Or from something else entirely?

If you’re not already an expert, you’re encouraged to stay away from this topic. You’re told that speculating about powerful biological weapons might inspire terrorists or rogue states, and that simply articulating these threats won’t make us any safer. The cry of “Info hazard!” shuts down discussion by fiat, and the reasons cannot be explained, since those might also be info hazards. If concerned, intelligent people cannot articulate their reasons for censorship and cannot coordinate around principles of information management, that is itself a cause for concern. Discussions may simply move to unregulated forums, and dangerous ideas will propagate through well-intentioned ignorance.

We believe that well reasoned principles and heuristics can help solve this coordination problem. The goal of this post is to carve up the information landscape into areas of relative danger and safety; to illuminate some of the islands in the mire that contain more treasures than traps, and to help you judge where you’re likely to find discussion more destructive than constructive.

Much of the material in this post overlaps with Gregory Lewis’ Information Hazards in Biotechnology article, which we recommend.

Risks of Information Sharing

We’ve divided this paper into two broad categories: risks from information sharing, and risks from secrecy. First we will go over the ways in which sharing information can cause harm, and then how keeping information secret can cause harm.

We believe considering both is important for determining whether or not to share a particular thought or paper. To keep things relatively targeted and concrete, we provide illustrative toy examples, or sometimes even real examples.

This section categorizes ways that sharing information in the biological sciences can be risky.

A topic covered in other Information Hazard posts that we chose not to focus on here is that different audiences can present substantially different risk profiles for the same idea.

With some ideas, you can achieve almost all of the benefits and de-risking associated with sharing by only mentioning your idea to one key researcher, or sharing findings in a journal associated with some obscure subfield, while simultaneously dodging most of the risk of these ideas finding their way to a foolish or bad actor.

If you’re interested in that topic, Gregory Lewis’ paper Information Hazards in Biotechnology covers it well.

Bad conceptual ideas to bad actors

A bad actor gets an idea they did not previously have

Some ways this could manifest:

  • A bad actor uses these new ideas to create novel biological weapons or strategies.
  • State bioweapons programs or bioterrorists gain new research directions or ideas.

Why might this be important?

State or non-state actors may have trouble developing ideas on their own. Model generation can be quite difficult, so generating or sharing clever new models can be risky. In particular, we are concerned about the possibility of ideas moving from biology researchers to bioterrorists or state actors. Biosecurity researchers are often better-educated and/or more creative than most bad actors, and there are probably many more researchers than people interested in bioterrorism. This difference in numbers compounds the risk: a larger pool of researchers is likely to come up with many more ideas.


  • Toy example: A biosecurity researcher writes and publishes a paper about vulnerabilities in the water supply of Exemplandia and a biological agent, Sickmaniasis, that could be used to terrorize Exemplandia. Bioterrorists read the paper and decide to carry out an attack. A bioterrorist researches how to manufacture Sickmaniasis and how to disseminate it into the water supply of Exemplandia, and carries out the attack.

Bad conceptual ideas to careless actors

A careless actor gets an idea they did not previously have

Some ways this could manifest:

  • A careless actor decides to either explore an idea publicly in further detail, or decides to implement the idea, not realizing or caring about the damage it could cause.

Why might this be important?

Some careless actors may have a low chance of thinking of a given interesting idea on their own, but have the inclination and ability to implement an idea if they hear about it from someone else. One reason this might be true is that biosecurity researchers could specifically be looking for interesting possible threats, so the “interesting idea” space they explore will focus more heavily on risky ideas.


  • Toy example 1: A biosecurity researcher publishes a report about vulnerabilities in the water supply of Exemplandia and a biological agent, Sickmaniasis, that could be used to terrorize Exemplandia. Another researcher writes a paper that explores specific possible implementations of Sickmaniasis, including sequence information and lab procedures for generating Sickmaniasis. In this case of the Unilateralist’s Curse, both security researchers were motivated by the desire to prevent some kind of harm, but the first researcher was specifically more careful about publishing methods.

  • Toy example 2: A researcher publishes a report on how to use a gene drive to drive an insect species extinct. A careless researcher uses this report to create a gene drive in a lab on a test population of that insect species. Some insects escape from the lab, and the wild insect population crashes. Even though the original researcher’s lab was very careful with test implementations of their gene drives, the information they produced led to a careless lab crashing the population of a whole species.

  • Real Example: In 1997, rabbit hemorrhagic disease (RHD) began to spread through New Zealand. Authorities believe that New Zealand farmers smuggled the disease into the country and released it intentionally as an animal control measure. RHD was used in Australia as a biocontrol tool, and organizations had tried to get the New Zealand government to approve it for the same use. The virus began to spread after the government denied their application. This is a case where the authorities reviewing a biological tool decided it was a bad idea, and someone released it despite their disapproval. RHD is not a human pathogen, but the demonstrated potential for a unilateral actor to release a banned disease agent, and succeed, is troubling all the same. We’d like to reiterate that unsanctioned pest control using disease is A BAD IDEA!

Implementation details to bad actors

A bad actor gains access to details (but not an original idea) on how to create a harmful biological agent

Some ways this could manifest:

  • A bad actor exploits this newly available information to create a weapon they did not have the knowledge or ability to create before, even though they already knew of the potential attack vector.
  • Someone with the intent to produce a potentially-dangerous agent, but not the means or knowledge, is granted access to supplies and/or knowledge that allows them to develop a dangerous biological product.

Why might this be important?

The bad actor would not have been able to easily generate the instructions to create the harmful agent without the new source of information. As DNA synthesis & lab automation technology improves, the bottleneck to the creation of a harmful agent is increasingly knowledge & information rather than applied skill. Technical knowledge and precise implementation details have historically been a bottleneck for bioweapons programs, particularly terrorist or poorly-funded programs (see Barriers to Bioweapons by Sonia Ben Ouagrham-Gormley).


  • Toy example: A researcher publishes the information for how to reconstruct an extinct & deadly human virus. A bioterrorist or state bioweapon program uses this information to recreate the extinct virus and weaponizes it.
  • Real Example: It’s no secret that the smallpox genome is available online. It’s quite conceivable that a country could fund a program to reconstruct it from this information. It’s also not impossible that this has already happened in secret.

Implementation details to careless actors

A little knowledge is a dangerous thing

Some ways this could manifest:

  • Careless actors who might otherwise have had very little likelihood of creating or releasing anything particularly hazardous, gain access to methods or equipment that increase this likelihood
  • A careful researcher offhandedly mentions a potentially-valuable line of research, which they chose not to pursue due to its potentially catastrophic downsides, which might inspire an overly-optimistic colleague to pursue it

Why might this be important?

Many new technologies (especially in biology) may have unintended side effects. Microscopic organisms can proliferate, and that may get out of hand if procedures are not followed carefully. Sometimes a tentative plan, which might or might not be a good idea, is perceived as a great plan by someone less familiar with its risks. The more careless actor may then take steps to implement a plan without considering the externalities.

As advanced lab equipment becomes cheaper and more accessible, and as more non-academic labs open up without the highly-cautious pro-safety incentives of academia, we might expect to see more experimenters who neglect to practice appropriate safety procedures. We might even see more experimenters who fell through the cracks, and never learned these procedures in the first place. How bad a development this is depends on precisely what those labs are working on, and the quality of their self-supervision.

Second-degree variant: Dangerous implementation knowledge is given to someone who is likely to distribute it, which might later result in a convergence of intent and means in a single individual, either a careless or malicious actor, who produces a dangerous biological product. Some examples of possible distributors might be a person whose job rewards the dissemination of information, or a person who chronically underestimates risks.

This risk means it is important to keep in mind what incentives people have to share information, and whether that might incline them to share information hazards.


  • Toy Example 1: A civilian hears about how CRISPR can remove viruses from cells, buys himself some tools, and injects himself with an untested DIY Herpes ‘cure.’ He doesn’t actually cure his herpes, but he does accidentally edit his germline or give himself cancer. There is a massive social backlash towards synthetic biology, and the FDA shuts down multiple scientific attempts at a Herpes cure that used superficially-similar methods but had much higher odds of success.
  • Toy Example 2: An undergrad lab assistant tests out adding a plasmid to E. coli for a novel protein that she heard about at a conference. She fails to note that the original paper included a few non-prominent sentences on the necessity of only transforming varieties with a genetic kill-switch, due to a strong suspicion that this gene considerably increases the hardiness of E. coli. Further carelessness results in this E. coli getting out and multiplying outside of the lab. Eventually, this hardiness gene is picked up by a human pathogen.
  • Real Example: A biohacker, among other exploits, injected himself with an agent meant to enhance muscle growth. This likely spurred others to take dangerous risks and the CEO of a biotech company ended up injecting himself with an untested herpes treatment.
  • Toy Example (Second Degree Variant): A researcher discovers a way to make Azure Death transmissible from guinea pigs to humans and tells a journalist to warn pet owners. The journalist spreads the researcher’s work, wanting to credit them for the discovery, widely spreading their methods.

Information vulnerable to future advances

Information that is not currently dangerous becomes dangerous

Some ways this could manifest:

  • Future tech could turn previously safe information into dangerous information.
  • Technological advances or economies of scale could alter the capabilities we could reasonably expect even a low-competence actor to have access to

Why might this be important?

Technological progress can be difficult to predict. Sometimes there are major advances in technology that allow for new capabilities, such as rapidly sequencing and copying genomes. Could the information you share be dangerous in 5 years? 10? 100? How does this weigh against how useful the information is, or how likely it is to become public soon anyway?


  • Toy Example 1: After future technology makes the discovery of new and functional enzymes much easier, conceptual ideas of bioweapons that previously required highly specialized knowledge to implement are now extremely hazardous.
  • Toy Example 2: A new culturing technique makes it drastically easier and cheaper to grow not only harmless bacterial cells, but also pathogenic ones. Suddenly, a paper published on the highly-specific culturing procedures for a finicky but dangerous pathogen is useful to non-specialists.
  • Real example: The smallpox genome was published online. Later, DNA printing became cheap and easy to use. Publishing the smallpox genome online wasn’t particularly dangerous when it happened: humanity hadn’t yet developed the technology to print organisms from scratch, and genetic engineering methods were much less precise. Now, bad actors with sufficient know-how and technology could use the smallpox genome to print the virus and use it as a bioweapon.

Risk of Idea Inoculation

Presenting an idea causes people to dismiss risks

Some ways this could manifest:

  • Presenting a bad version of a good idea can cause people to dismiss it prematurely and not take it seriously even when it’s presented in a better form

Why might this be important?

Trying to change norms can backfire. If the first people presenting a measure to reduce the publication of risky research are too low-prestige to be taken seriously, no effect might actually be the best-case scenario. An idea that is associated with disreputable people or hard-to-swallow arguments may itself start being treated as disreputable, and face much higher skepticism and hostility than if better, proven arguments had been presented first.

This is almost the inverse of the Streisand effect, which appears to derive from similar psychological principles. In the case of the Streisand Effect, attempts to remove information are what catapult it into public consciousness. In the case of idea inoculation, attempts to publicize an idea ensure that the concept is ignored or dismissed out-of-hand, with no further consideration given to it.

It also connects in interesting ways with Bostrom’s Schema.[1]


  • Toy Example 1: A biohacker attempts using CRISPR to alter their genome to produce more of the hormone incredulin. It doesn’t work and they give themselves cancer. The story gets popularized in media and lawmakers prevent useful research on the uses of CRISPR.
  • Toy Example 2: An overly-enthusiastic crackpot biologist over-promises some huge advancement in the next 2 years, and ends up plastered across the media. Once he’s revealed as a fraud, suddenly no funding agencies want to touch the field even though other people in this specialty are still doing meaningful, realistic work.

Some Other Risk Categories

This list is not exhaustive, and we chose to lean concrete rather than abstract.

There were a few important-but-abstract risk categories that we didn’t think we could easily do justice to while keeping this post succinct and concrete. Several are already implied in a more concrete way by the categories we did keep, but they encompass some edge cases that our schemas don’t capture, so they at least warrant a mention and description.

One is the “Risk of Increased Attention,” what Bostrom calls the “Attention Hazard.” This is naturally implied by the four “ideas/actors” categories, but in fact covers a broader set of cases. One zone we focused less on is the circumstances in which even useful ideas, combined with smart actors, can eventually lead to unintuitive but catastrophic consequences if given enough attention and funding. This is best exemplified by fears about the rate of development and investment in AI, and partially exemplified in “Information vulnerable to future advances.”

The other is “Information Several Inferential Distances Out Is Hazardous.” This is a superset of “Information vulnerable to future advances,” but it also encompasses cases where it’s merely a matter of extending an idea out a few further logical steps, not just technological ones.

For both, we felt they partially overlapped with the examples already given, and leaned a bit too abstract and hard to model for this post’s focus on concrete examples. However, we think there’s still a lot of value in these important, abstract, and complete (but harder-to-use) schemas.

Risks from Secrecy

We’ve talked above about many of the risks involved in information hazards. We take the risks of sharing information hazards seriously, and think others should as well. But in the effective altruism community, we have observed that people often neglect the flipside.

Conversations about risks from biology get shut down and turn into discussions of infohazards, even when the information being shared is already available. There is something to be said for not spreading information further, but shutting down the discussion of people looking for solutions also has downsides.

Leaving it to the experts is not enough when there may be no group of experts thinking about the problem and coming up with solutions. We encourage people who want to work on biorisks to think about both the value and the risks of sharing potentially dangerous information. Below we go through the risks, and the loss of value, that can come from not sharing information.

A holistic model of information sharing weighs both the risks and benefits of sharing. A decision should consider both how the information might be used by bad or careless actors AND how valuable the information is to good actors trying to further research or coordinate to solve a problem.
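As a toy illustration of this kind of weighing (the function name, probabilities, and harm/benefit numbers below are all hypothetical placeholders, not a validated model), one could sketch the trade-off as a crude expected-value calculation:

```python
# Toy sketch of weighing disclosure. All numbers are invented for
# illustration; real decisions involve far more considerations
# (audience, timing, idea inoculation, future tech advances).

def disclosure_value(p_misuse, harm_if_misused, p_beneficial_use, benefit_if_used):
    """Crude expected-value estimate of sharing a piece of information.

    Positive values suggest sharing may be net-positive;
    negative values suggest withholding.
    """
    expected_harm = p_misuse * harm_if_misused
    expected_benefit = p_beneficial_use * benefit_if_used
    return expected_benefit - expected_harm

# Hypothetical example: a defensive finding with a narrow misuse path.
value = disclosure_value(p_misuse=0.01, harm_if_misused=100.0,
                         p_beneficial_use=0.6, benefit_if_used=10.0)
print(value)  # 6.0 expected benefit - 1.0 expected harm = 5.0
```

The point of the sketch is only that both terms belong in the equation: a model that tracks only `expected_harm` will recommend silence for everything, and one that tracks only `expected_benefit` will recommend publishing everything.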

Risk of Lost Progress

Closed research culture stifles innovation

Some ways this could manifest:

  • Ignorance is the default outcome. If secretiveness ensures that nothing is added to the knowledge and work of a field, beneficial progress is unlikely to be made.

Why might this be important?

Good actors need information to develop useful countermeasures. In a world where researchers cannot communicate their ideas to each other, model generation becomes more difficult, and the field’s ability to build good defensive systems is reduced.


  • Toy Example 1: New information is learned about a recently-discovered virus, indicating that it is more dangerous and has greater pandemic potential than originally thought. This information is not shared, on the grounds that it could inspire others to weaponize the virus. As a result, lab safety procedures for working with the virus are not updated.
  • Toy Example 2: Vaccines are not produced because researchers don’t have access to information about dangerous organisms.
  • Toy Example 3: A dangerous scenario is never discussed among good actors avoiding infohazards. Bad actors don’t avoid thinking about infohazards, so they create novel bioweapons that could have been prepared for if a discussion had occurred.
  • Toy Example 4: The public is unaware of risks, so politicians don’t fund programs that develop critical infrastructure for defending against pathogens (see the US government’s defunding of programs like those at the USDA).

Dangerous work is not stopped

Information is not shared, so risky work is not stopped

Some ways this could manifest:

  • Areas with stronger privacy norms, such as industry, may have incentives to hide details about their work. If the risks associated with a particular project are not open information, these risks may be missed or ignored by others engaging in the same work.
  • If a high standard of secrecy is maintained by labs by default, it can be hard for governmental or academic oversight to notice which labs should receive more oversight.

Why might this be important?

Some fields of research are dangerous, or may eventually become dangerous. It is much harder to prevent a class of research if the dangers posed by that research cannot be discussed publicly.

Informal social checks on the standards or behavior of others seem to serve an important, and often underestimated, function as a monitoring and reporting system against unethical or unsafe behavior. It is easy to underestimate how much a friend’s objections can shift the way you view the safety of your research; they may raise a concern you didn’t even think to ask about.

There are also entities with a mandate to do formal checks, and it is dangerous if they are left in the dark. Work environments, labs, or even entire fields can develop their own unusual work cultures. Sometimes, these cultures systematically undervalue a type of risk because of its disproportionate benefits to them, even if the general populace would have objections. Law enforcement, lawmakers, public discussion, reporting, and entities like ethical review boards are intended to intervene in these sorts of cases, but have no way to do so if they never hear about a problem.

Each of these entities has its strengths and weaknesses, but a world without whistleblowers, or one where no one can reach anyone capable of changing these environments, is likely to be a more dangerous world.


  • Toy Example: An academic decides not to publish a paper about the risks of researching a particular strain of bacteria due to high rates of escape from seemingly quarantined labs. Researchers elsewhere begin research on the bacteria, but with lax containment because they were unaware of the risks.
  • Real Almost-Example: In 1972 (a year before the first Asilomar Conference), grad student Janet Mertz mentioned to other grad students that her lab might try to use a virus to put bacterial DNA into mammalian cells. Robert Pollack, on hearing of the plan, told Paul Berg (her supervisor) that he should “put genes into a phage that doesn't grow in a bug that grows in your gut,” and reminded him that SV40 is a small-animal tumor virus that transforms human cells in culture and makes them look like tumor cells. Prior to that discussion, her lab had not fully thought through the potentially dangerous implications of that research.
  • Real Example: The true source of the Rajneeshee Salmonella poisonings was only uncovered when a leader of the cult publicly expressed concern about the behavior of one of its members, and explicitly requested an investigation into their laboratory.

Risk of Information Siloing

Siloing information leaves individual workers blind to the overall goal being accomplished

Some ways this could manifest:

  • It can be more difficult to prevent harm when the systems capable of producing it are not well understood by the participants. If you have processes of production or research where labor is specialized and distributed, moral actors may not notice when they are producing something harmful.

Why might this be important?

Lab work is increasingly being automated or outsourced piecemeal. At the same time, the biotechnology industry has an incentive to be secretive about any pre-patent information it uncovers. Without additional precautions, secretive assembly-line-esque offerings increase the likelihood that someone could order a series of steps that look harmless in isolation, but create something dangerous when combined.


  • Toy Example 1: A platform outsources lab work while granting buyers a high degree of privacy. No individual worker in the assembly line was able to piece together that they were producing a dangerous biological agent until it had already been produced and released.
  • Toy Example 2: Diagnosis of novel diseases takes longer because knowledge of diseases was hidden.
  • Real Example 1: Researchers put together a bird flu that was airborne and killed ferrets. They didn’t create any mutations that didn’t already exist in the wild; they just combined them in a way that nature hadn’t yet, but could through natural recombination. The American and Dutch governments banned publication of the papers describing their methods. Had the researchers been allowed to publish, their work could have given other scientists more information with which to develop a vaccine. The US has since reversed its decision on the ban.
  • Real Example 2: The Guardian successfully ordered part of the smallpox genome to a residential address from a bioprinting company.
  • Real Example 3: A DOD lab accidentally sent weapons-grade anthrax to many labs. The CDC and other organizations have made similar mistakes.

Barriers to Funding and New Talent

Talented people don’t go into seemingly empty or underfunded fields

Some ways this could manifest:

  • A culture of secrecy can serve as a stumbling-block for early-career researchers interested in entering a field. It can make it more challenging to locate information, funding, and aligned mentors, and these can serve to deter people who might otherwise be interested in making a career solving an important problem.

Why might this be important?

While many researchers and policy makers work in biosecurity, there is a shortage of talent applied to longer-term and more extreme biosecurity problems. There have been only limited efforts to attract top talent to this nascent field.

This may be changing. The Open Philanthropy Project has begun funding projects focused on Global Catastrophic Biorisk, and has provided funding for many individuals beginning their careers in the field of biosecurity.

Policies that require heavy oversight, or that add procedures increasing the cost of doing research, leave fewer opportunities for people who want to make a positive difference.


  • Toy Example: A talented biology graduate looks at EA discussions and notices a lack of engagement with the most important biosecurity risks for the far future. They decide the EA community isn’t taking far future concerns seriously and apply their skills elsewhere.
  • Real Example: Labs opt out of valuable pathogen research because regulations increase operating costs and the time costs of workers (Wurtz et al.). This leads to fewer places to learn and fewer job opportunities for people who want to prevent harmful pathogens.

Streisand Effect

Suppressing information can cause it to spread

Some ways this could manifest:

  • Attempting to suppress information can sometimes cause information to spread further than it would have otherwise. Many people’s response to even well-advised attempts at information suppression is to directly or indirectly increase the visibility of the event by discussing it or spreading the underlying information itself.

Why might this be important?

The Streisand effect is named after an incident in which attempts to have photographs taken down led to a media spotlight and widespread discussion of those same photos. The photos had previously been posted in a context where only one or two people had taken enough of an interest to access them.

Something analogous could easily happen with a paper outlining something hazardous in a research journal, or with an online discussion. The audience may have originally been quite targeted simply due to the nicheness or obscurity of its original context, but an attempt at calling for intervention leads to a public discussion, which spreads the original information. This could be viewed as one of the possible negative outcomes of poorly-targeted whistleblowing.

As mentioned in the section on idea inoculation, this effect is functionally idea inoculation’s inverse and is based on similar principles.


  • Toy example: An online discussion group has information-handling policies that some members view as overly restrictive. The frustrated members start a new online discussion group with overly-permissive infohazard guidelines.
  • Real Examples of the Streisand effect: Barbra Streisand’s attempts to remove photos of her seaside mansion from a large database of California coastline photos catapulted said photograph to fame. See also: The Roko’s Basilisk Incident, “Why the Lucky Stiff”’s Infosuicide
  • Real Bio Examples of the Streisand effect: In all likelihood, more people know that the smallpox genome is/was public due to the attempts to suppress it than from organic searches. Relatedly, some dangerous people might have assumed that printed DNA was carefully and successfully monitored if there weren’t so many articles about how sometimes it’s not.


Overall, we think biosecurity in the context of catastrophic risks has been underfunded and underdiscussed. There have been positive developments in the time since we started this paper: the Open Philanthropy Project is aware of the funding problems and has been funding a variety of projects to make progress on biosecurity.

It can be difficult to know where to start helping in biosecurity. In the EA community, we have the desire to weigh the costs and benefits of philanthropic actions, but that is made more difficult in biosecurity by the need for secrecy.

We hope we’ve given you a place to start, and factors to weigh, when deciding whether to share a particular piece of information in the realm of biosecurity. We think the EA community has sometimes erred too far toward shutting down discussions of biology by turning them into discussions about infohazards. As a result, EA may be left out of conversations and decision-making processes that could benefit from an EA perspective. We’d like to see collaborative discussion aimed at possible actions and improvements in biosecurity, with the risks and benefits of the information considered, but not as the central point of the conversation.

It’s a big world with many problems to focus on. If you prefer to focus your efforts elsewhere, feel free to do so. But if you do choose to engage with biosecurity, we hope you can weigh risks appropriately and choose the conversations that will lead to many talented collaborators and a world safer from biological risks.

Catalyst Biosummit

By the way, the authors are part of the organizing team for the Catalyst Biosecurity Summit, which will bring together synthetic biologists and policymakers, academics and biohackers, and a broad range of professionals invested in biosecurity for a day of collaborative problem-solving on February 22, 2020. You can sign up for updates here.


  1. Connecting “Risk of Idea Inoculation” with Bostrom’s Schema: this could be seen as a subset of Attention Hazard and a distant cousin of Knowing-Too-Much Hazard. Attention Hazard encompasses any situation where drawing too much attention to a set of known facts increases risk, and the link is obvious. In Knowing-Too-Much Hazard, the presence of knowledge makes certain people a target of dislike. In Idea Inoculation, by contrast, people’s dislike for your incomplete version of the idea rubs off onto the idea itself. ↩︎


Now that we've gone over some of the considerations, here's some of the concrete topics I see as generally high or low hazard for open discussion.

Good for Open Discussion

  • Broad-application antiviral developments and methods
    • Vaccines
    • Antivirals proper
    • T-cell therapy
    • Virus detection and monitoring
  • How to report lab hazards
    • ...and how to normalize and encourage this
  • Broadly-applicable protective measures
    • Sanitation
    • Bunkers?
  • The state of funding
  • The state of talent
    • What broad skills to develop
    • How to appeal to talent
    • Who talent should talk to

Bad for Open Discussion

These things may be worth specialists discussing among themselves, but are likely to do more harm than good in an open thread.

  • Disease delivery methods
  • Specific Threats
  • Specific Exploitable Flaws in Defense Systems
    • Ex: immune systems, hospital monitoring systems
    • It is especially bad to mention them if they are reliably exploitable
    • If you are simultaneously providing a comprehensive solution to the problem, this can become more of a gray-area. Partial-solutions, or challenging-to-implement solutions, are likely to fall on the bad side of this equation.
  • Much of the synthetic biology surrounding this topic
  • Arguments for and against various agents using disease as an M.O.

Here's a simplification of my current assessment heuristic...

  • What order-of-magnitude is the audience? (a multiplier)
    • Any relevant audience skews/filters?
    • What are the tails?
  • What's the trade-off for offense vs defense? (+/- direction, & size)
    • Is it + or - overall? How big?
    • Do any points swamp the others in importance?
  • What am I not easily factoring in? Are there any gotchas? (checklist + Murphyjutsu)
    • Future Advances
    • Idea Inoculation
    • Second-degree and unintended audiences
    • Murphyjutsu it
  • Sanity Check: Other
    • Roughly how much do I actually trust the judgement I reached?
      • Should I sleep on it? Withhold it?
    • Anyone I should run things by?
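As a toy illustration only: the checklist above could be caricatured as a scoring function. Every weight, scale, and penalty value here is a made-up placeholder for judgment calls, not a real model:

```python
# Toy sketch of the assessment heuristic above. All scales and weights are
# hypothetical illustrations of the checklist's structure, not a validated model.

def info_hazard_score(audience_magnitude, offense_benefit, defense_benefit,
                      gotcha_penalties=()):
    """Rough net-benefit estimate for sharing a piece of information.

    audience_magnitude: order of magnitude of the audience (e.g. 4 for ~10,000)
    offense_benefit:    how much the info helps attackers (0..10)
    defense_benefit:    how much the info helps defenders (0..10)
    gotcha_penalties:   penalties for hard-to-factor items like idea
                        inoculation, future advances, and second-degree or
                        unintended audiences (the Murphyjutsu step)
    """
    # Offense/defense trade-off: + or - overall, and how big?
    tradeoff = defense_benefit - offense_benefit
    # Audience size acts as a multiplier on that trade-off.
    score = tradeoff * (10 ** audience_magnitude)
    # Subtract the gotchas the checklist warns about.
    score -= sum(gotcha_penalties)
    return score

# Sanity check: run the number, then sleep on it / run it by someone.
print(info_hazard_score(audience_magnitude=3,
                        offense_benefit=2, defense_benefit=5,
                        gotcha_penalties=[500]))  # → 2500
```

The point of the caricature is just the structure: audience size multiplies the offense/defense trade-off, and the gotchas get subtracted rather than forgotten.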

Biorisk - well wouldn't it be nice if we'd all been familiar with the main principles of biorisk before 2020? I certainly regretted sticking my head in the sand.

> If concerned, intelligent people cannot articulate their reasons for censorship, cannot coordinate around principles of information management, then that itself is a cause for concern. Discussions may simply move to unregulated forums, and dangerous ideas will propagate through well intentioned ignorance.

Well. It certainly sounds prescient in hindsight, doesn't it?

Infohazards in particular cross my mind: so many people operate on extremely bad information right now. Conspiracy theories abound, and I imagine the legitimate coordination for secrecy surrounding the topic does not help in the least. What would help? Exactly this essay. A clear model of *what* we should expect well-intentioned secrecy to cover, so we can reason sanely over when it's obviously not.

Y'all done good. This taxonomy clarifies risk profiles better than Gregory Lewis' article, though I think his includes a few more vivid examples.

I opened a document to experiment tweaking away a little dryness from the academic tone. I hope you don't take offense. Your writing represents massive improvements in readability in its examples and taxonomy, and you make solid, straightforward choices in phrasing. No hopelessly convoluted sentence trees. I don't want to discount that. Seriously! Good job.

As I read, I had a few ideas spark on things that could likely get done at a layman level, in line with spiracular's comment. That comment could use some expansion, especially in the direction of "Prefer to discuss this over that, or discuss in *this way* over *that way*" for bad topics. Very relevantly, I think basic facts should get added to some of the good discussion topics, since they represent information it's better to disseminate (EDIT, see comments).

  • Summarize or link to standard lab safety materials.
  • Summarize the various levels of PPE and sanitation practices. It doesn't have to get into the higher end to prove useful for people:
    • How do you keep dishes sanitary? The fridge? A wound?
    • How can you neutralize sewage, purify water, or responsibly use antibiotics?
  • The state of talent... I imagine there's low-hanging fruit here but idk what it is. Could list typical open positions and what the general degree track looks like.
  • Give a quick overview of the major biosecurity funds
  • Do a GiveWell-esque summary of which organizations have room for more funding, and which promising subcause-areas have relatively few/poor organizations pursuing them. (Open Phil's)

"Basic facts" as "safe discussion topics": Ooh, I disagree! I think this heuristic doesn't always hold, especially for people writing on a large platform.

For basic information, it is sometimes a good idea to think twice about whether a fact might be heavily skewed towards benefiting harmful actions over protective ones. If you have a big platform, it is especially important to do so.

(It might actually be more important for someone to do this for basic facts, than sophisticated ones? They're the ones a larger audience of amateurs can grasp.)

If something is already widely known, that does somewhat reduce the extent of your "fault" for disseminating it. That rule is more likely to hold for basic facts.

But if there is a net risk to a piece of information, and you are spreading it to people who wouldn't otherwise know? Then larger audiences are a risk-multiplier. So, sometimes spreading a low-risk basic thing widely could be more dangerous, overall, than spreading a high-risk but obscure and specialist thing.
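The risk-multiplier point can be put as back-of-the-envelope arithmetic. These numbers are invented purely to illustrate the shape of the comparison:

```python
# Invented numbers, purely illustrative: expected harm scales with audience size.
basic_fact_risk_per_reader = 1e-7       # low per-reader risk, widely graspable
specialist_fact_risk_per_reader = 1e-4  # higher per-reader risk, but obscure

broad_audience = 10_000_000   # a "basic fact" reaches many amateurs
specialist_audience = 100     # a specialist fact reaches few readers

# Expected harm = per-reader risk x number of readers.
basic_harm = basic_fact_risk_per_reader * broad_audience
specialist_harm = specialist_fact_risk_per_reader * specialist_audience

print(basic_harm > specialist_harm)  # → True
```

With these made-up figures, the "low-risk" basic fact spread widely carries roughly 100x the expected harm of the "high-risk" specialist fact, which is the whole point of treating audience size as a multiplier.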

It was easy for me to think of at least 2 cases where spreading an obvious, easy-to-grasp fact could disproportionately increase the hazard of bad actors relative to good ones, in at least some petty ways. Here's one.

Ex: A member of the Rajneeshee cult once deliberately gave a bunch of people food poisoning, then got arrested. This is a pretty basic fact. But I wouldn't want to press a button that would disseminate this fact to 10 million random people? People knowing about this isn't actually particularly protective against food poisoning, and I'd bet that there is at least 1 nasty human in 10 million people. If I don't have an anticipated benefit to sharing, I would prefer not to risk inspiring that person.

On the other hand, passing around the fact that a particular virus needs mucus membranes to enter cells seems... net-helpful? It's easier for people to use that to advise their protective measures, and it's unlikely to help a rare bad actor who is sitting on the razor's-edge case where they would have infected someone IF ONLY they had known to aim for the mucus membranes, AND where they only knew about that because you told them.

(And then you have complicated intermediate cases. Off the top of my head, WHO's somewhat-dishonest attempt to convince people that masks don't work, in a bid to save them for the medical professionals? I don't think I like what they did (they basically set institutional trust on fire), but the situation they were in does tug at some edge-cases around trying to influence actions vs beliefs. The fact that masks did work, but had a limited supply, meant that throwing information in any direction was going to benefit some and harm others. It also highlights that, paradoxically, it can be common for "basic" knowledge to be flat-out wrong, if your source is being untrustworthy and you aren't being careful...)

Edit: Just separating this for coherence's sake

Lab Safety Procedures/PPE/Sanitation: I think I have some ideas for where I could start on that? BSL is probably a good place to start.

I'd feel pretty weird posting about that on LessWrong, tbh? (I still might, though.)

I don't currently feel like writing this. But, I'll keep it in mind as a possibility.

Summary of orgs, positions, room-for-funding: I do not have the means, access, clearance, or credentials to do this. (I don't care about me lacking some of those credentials, but other people have made it clear that they do.)

I really would like this to exist! I get the sense that better people than me have tried, and were usually only able to get part-way, but I haven't tracked it recently. This has led me to assume that this task is more difficult than you'd expect. I have seen a nice copy of a biosecurity-relevant-orgs spreadsheet circulating at one point, though (which I think could get partial-credit).

The closest thing I probably could output are some thoughts on what broad-projects or areas of research seem likely to be valuable and/or underfunded. But I would expect it to be lower-resolution, and less valuable to people.

Thanks for the proposed edits! I'll look them over.

"Careful, clear, and dry" was basically the tone that I intended. I will try to incorporate the places where your wording was clearer than mine, and I have found several places where it was.

Thank you for your reply! I'm very pleased.

In hindsight, I see I wrote very unclearly. It sounds like I recommended "basic facts" as a separate category of Open Discussion topics. You correctly point out the serious issues with assuming "basic" means "safe". It does not. It really, really does not!

Certainly not what I meant to say. I meant we (the lesswrong community) should actively discuss basic facts *within* the good discussion topics.

Ah! Thanks for the clarification.

I've actually had several people say they liked the Concrete Examples section, but that they wish I'd said more that would help them recreate the thought-process.

Unfortunately, these were old thoughts for me. The logic behind a lot of them feels... "self-evident" or "obvious" to me, or something? Which makes me a worse teacher, because I'm a little blind to what it is about them that isn't landing.

I'd need to understand what people were seeing or missing, to be able to offer helpful guidance around it. And... nobody commented.

(My rant on basic knowledge was a partial-attempt on my part, to crack open my logic for one of these.)

Edit: I added my core heuristic to the Concrete Examples thread


I think the general subject of how to manage infohazards is quite important. I hadn't seen a writeup concretely summarizing the risks of secrecy before (although I've now looked over the Gregory Lewis piece linked near the top of this post). I appreciated the care and nuance that Megan, Finan and Jeffrey demonstrated in expanding the conversation here.

I found this useful both for bio-related infohazards, as well as infohazards in other domains.

I also appreciated a writeup that acted as a sort of hook-into-biosecurity. I'm not sure that biosecurity should be much more high profile in EA circles (my impression is that unlike AI the rest of civilization has been doing an okay-ish job, and it seems like much of the help that EAs could contribute requires much more specialization). But it seems useful to have at least a bit more explicit discussion of it.

I'd be interested in a followup post that delved more deeply into heuristics of what sort of open discussion is net-positive. (The OP seems more like a taxonomy than a guide. Spiracular's comment is helpful, but doesn't go into many details, or provide much of a generator for how to decide whether a novel topic is helpful or harmful to talk about publicly)

(I think the present pandemic was a "warning shot" highlighting the importance of knowledge about biosecurity, and this post therefore becomes important in retrospect.)

(You can find a list of all 2019 Review poll questions here.)

Heh. Damn, did this post end up in the right Everett branch.

Surprised about the answers to the second question. In conversations in EA circles I've had about biorisk, infohazards have never been brought up.

Perhaps there is some anchoring going on here?

Nominating for the same reasons I curated.
