AI regulation could draw inspiration from the field of biosafety, specifically Biosafety in Microbiological and Biomedical Laboratories (BMBL), the CDC and NIH's guidelines outlining the necessary precautions for working with dangerous biological agents and recommending a systematic approach to assessing their risks.

The remainder of this report will describe the structure and mission of BMBL, outline its key principles and recommendations, and indicate relevant takeaways for the field of AI regulation.

Epistemic status: I am not an expert in biosafety. However, I think a summary document that highlights concrete safety measures taken in a field adjacent to AI, along with some actionable steps AI labs could take to increase safety, could be useful. All constructive feedback and suggestions for improvement are welcome!

Structure and Mission

BMBL is an advisory document for protecting laboratory staff, the public, and the environment from exposure to dangerous microorganisms and hazardous materials (e.g. radioactive agents). While many organizations and agencies use BMBL as a basis for regulations, it is primarily an advisory document: a comprehensive protocol that helps laboratories identify risks and ensure safe conduct when working with dangerous microorganisms and hazardous materials.

  • Relevance for AI Regulation: One difference between biosafety and AI safety is that biological laboratories have a more obvious incentive to protect their staff, since the danger of contracting a disease is more immediate than the dangers of interacting with an AI system. Similar guidelines for AI may therefore need to be legally binding.


BMBL is a set of biosafety guidelines compiled by experts and members of the public. To produce BMBL, the Office of Laboratory Science and Safety (OLSS) works with the National Institutes of Health (NIH) to recruit over 200 expert contributors from scientific societies, federal agencies (NIH, CDC, FBI, and many more), and the public.

  • Relevance for AI Regulation: AI regulators could use a similar process. For instance, the director of an office within the National Telecommunications and Information Administration (NTIA) could assemble a team of experts to produce similar guidelines. Input from businesses and the public should also be included to form a comprehensive picture of the risks posed by AI.

Key Principles and Procedures

Containment of dangerous microorganisms is key to biosafety. Containment refers to the principle that laboratory staff, the public, and the environment should be protected from exposure to the dangerous microorganisms being manipulated in the laboratory.

  • Relevance for AI Regulation: AI labs should follow a similar principle, ensuring that dangerous AI systems are contained rather than released to the public market.


Risk assessment is key to preventing laboratory-associated infections. Risk assessment is the process of identifying the correct procedures for handling dangerous samples in order to prevent laboratory-associated infections (LAIs) among both laboratory staff and the public.

  • Relevance for AI Regulation: AI labs working with potentially dangerous models should identify procedures that prevent the code from being distributed or the AI system from leaking, including leakage by a well-meaning actor within the company (e.g. an employee sending a potentially dangerous AI system artifact over an unencrypted messaging service).


Protective measures are taken relative to the degree of risk posed by specific organisms. BMBL employs a risk-based approach to biosafety, in which the stringency of protective measures scales with the degree of risk posed by specific microorganisms (termed agents), ensuring an effective distribution of resources.


“Err on the side of caution”. BMBL works under the precautionary principle of “imposing safeguards more rigorous than needed” where there is an insufficient amount of data to determine risk.

  • Relevance for AI Regulation: How this rule would apply to AI labs is less clear. In biosafety, the principle primarily targets precautions inside the lab (higher-level protective suits, increased ventilation, etc.); future research needs to identify analogous precautions for an AI lab that do not create obstacles to researching lesser-known AI models.


Degree of risk determines the degree of containment. Each level of containment describes the microbiological practices, safety equipment, and facility safeguards for the corresponding level of risk associated with handling an agent. The risk criteria are: Infectivity, Severity of disease, Transmissibility, Nature of the work being conducted, and Origin of the agent.

  • Relevance for AI Regulation: Experts should determine how well these criteria translate into the domain of AI. Infectivity and Transmissibility might correspond to the ability and speed with which an AI model can make copies of itself, Severity might be measured in terms of harm caused, etc.


Four levels of containment based on the risk criteria: 

BSL-1: appropriate for agents that do not cause disease in immunocompetent adult humans,

BSL-2: moderate-risk agents that do not transmit through aerosols and cause disease of varying (but not lethal) severity,

BSL-3: agents with known potential for aerosol transmission that can cause potentially lethal infections,

BSL-4: agents posing a high risk of life-threatening disease transmitted by aerosol, for which there is no known treatment.

  • Relevance for AI Regulation: Having more relaxed or more constrained standards for handling AI models based on risk seems especially useful, since systems that pose little to no risk can immensely increase productivity and efficiency across a range of industries and public services. Specific levels for AI could be determined, e.g., through a scoring system like the one employed in Canadian law; a minimal sketch of how such a tiering scheme might look follows below.
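
To make this concrete, here is a minimal Python sketch of a risk-based tiering scheme for AI models. Every criterion name, weight, and threshold below is invented purely for illustration; a real scheme would need expert calibration, and the mapping from BMBL's criteria to AI analogues is my own speculation.

```python
# Hypothetical sketch only: invented criteria, weights, and thresholds.
from dataclasses import dataclass


@dataclass
class AIRiskProfile:
    """Speculative AI analogues of BMBL's risk criteria, each rated 0-3."""
    self_replication: int        # analogue of Infectivity/Transmissibility
    harm_severity: int           # analogue of Severity of disease
    exposure: int                # analogue of Nature of the work (internal demo vs. public API)
    provenance_uncertainty: int  # analogue of Origin of agent (how little is known)

    def containment_level(self) -> int:
        """Map the profile to a tier 1-4, analogous to BSL-1 through BSL-4."""
        score = (self.self_replication + self.harm_severity
                 + self.exposure + self.provenance_uncertainty)
        # "Err on the side of caution": high uncertainty bumps the score up.
        if self.provenance_uncertainty >= 3:
            score += 2
        if score <= 2:
            return 1
        if score <= 5:
            return 2
        if score <= 8:
            return 3
        return 4


# Example: a modestly capable, internally deployed model with well-understood
# training data lands in tier 2 under these invented thresholds.
print(AIRiskProfile(self_replication=1, harm_severity=1,
                    exposure=1, provenance_uncertainty=1).containment_level())  # -> 2
```

The design mirrors the precautionary principle above: uncertainty about an agent's origin raises the containment level rather than lowering it.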


BMBL provides detailed risk assessments for specific known agents. Agent Summary Statements are written, based on the levels above, for agents known to present a risk to laboratory personnel and to public health. The statements are prepared by the scientists, clinicians, and biosafety professionals who contributed to BMBL.

  • Relevance for AI Regulation: Agent Summaries for specific models could be extremely useful, because they would give businesses and the public guidance on how to deploy systems safely while radically improving the efficiency of their work (e.g. using level-1 AIs to discover new drugs). This could be done e.g. through model cards outlining intended uses and risks for specific models; a minimal sketch follows below.
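
As an illustration, a machine-readable model card in the spirit of an Agent Summary Statement might look like the sketch below. All field names and values are hypothetical, not an existing standard; real model cards are typically richer documents.

```python
# Hypothetical sketch of a model card as structured data; every field name
# and value is illustrative only.
model_card = {
    "model_name": "example-drug-discovery-model",  # hypothetical model
    "containment_level": 1,                        # per a tiering scheme like the sketch above
    "intended_uses": [
        "screening candidate molecules under human review",
    ],
    "out_of_scope_uses": [
        "autonomous synthesis planning without human oversight",
    ],
    "known_risks": [
        "may propose toxic compounds if prompted adversarially",
    ],
    "handling_requirements": [
        "weights stored on access-controlled infrastructure",
        "outputs reviewed by a domain expert before use",
    ],
}
```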


BMBL recommends an ongoing six-step risk-assessment procedure to mitigate risks. Laboratories are instructed to engage in ongoing risk assessment for particular agents and procedures, especially before working with a new agent.

1) Identify hazards characteristic for the agent and assess inherent risk. 

2) Identify procedural hazards of working with the agent. 

3) Determine the biosafety level.

4) Consult a third-party professional, expert, or expert body.

5) Assess proficiency of staff regarding safety practices. 

6) Continually review risk-management strategies in the lab.

  • Relevance for AI Regulation: Given the rapid pace of innovation in AI, it seems necessary to mandate that AI labs engage in a continual review of their safety procedures for specific models. The risk-assessment procedure seems especially relevant for AI because of the growing potential of AI systems to engage in covert communication. AI labs should therefore monitor both the dangers posed by their systems and those systems' potential to leak; a sketch of such a recurring review follows below.
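
As a rough illustration, the six steps could be transcribed into a recurring checklist that a lab re-runs for each model. The sketch below is hypothetical: step wording is adapted from the list above, and the function and model name are invented.

```python
# Hypothetical sketch: BMBL's six-step risk assessment as a recurring
# checklist for an AI lab. Step wording is adapted from the list above.
RISK_ASSESSMENT_STEPS = [
    "Identify hazards inherent to the model (capabilities, failure modes)",
    "Identify procedural hazards (fine-tuning, deployment, weight transfer)",
    "Determine the containment level",
    "Consult a third-party professional, expert, or expert body",
    "Assess staff proficiency in the relevant safety practices",
    "Review and update risk-management strategies",
]


def print_review_status(model_name: str, completed_steps: set) -> None:
    """Print which steps are done and which remain; meant to be re-run regularly."""
    print(f"Risk-assessment status for {model_name}:")
    for number, step in enumerate(RISK_ASSESSMENT_STEPS, start=1):
        status = "done" if number in completed_steps else "PENDING"
        print(f"  {number}. [{status}] {step}")


print_review_status("example-model", completed_steps={1, 2, 3})
```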


Thanks to Jakub Kraus for valuable comments. Cross-posted on the EA Forum: https://forum.effectivealtruism.org/posts/g38CkMbFzKBtdzFXY/biosafety-regulations-bmbl-and-their-relevance-for-ai
