President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
Released today (10/30/23): this is crazy, perhaps the most sweeping action yet taken by a government on AI. Below, I've segmented the proposals into x-risk and non-x-risk related, excluding those geared towards promoting AI's use[1] and focusing solely on those aimed at risk. It's worth noting that some of these are very specific and direct an action to be taken by an executive branch organization (e.g., sharing of safety test results), while others are guidances that "call on Congress" to pass legislation codifying the desired action.

[Update]: The official order has now been released (this post summarizes the press release), so if you want to see how these are codified in greater granularity, look there[2].

Existential Risk Related Actions:

* Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests.
* Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems' threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.
Just want to say I think the design is brilliant in an age where LW is competing with Substack and other platforms for writers. Very well done.