The PDF can be found here:

Some quick notes regarding this linkpost: 

  • I have not seen this resource posted on LessWrong as a linkpost; if such a post already exists, please comment with a link and I will delete this one. 
  • I have not read the report in full, but have read its introductory sections and the announcement post on the White House's website. 

My quick take: 

  • The report does mention "Explainable AI" (e.g., p. 22; I have not read the sections where this phrase appears, but from a brief scan, interpretability / explainable AI seems to be covered only at a surface level), but it falls short of discussing the catastrophic risks posed by AI systems. As such, I believe the recommendations and design principles the report puts forward also fall short of addressing especially critical issues in AI progress, safety, and alignment. Holistically, the report seems geared more toward the "lesser" societal issues in AI use and development, which makes some sense given the laggard pace at which government systems tackle issues posed by emerging technologies. While I'm not epistemically confident that governments and regulatory bodies can significantly reduce the risk of an AI catastrophe, I hope the United States proposes regulation and conducts work in areas more directly relevant to reducing catastrophic outcomes involving AI. 

Summary of the press release 

Many technologies can or do pose a threat to democracy. There are many cases where these technological "tools" limit their users more than they help them. Examples include problems with the use of technology in patient care, algorithmic bias in credit and hiring, and widespread breaches of user data privacy. Of course, automation is, generally speaking, helpful (e.g., agricultural production, severe weather prediction, disease detection), but we need to ensure that "progress must not come at the price of civil rights...". 

"The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats"

The Office of Science and Technology Policy proposes five design principles for minimizing harm from automated systems:

  • Safe and Effective Systems: "You should be protected from unsafe or ineffective systems."
  • Algorithmic Discrimination Protections: "You should not face discrimination by algorithms and systems should be used and designed in an equitable way."
  • Data Privacy: "You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used."
  • Notice and Explanation: "You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you."
  • Human Alternatives, Consideration, and Fallback: "You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter."

Structure of the Report 

(to give a sense of what's contained)

  • Introduction
    • Foreword
    • About this framework
    • Listening to the American public
    • Blueprint for an AI bill of rights
  • From principles to practice: a technical companion to the blueprint for an AI bill of rights
    • Using this technical companion
    • Safe and effective systems
    • Algorithmic discrimination protections
    • Data privacy
    • Notice and explanation
    • Human alternatives, consideration, and fallback
  • Appendix
    • Examples of automated systems
    • Listening to the American people
  • Endnotes


This linkpost exists in part because casens commented on the release of the report under the following Metaculus question, to which the report is relevant. 

AI-Human Emulation Laws before 2025 

Before 2025, will laws be in place requiring that AI systems that emulate humans must reveal to people that they are AI?


Phew! From the title I first thought it would be about some under-employed bureaucrats drawing up rights for the AIs themselves.

That actually would also be worthwhile. We will have AGI soon enough, after all, and I think it's hard to argue that it wouldn't be sentient and thus deserving of rights.

AIXI contains sentient minds, but isn't itself sentient. I suspect there are designs of minds that are highly competent at many problems yet have a mental architecture totally different from humans', such that if we had a clearer idea of what we meant by "sentient", we would agree the AI wasn't sentient. 

Also, how long would we have sentient AI before the singularity? If the first sentient AI is a paperclipper that destroys the world, any bill of "sentient AI rights" is pragmatically useless. 


Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards. Outcomes of these protective measures should include the possibility of not deploying the system or removing a system from use. Automated systems should not be designed with an intent or reasonably foreseeable possibility of endangering your safety or the safety of your community. They should be designed to proactively protect you from harms stemming from unintended, yet foreseeable, uses or impacts of automated systems.

It would be an interesting timeline if this language actually helped lobbyists shut down large AGI projects based on a lack of mitigation of foreseeable impacts.