Previously I posted on the National Institute of Standards and Technology's plan for AI standards, which is now open for public comment. Reading Federal documents is tedious, so I have provided an outline below.

Outline of the Draft Plan

1. Standards and Artificial Intelligence

A. Why is a plan for Federal engagement in AI technical standards needed?

  • AI is important to the economy and national security.
  • Executive Order (EO 13859)
  • Reflect Federal priorities for innovation, public trust, and public confidence in systems using AI
  • Enable creation of new AI-related industries, and adoption by current industries
  • Federal agencies are major players in developing and using AI
  • Definition of AI:
Note: While definitions of AI vary, for purposes of this plan AI technologies and systems are considered to comprise software and/or hardware that can learn to solve complex problems, make predictions or undertake tasks that require human-like sensing (such as vision, speech, and touch), perception, cognition, planning, learning, communication, or physical action. Examples are wide-ranging and expanding rapidly. They include, but are not limited to, AI assistants, computer vision systems, biomedical research, unmanned vehicle systems, advanced game-playing software, facial recognition systems as well as application of AI in both Information Technology (IT) and Operational Technology (OT).
  • AI and Trustworthiness:
Increasing trust in AI technologies is a key element in accelerating their adoption for economic growth and future innovations that can benefit society. Today, the ability to understand and analyze the decisions of AI systems and measure their trustworthiness is limited. AI standards and related tools, along with AI risk management strategies, can help to address this limitation and spur innovation. Among the characteristics that relate to trustworthy AI technologies are accuracy, reliability, robustness, security, explainability, safety, and privacy, but there is still much discussion about the range of characteristics that determine AI systems' trustworthiness. Ideally, these aspects of AI should be considered early on in the design process and tested during the development and use of AI technologies.

B. What are technical standards and why are they important?

  • ISO/IEC Guide 2:2004 Standardization:
a document, established by consensus and approved by a recognized body, that provides for common and repeated use, rules, guidelines or characteristics for activities or their results, aimed at the achievement of the optimum degree of order in a given context.
  • Standards help with product differentiation, innovation, etc.
  • Help systems meet critical objectives for functionality, interoperability, and trustworthiness, and perform accurately, reliably and safely.

C. How are technical standards developed?

The standards development approaches followed in the United States rely largely on the private sector to develop voluntary consensus standards, with Federal agencies contributing to and using these standards. Typically, the Federal role includes providing agency requirements for standards projects, contributing technical expertise to standards development, incorporating voluntary standards into policies and regulations, and citing standards in agency procurements. This use of voluntary consensus standards that are open to contributions from multiple parties, especially the private sector, is consistent with our market-driven economy and has been endorsed in Federal statute and policy. (See “Maximizing Use of the Voluntary Consensus Standards Process” on Page 12).
  • Some governments prioritize domestic industry and innovation over an open marketplace. This merits special attention to ensure that US interests are not impeded.
  • The timing of standards is important: too early and they can hinder, too late and they won't help.
  • IT standards are critical to AI standards.

D. What AI technical standards are needed?

  • Systems using AI are systems-of-systems: standards for AI applications and standards for AI-driven systems are both needed.
  • Both horizontal and vertical AI standards already exist, and Standards Developing Organizations (SDOs) are working on others.
  • Communication standards are well established; trustworthiness standards are only now being considered.
  • There are two tables which reflect the current state of AI standards, using information from the NIST Request for Information and the NIST AI Standards Workshop.
  • Table 1: Technical Standards Related to AI Based on Stakeholder Input (p. 8)
  • Table 2: Additional AI-related Standards to Inform Policy Decisions, Based on Stakeholder Input (p. 8)

E. What AI standards-related tools are needed?

  • Data standards and data sets in standardized formats, including metadata
  • Tools for capturing and reasoning with knowledge in AI systems
  • Fully documented use cases
  • Testing methodologies
  • Metrics
  • Benchmarks and evaluations
  • AI testbeds
  • Tools for accountability and auditing
  • HELP WANTED: Data Standards and Data Sets

F. What are other important considerations?

  • Law, ethics, social issues.
  • Public input highlighted the importance of aspirational principles and goals - see the Organization for Economic Cooperation and Development principles.
  • Ethical standards should be tied tightly to the risk to humans.
  • Privacy standards should be included.
  • Risk management will be considered, but ultimately left to system owners.

2. US Government AI Standards and Priorities

A1. Which standards development process attributes are important?

  • Inclusive and accessible
  • Open and transparent
  • Multi-channel
  • Consensus-based
  • Globally relevant and non-discriminatory to all stakeholders

A2. Which standards characteristics are important?

  • Innovation-oriented
  • Applicable across sectors (horizontal)
  • Focused on particular sectors and applications (vertical)
  • Clearly stated provenance and intended use or design (“intent of design”)
  • Address the need to monitor and manage AI systems
  • Reflective of the early state of development and understanding of AI technologies, risk, and societal implications
  • Regularly updated
  • Effective in measuring and evaluating AI system performance
  • Human-centered
  • Harmonized and using clear language
  • Sensitive to ethical considerations

B. Prioritizing levels of US government involvement in AI standards

  • Monitoring: Following either a specific standards effort or broader programs and evolving standards being produced by SDOs to address unique needs or interests.
  • Participating: Commenting on and providing meaningful contributions to strategically important standards, including potentially serving as an observer on a committee.
  • Influencing: Developing a deeper understanding of, and relationships with, the key players, working directly with industry and international players and exerting influence through formal and informal discussions and by providing expertise.
  • Leading: Leading standards efforts by convening or administering consensus groups, serving as standards project editor or in similar technical leadership roles, or acting as the liaison representative between standards groups. This level of leadership also can be exercised by serving on the Board of Directors or in other executive positions of an SDO.

C. Practical steps for agency engagement in AI standards

  • Identify how AI technologies can be used to further the agency’s mission – for example, research, technology development, procurement, or regulation.
  • Know the existing statutes, policies, and resources relating to participation in the development and use of standards (e.g., OMB Circular A-119, the Trade Agreements Act of 1979 as amended, the Interagency Committee on Standards Policy).
  • Conduct a landscape scan and gap analysis to identify standards and related tools that exist or need to be developed.
  • If appropriate standards exist, use them.
  • If appropriate standards do not exist, engage in their development: coordinate with other Federal agencies that may have similar needs; follow guidance on where and how to engage: see section 2(A); identify, train, and enable staff to participate in standards development.
  • Agencies determine their own AI needs: the Department of Transportation and Food and Drug Administration both have reports on the subject.

3. Recommended Federal government standards actions to advance US AI leadership

Government agencies should:

  • support and conduct AI research and development
  • engage at the appropriate involvement level in AI standards development
  • procure and deploy standards-based products and services
  • develop and implement policies, including regulatory policies where needed

In addition, the Federal government should commit to deeper, more consistent, long-term engagement in AI standards development activities to help the United States speed the pace of trustworthy AI technology development.

A. Bolster AI standards-related knowledge, leadership, and coordination among Federal agencies to maximize effectiveness and efficiency.

  • The National Science and Technology Council (NSTC) Machine Learning/Artificial Intelligence (ML/AI) Subcommittee should designate a Standards Coordinator with responsibility to gather and share AI standards-related needs, strategies, roadmaps, terminology, and best practices around the use of trustworthy AI in government operations.
  • Make maximum use of existing standards that are broadly adopted by industry sectors that can be used or evolved within the new context of AI solutions.
  • Reinforce the importance of agencies’ adherence to Federal policies for standards and related tools (for example, data access and quality). Suggested lead: OMB-OIRA.
  • Maintain a flexible posture in specifying AI standards that are referenced in regulatory or procurement actions. Flexibility is required to adapt to the rapid pace of AI technology developments and standards and our understanding about trustworthiness and human-centered implications of AI. Suggested lead: GSA, DoD, NIST.
  • Grow a cadre of Federal staff with the relevant skills and training, available to effectively engage in AI standards development in support of U.S. government interests. Suggested lead: NIST, OPM.

B. Promote focused research to advance and accelerate broader exploration and understanding of how aspects of trustworthiness can be practically incorporated within standards and standards-related tools.

  • Plan, support, and conduct research and evaluation that underlies technically sound, fit-for-purpose standards and related tools for trustworthy AI. Suggested lead: NSF and research funding agencies.
  • Develop metrics to assess trustworthy attributes of AI systems, focusing on approaches that are readily understandable, available, and can be put on a path to standardization. Suggested lead: NIST and research funding agencies.
  • Prioritize multidisciplinary research related to trustworthiness and associated aspects that may help to identify technical approaches to implement responsible behaviors. Suggested lead: research funding agencies.
  • Conduct research to inform risk management strategies including monitoring and mitigating risks. Suggested lead: research funding agencies.
  • Identify research needs, requirements and approaches that help advance scientific breakthroughs for trustworthy AI, give us confidence in AI technologies and cultivate trust in design, development, and use of AI. Suggested lead: NIST and research funding agencies.

C. Support and expand public-private partnerships to develop and use AI standards and related tools to advance trustworthy AI.

  • Strategically increase participation in the development of technical AI standards in targeted venues and exercise a variety of engagement options.
  • Lead non-traditional collaborative models for standards development, such as open source efforts and Federal open data initiatives.
  • Increase data discoverability and access to Federal government data that enable more widespread training and use of AI technologies.
  • Lead in benchmarking efforts to assess the trustworthiness of AI systems. Ensure that these benchmarks are widely available, result in best practices, improve AI evaluations and methods for verification and validation.
  • Foster collaborative environments to promote creative problem solving through AI challenge problems and testbeds.

D. Strategically engage with international parties to advance AI standards for U.S. economic and national security needs.

  • Champion U.S. AI standards priorities in international AI standards development activities.
  • Partner with like-minded countries to accelerate the exchange of information on AI standards and related tools between Federal officials and their counterparts. Suggested lead: NIST, Department of State, International Trade Administration, National Institute of Justice.
  • Track and understand AI standards development strategies and initiatives of foreign governments and entities. Suggested lead: NIST, Department of State, International Trade Administration, National Institute of Justice.


Appendix I: Definitions

Appendix II: AI Standards

Appendix III: Related Tools for AI Standardization

Appendix IV: The Assignment and Approach

Appendix V: Request for Information

Appendix VI: Workshop Agenda [note: this was May 30th]


Comments

It seems Trump signed a second Executive Order about AI on Dec 3rd, 2020, which is kind of about friendly AI:

Sec. 2. Policy. (a) It is the policy of the United States to promote the innovation and use of AI, where appropriate, to improve Government operations and services in a manner that fosters public trust, builds confidence in AI, protects our Nation’s values, and remains consistent with all applicable laws, including those related to privacy, civil rights, and civil liberties.

Executive Order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government

While there is a lot of talk about superficial trustworthiness ("foster public trust"), it also makes clear that it protects the actual values underneath.

(I post it here because I'm not sure it deserves a linkpost)
