Whoa, this draft has a section on AGI and superintelligence that directly quotes Bostrom, Yudkowsky, Omohundro, etc., and also has an "appreciation" section saying "We also wish to express our appreciation for the following organizations regarding their seminal efforts regarding AI/AS Ethics, including (but not limited to) [...] the Machine Intelligence Research Institute".
The executive summary for the AGI/ASI section reads as follows:
Future highly capable AI systems (sometimes referred to as artificial general intelligence or AGI) may have a transformative effect on the world on the scale of the agricultural or industrial revolutions, which could bring about unprecedented levels of global prosperity. The Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) Committee has provided multiple issues and candidate recommendations to help ensure this transformation will be a positive one via the concerted effort by the AI community to shape it that way.
Issues:
• As AI systems become more capable — as measured by the ability to optimize more complex objective functions with greater autonomy across a wider variety of domains — unanticipated or unintended behavior becomes increasingly dangerous.
• Retrofitting safety into future, more generally capable, AI systems may be difficult.
• Researchers and developers will confront a progressively more complex set of ethical and technical safety issues in the development and deployment of increasingly autonomous and capable AI systems.
• Future AI systems may have the capacity to impact the world on the scale of the agricultural or industrial revolutions.
I am probably stating the obvious, but this is not a guide to developing a Friendly AI.
It's more like a list of things that people who consider themselves ethical should think about when developing a self-driving car, a drone, or a data-mining program that works with personal data.
And I don't feel impressed by it, but I am not sure what else they could have written that would impress me more. Considering it's a document produced by a committee, it could have been much worse. Maybe we should hold a technical committee to higher standards, but it was not realistic to expect them to provide a technical implementation of robotic ethics.
Considering it's a document produced by a committee, it could have been much worse.
I think people underestimate the degree to which 90% of everything is showing up; the section that Kaj was excited about (section 4) has its author list on page 120, and the names either are or should be familiar:
Malo Bourgon (Co-Chair) – COO, Machine Intelligence Research Institute
Richard Mallah (Co-Chair) – Director of Advanced Analytics, Cambridge Semantics; Director of AI Projects, Future of Life Institute
Paul Christiano – PhD Student, Theory of Computing Group, UC Berkeley
Bart Selman – Professor of Computer Science, Cornell University
Carrick Flynn – Research Assistant at Future of Humanity Institute, University of Oxford
Roman Yampolskiy, PhD – Associate Professor and Director, Cyber Security Laboratory; Computer Engineering and Computer Science, University of Louisville
And several more of us were at the workshop at the Hague meeting that worked on and endorsed this section: Anders Sandberg (FHI), Huw Price, and myself (CSER). But regardless, the important thing is that a good section on long-term AI safety showed up in a major IEEE output; otherwise I'm confident it would have been terrible ;)
Considering it's a document produced by a committee, it could have been much worse.
This is what jumped out at me pretty strongly. In general, I have been surprised by the non-terribleness of stuff like this and the White House report, considering the kind of bullshit that many academically ordained AI experts were spouting over the last few years when confronted with AI safety arguments.
edit: some parts do look laughably bad, which undermines how seriously anyone serious takes this.
Looks bad.
This is an SJ-flavoured, wishy-washy text written to hit the right buzzwords (human rights! empowerment! multi-stakeholder ecosystems! disadvantaged sub-groups!) but say nothing of substance. There is one overwhelming desire coming through, though: the desire to regulate and control.
I've only glanced at it, but my favourite part so far is the suggestion that cops give presentations in schools about AI safety 8-D. I'm not kidding:
Educating law enforcement surrounding these issues so citizens work collaboratively with them to avoid fear or confusion (e.g., in the same way police officers have given public safety lectures in schools for years, in the near future they could provide workshops on safe AI/AS).