Honorable AI
This note discusses a (proto-)plan for [de[AGI-[x-risk]]]ing [1] (pdf version). Here's the plan:

1. You somehow make/find/identify an AI with the following properties:
   * the AI is human-level intelligent/capable;
   * however, it would be possible for the AI to quickly gain in intelligence/capabilities in some fairly careful self-guided way, sufficiently...