Don't talk about the AGI control problem
About ten years ago, after reading Bostrom, Yudkowsky, and others on AI risk and the control problem, I had a conspiratorial thought: if a superintelligence ever emerged, wouldn’t all this public discussion of control strategies just help it resist them? At the time, the idea struck me as silly,...