What is the best compact formalization of the argument for AI risk from fast takeoff?
Many people complain that the Singularity Institute's "Big Scary Idea" (AGI leads to catastrophe by default) has not been argued for with the clarity of, say, Chalmers' argument for the singularity. The idea would be to make the argument's premise-and-inference structure explicit, and then argue about...
I prefer this briefer formalization, since it avoids some of the vagueness of "adequate preparations" and makes premise (6) clearer.
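For comparison, here is a toy Lean sketch (my own rendering, not either formalization discussed above) of the bare premise-and-inference skeleton of Chalmers' core singularity argument, with the timing and "absent defeaters" qualifiers abstracted away:

```lean
-- A toy rendering of the premise-and-inference structure of Chalmers'
-- core singularity argument (not the briefer risk formalization above):
-- AI leads to AI+, AI+ leads to AI++, so AI++ follows from the premises.
theorem singularity_core (AI AIplus AIplusplus : Prop)
    (p1 : AI)                     -- premise 1: there will be AI
    (p2 : AI → AIplus)            -- premise 2: AI leads to greater-than-human AI
    (p3 : AIplus → AIplusplus) :  -- premise 3: AI+ leads to vastly superhuman AI
    AIplusplus :=                 -- conclusion: there will be AI++
  p3 (p2 p1)
```

Once the structure is this explicit, disagreement is confined to the individual premises rather than the inference, which is the kind of clarity the question is asking for.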