Unprecedented dangers
inevitably follow
from exponentially scaling
powerful technology
that we do not understand.

n.b. I'm a master's student in international policy (this program). In my experience, policy-oriented folks do not understand that lines four and five can be simultaneously true. I think there are some simple ways that ML researchers can help address this misconception, and I'll share those here once I've written them up.