The argument that "an AI could achieve a discontinuous takeoff by exploiting a security vulnerability to copy itself onto lots of other computers" has previously appeared in at least Sotala 2012 (sect. 4.1) and Sotala & Yampolskiy 2015 (footnote 15), though those don't explicitly mention the "use the additional capabilities to break into even more systems" part. (It seems reasonably implicit there to me, but that might just be the illusion of transparency speaking.)

AI Alignment Open Thread August 2019

by habryka · 1 min read · 4th Aug 2019 · 96 comments

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This is an experiment in having an Open Thread dedicated to AI Alignment discussion, hopefully enabling researchers and up-and-coming researchers to ask small questions they are confused about, share very early-stage ideas, and have lower-key discussions.