I hold a nuanced view that I believe is closer to the empirical claim than to your views.

I think what we want is an extremely high level of justified confidence that any AI system or technology that is likely to become widely available is not carrying a significant and non-decreasing amount of Xrisk-per-second.
And it seems incredibly difficult and likely impossible to have such an extremely high level of justified confidence.

Formal verification and proof seem like the best we can do now, but I agree with you that we shouldn't rule out other approaches.

AI Alignment Open Thread August 2019

by habryka · 1 min read · 4th Aug 2019 · 96 comments

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This is an experiment in having an Open Thread dedicated to AI Alignment discussion, hopefully enabling researchers and upcoming researchers to ask small questions they are confused about, share very early stage ideas and have lower-key discussions.