Wasn't hardware overhang the argument that if AGI is more bottlenecked by software than by hardware, then conceptual insights on the software side could cause a discontinuity once people suddenly figure out how to use that hardware effectively? I'm not sure how your counterargument really works there, since the AI that arrives "a bit earlier" either precedes or follows that conceptual breakthrough. If it precedes the breakthrough, then it doesn't benefit from that conceptual insight, so it won't be powerful enough to take advantage of... (read more)

AI Alignment Open Thread August 2019

by habryka · 1 min read · 4th Aug 2019 · 96 comments



Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This is an experiment in having an Open Thread dedicated to AI Alignment discussion, hopefully enabling both established and up-and-coming researchers to ask small questions they are confused about, share very early-stage ideas, and have lower-key discussions.