Great article, but I might be biased since I’m also a fan of Chapman. I find the comments to be fascinating.
It seems to me that people read your article and think, “oh, that’s not new, I have a patch for that in my system.”
However, I think the point you and Chapman are trying to make is that we should think more carefully about these patches at the interface of rationality and the real world. The missing connection, then, is people wondering, why? Are you trying to:
Thank you for the thoughtful review, and for laying out the land!
I work in AI4Science, and have only recently started following the LessWrong thread of AI alignment.
In the spirit of seeking to learn, I wanted to ask: instead of all these maximalist claims for mindshare, are there more "mundane" predictions? For example, something like a "proto-ASI" missing many important aspects of ASI might nevertheless be powerful enough to be destructive ("Most people will die or at least be miserable" instead of "Everyone Dies") because those who have en…