Could A Superintelligence Out-Argue A Doomer?
Imagine a hypothetical conversation between an intelligent, rational, epistemically confident AI doomer (say, Eliezer Yudkowsky) and a superintelligent AI. The superintelligence's goal is to genuinely convince the doomer that doom is unlikely or impossible, using only persuasion: no rearranging mind states with nanobots or similar tricks, and...
May 10, 2023