Origins and dangers of future AI capability denial
In rationalist spheres, there's a fairly clear consensus that whatever AI's ultimate impact will be, it is at its core a capable technology that will have very large effects on the world. In the "general public" sphere, things are very different. There's a less clear but real consensus that AI's ultimate impact will be negative, and not much agreement on how capable the technology actually is. I think it's very plausible that public opinion goes the opposite way from the rationalists: toward disbelief in AI's abilities, and eventually a more conspiratorial denial of even current AI capabilities. That is, the general public and policymakers could deny the existence of capabilities that are entirely undisputed among people familiar with the technology. Paradoxically, I think this could happen even in the face of AI-caused societal upheaval, and in almost any takeoff scenario.

It would be extremely bad if many people came to believe this: arguments about existential risk mostly rest on the assumption that AI is capable, so they fall flat for anyone who rejects that premise. I think we should be emphasizing the core capability of AI more and talking about x-risks less. In this post I explain how and why this capability denial could develop, present weak evidence that it is already developing, and suggest what we could do about it.

Irrationality

First, a reminder, not that anyone needs it, of how irrational normal people can be. I have a bad habit of following conspiracy-theorist drama, and the venerable flat-earth community produces a lot of it. In 2024, some guy flew a handful of prominent flat earthers to Antarctica so they could experience the midnight sun, a phenomenon that most flat-earth theories have no real explanation for, so they simply deny that it exists. To their credit, some people's beliefs really were changed by this. One guy's new YouTube logo is his old flat-earth one with a big red X across it. Others we