Another (arguably similar) unintended consequence of underemphasizing the difficulty of AI alignment was that it led some to believe that if we don't rush to build an ASI, we'll be left defenseless against other X-risks. That would be a perfectly rational thought if alignment were easier.
This looks very useful. That said, the recent open-weight, smaller, quantized models (like Gemma-2, Qwen-2.5, or Phi-3.5) have improved enough that running one locally for this purpose is now much more reasonable than calling a remote API, since sending data about the webpages they visit to OpenAI is a repulsive idea to many people. (A local model would also be cheaper than huge models like GPT-4, but the gain in benefit/cost ratio would be marginal compared to budget proprietary models like Gemini-2.0-Flash.)
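To make the local-model route concrete, here is a minimal sketch, not how the extension actually works: it assumes a local server exposing an OpenAI-compatible endpoint (e.g. Ollama on its default port) with some quantized instruct model already pulled; the model tag, prompt, and `summarize_page` helper are all illustrative.

```python
# Sketch: summarize page text with a locally served, quantized model via an
# OpenAI-compatible endpoint. Assumes something like Ollama is running on
# localhost:11434 and a Qwen-2.5 instruct model has been pulled (hypothetical tag).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local endpoint, so page data never leaves the machine
    api_key="not-needed-locally",          # placeholder; local servers typically ignore the key
)

def summarize_page(page_text: str) -> str:
    response = client.chat.completions.create(
        model="qwen2.5:7b-instruct",  # hypothetical tag; substitute whichever local model you run
        messages=[
            {"role": "system", "content": "Summarize web pages in three bullet points."},
            {"role": "user", "content": page_text[:8000]},  # crude truncation to stay within context
        ],
    )
    return response.choices[0].message.content
```

Because the local server speaks the same chat-completions protocol, the only change from the hosted setup is the base URL and model name, which keeps the privacy benefit essentially free in terms of code.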
Claude API support would be great, since the Claude 3 models are highly competitive: Claude 3 Haiku performs similarly to GPT-4 at a fraction of the cost, and Claude 3 Opus outperforms GPT-4-Turbo in many tasks.
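For what it's worth, a Claude backend looks straightforward with the official Anthropic Python SDK. A hedged sketch, where the model ID is just one of the cheap Claude 3 options and `page_text` is the same illustrative variable as above:

```python
# Sketch of the same summarization call against the Anthropic API.
# Assumes ANTHROPIC_API_KEY is set in the environment; page_text is illustrative.
import anthropic

client = anthropic.Anthropic()

def summarize_page_with_claude(page_text: str) -> str:
    response = client.messages.create(
        model="claude-3-haiku-20240307",  # cheap Claude 3 model; swap in Opus if quality matters more than cost
        max_tokens=512,
        system="Summarize web pages in three bullet points.",
        messages=[{"role": "user", "content": page_text[:8000]}],
    )
    return response.content[0].text
```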
I'm afraid the evolution analogy isn't as convincing an argument for everyone as Eliezer seems to think. For me, for instance, it's quite persuasive because evolution has long been a central part of my world model. However, I'm aware that for most "normal people" this isn't the case; evolution is a kind of dormant knowledge, not part of the lens they see the world through. I think this is why they can't intuitively grasp, the way most rat and rat-adjacent people do, how powerful optimization processes (like gradient descent or evolution) can lead to mesa-optimization, and what the consequences of that might be: the inferential distance is simply too large.
I think Eliezer has made great strides recently in appealing to a broader audience. But if we want to convince more people, we need to find rhetorical tools other than the evolution analogy and assume less scientific intuition.