An AI Realist Manifesto: Neither Doomer nor Foomer, but a third, more reasonable thing
Cross-posted from Substack.

AI has been a hot topic in recent Twitter discourse, with two opposing camps dominating the conversation: the Doomers and the AI builders. The Doomers, led by Eliezer Yudkowsky and other rationalists, advocate for caution and restraint in the development of AI, fearing that it could pose an existential threat to humanity. Prominent figures in this camp include Elon Musk, who has expressed concerns about the potential dangers of AI while also founding AI-focused companies like OpenAI and the up-and-coming "BasedAI."

On the other side of the debate are the AI builders, including Yann LeCun and Sam Altman, who are eager to push the boundaries of AI development and explore its full potential. While some members of this group have been dismissed as "idiot disaster monkeys" by Yudkowsky, I will refer to them as "Foomers" for the purposes of this blog post. The divide between these two camps is significant, as it represents a fundamental disagreement about the future of AI and its potential impact on society.

The debate around AI often centers on the concept of superintelligence: AI that surpasses human intelligence in every way. Doomers argue that superintelligence could pose an existential threat to humanity, as it would be capable of outsmarting humans and achieving its goals at any cost. This is particularly concerning given that the goals of such an AI would be difficult, if not impossible, to specify in advance; if those goals are misaligned with human values, the consequences could be catastrophic. The AI builders, or "Foomers," tend to downplay these risks, arguing that superintelligence could be used for the benefit of humanity if developed and controlled properly. The Doomers counter that the risks are too great and that any attempt to control superintelligence is likely to fail. As such, the debate remains contentious, with both sides offering many arguments. While Foomers may reject critique through

Robin is a libertarian, and Nate used to be, but after the calls to "bomb datacenters" and the vague calls for "regulation" coming from that camp, I don't buy the libertarian credentials.
The specific term "cradle bags of meat" is dehumanization. Many people view dehumanization as evidence of violent intentions. I understand you do not, but can you step back and recognize that some people are quite sensitive to this phrasing?
Moreover, when I say "forcefully do stuff to people they don't like," this is a general problem. You seem to interpret it as only talking about "forcing people to be uploaded," which is a specific sub-problem. There are many other instances of this general problem which...