I lean more towards the Camp A side, but I understand the Camp B position and see a lot of merit in it. Hopefully I can, as a more Camp A person, help explain to Camp B dwellers why we don't reflexively sign onto these kinds of statements.
I think that Camp B has a bad habit of failing to model the Camp A rationale, based on the conversations I see in Twitter discussions between pause AI advocates and more "Camp A" people. Yudkowsky is a paradigmatic example of the Camp B mindset, and it's worth noting that a lot of people in his book's general readership found its pragmatic recommendations extremely unhelpful. Basically, they (and I) read Yudkowsky's plan as calling for a mass-mobilization popular effort against AI. But his plans, both in IABIED and in his other writings, fail to grapple at all with the existing political situation in the United States, or with the geopolitical difficulties involved in political enforcement of an AI ban.
Remaining in this frame of "we make our case for [X course of action] so persuasively that the world just follows our advice" does not make for a compelling political theory on any level of analysis. Looking at the nuclear analogy used in IABIED: if Yudkowsky had advocated for a pragmatic containment protocol modeled on the New START treaty or the JCPOA nuclear deal with Iran, then we (the readers) could see that the Yudkowsky/Camp B side had thought deeply about the complexity of wielding political power in the full messiness of the real world. Instead, Yudkowsky leaves the details of a multinational treaty far more widely scoped than any existing multinational agreement as an exercise for the reader! When Russia is actively involved in a major war with a European country and China is preparing for a semi-imminent invasion of an American ally, the (intentionally?) vague calls for a multinational AI ban ring hollow. Why is there so little Rat brainpower devoted to the pragmatics of how AI safety could be advanced within the global and national political contexts?*
There are a few other gripes that I (speaking in my capacity as a Camp A denizen) have with the Camp B doctrine. Beyond inefficacy/unenforceability, the idea that the development of a superintelligence is a "one-shot" action, with no ability to fruitfully learn from near-ASI (but not-yet-ASI) models, seems deeply implausible. And various staples of the Camp B platform (orthogonality and goal divergence out of distribution, notably) seem pretty questionable, or at least undersupported by existing empirical and theoretical work from the MIRI/PauseAI/Camp B faction.
*I was actually approached by an SBF representative in early 2022, who told me that SBF was planning on buying enough American congressional votes via candidate PAC donations that EA/AI safetyists could dictate US federal policy. This was by far the most serious AI safety effort I've personally witnessed come out of the EA community, and one of only a few that connected the AI safety agenda to the "facts on the ground" of the American political system.
As someone who's done a fair amount of meditation and read a couple dozen books on the topic, I'd just like to flag that this is pretty well examined in the community, and while meditation as a whole is quite pre-paradigmatic, there seems to be an emerging consensus on some of the ways that meditation harm can manifest.
First off, it's obviously true that if you have a pre-existing tendency towards schizophrenia or general mental instability, then, much as with psychedelics, meditation can trigger a psychotic break or a similar episode.
Secondly, and to me more interestingly, there's an emerging consensus that one of the things meditation does is relieve subconscious mental tension accumulated through the large- and small-scale traumas of one's life. It's very in vogue to use the term "karma" for this accumulation of mental pathology, which one can analogize to stuck priors or misfiring circuits in the synaptic map. This is also, of course, pre-paradigmatic, so nothing on this front should be taken too seriously, but it's a very useful frame.
Now, when it comes to meditation harm, one of the things you see is that going very deep, very fast in meditation without processing this kind of mental tension or trauma can result in the trauma coming up in overwhelming or counterproductive ways. People often don't talk about this in explicit terms, but I personally think it's very obvious that the fastest and most straightforward way of "making progress" on the meditative path is the Burmese Mahasi Sayadaw method, which also has by far the highest rate of negative side effects in meditation. To me, this strongly indicates that the Mahasi method, because it is focused on blowing through to meditative insight without a lot of emotional hippie processing along the way, involves getting hit with all of this accumulated tension at once, in a very intensive fashion.
Although you don't explicitly mention it, I feel like this whole post is about value drift. The doomers are generally right on the facts (and often on the causal pathways), and we nonetheless consider the post-doom world better, but only because the first- through nth-order effects of these new technologies reciprocally change our preferences and worldviews to favor the (doomed?) world those same technologies created.
The question of value drift is especially strange given that we have a "meta-intuition" that the evolution of moral/social values across human history has been good. BUT, at the same time, we know from historical precedent that we ourselves will not approve of the coming value changes. One might attempt to square the circle by arguing that if we were, hypothetically, able to see and evaluate future changed values, we would in reflective equilibrium accept them. Sadly, from what I can gather this is just not borne out by the social science: when it comes to value drift, society advances by the deaths of the old-value-havers and the maturation of a new generation with "new" values.
For a concrete example, consider that most Americans have historically been Christians. In fact, the early United States was deeply shaped by Christianity, with religious fervor swelling in certain periods to fanatical levels. If those Americans could see the secular American republic of 2025, with little religious belief and no respect for the moral authority of Christian scripture, they would most likely be morally appalled. They might well view the loss of "traditional God-fearing values" as a harm that in itself outweighs the cumulative benefits of industrial modernity. As a certain Nazarene put it: “For what shall it profit a man, if he shall gain the whole world, and lose his own soul?” (Mark 8:36)
With this in mind, as a final exercise I'd like you, dear reader, to imagine a future where humanity has advanced enormously in technology but has undergone such profound value shifts that every central moral and social principle you hold dear has been abandoned, replaced with mores you find alien and abhorrent. In this scenario, do you obey your moral intuitions, which tell you this future is one of Lovecraftian horror? Or do you obey your historical meta-intuitions, which tell you that future people probably know better than you do?