TL;DR: I think "Pessimization" is a bad concept to reach for in the case of activist movements. I think most of what it describes can be explained by ordinary failures, and that using pessimization as a concept is worse than just thinking about the ordinary causes of failure on an object level.
"Perverse Pessimization", says Richard Ngo, is when an organization nominally dedicated to X ends up causing not-X. This is particularly common amongst activist groups according to Richard, and he has a separate thread calling it "The Activist's Curse"
Two of his choice examples are pretty weird to me (Liberalism? Liberalism?? The ideology which took over half the world and reigned supreme for at least half a century, which is widely agreed to be the best half-century ever. Is the "pessimization" of liberalism that eventually people stopped doing liberalism? And I don't see how transhumanism has made us any less transhuman) so I'll use the examples I understand.
Communism: the Bolsheviks wanted to remove the power that a smallish aristocratic class had over them, but ended up creating an even smaller and more tyrannical ruling class.
Environmentalism: environmentalists wanted to make the environment better, but they fucked up so badly by being anti-nuclear that they ended up making everything worse.
AI Safety: the AI Safety community wanted to make people concerned about AI, so they wouldn't recklessly build AI that kills everyone, but many of the companies recklessly building AI came out of the AI Safety community.
So there's a mystery here: communities trying to cause X end up causing Not-X. Or do they? Here's another hypothesis:
Activists often identify a problem Y in some area. They get deeply involved in that area, and sometimes end up making Y neither better nor worse. From the outside, it looks like they were "involved" with the people who made Y worse. If the activists hadn't existed, Y wouldn't have been much better or worse.
Communism: the Bolsheviks identified the problem of a tyrannical aristocracy. They got rid of the Tsar, but were unable to prevent a different tyrannical ruling class from taking its place.
Environmentalism: environmentalists noticed that policy did not systematically focus on environmental concerns. They changed a lot of policy in different ways, but the poor quality of the evidence-to-policy pipeline amplified the NIMBY voices (particularly for nuclear power) more than it amplified the voices saying nuclear was a good idea.
AI Safety: early AI Safety advocates identified the problem that AI was dangerous and companies would be reckless. They tried to tell people about the problem, but some people didn't listen and built those companies anyway.
Or maybe there's a third option.
Activists are sometimes wrong. In some domains, one big mistake can wipe out all your good efforts. The universe is just unfair like that.
Communism: the Bolsheviks misunderstood how economics and power work, so their policies just killed a bunch of people.
Environmentalism: environmentalists made a few wrong calls on nuclear, and also often make wrong calls on local vs wider issues. Because of the way our laws are set up, these NIMBYist mistakes get amplified and mess everything up.
AI Safety: the AI Safety people underestimated how many people would get a distorted version of their message, and these people heard "AI is a huge ..." instead of "AI is a huge threat".
And a fourth: any movement which gains popularity will be used as a cudgel in political disputes. Sometimes this will go against the stated aims of X, sometimes critically.
Communism: Stalin used "Communism" as an excuse to become an awful dictator, because communism was the most popular ideology of the day.
Environmentalism: people who have never once in their life thought about environmentalism mysteriously become concerned about "Environmentalism" the moment anything is about to be built near them.
AI Safety: opportunistic businessmen used "AI Safety" as a way to get money for their causes, while not caring about AI safety at all.
This is kind of similar to one of Richard's points about sociopaths.
I think Richard makes an important error when he complains about existing activist-ish groups: he compares these groups to an imaginary version of the activist group which doesn't make any mistakes. Richard seems to see all mistakes made by activist groups as unforced and indicative of deep problems or malice.
If you're running an activist group, you actually only have two options: change your policies to take in or exclude some members on some front, or cease to exist. I'll illustrate with environmentalism:
Suppose you're protesting an energy company buying up ecologically-significant wetlands to build a big oil refinery, because it would increase carbon emissions. You have to choose which of the following people to let into your coalition:
You don't know if you need two of these groups on your side, or five. Do you pick all five to be safe, or just pick the best two? Or hedge with three?
And what about what comes after?
A: A green-energy company might be buying up farmland in a completely different area in order to build solar panels.
B: Some farmers might be lobbying for some moors to be converted to a farm.
C: A company might be trying to put an offshore wind farm near some cliffs where birds are nesting.
Maybe you're in favour of the solar panels in case A, and would end up lobbying against people from group 3 there. But maybe group 3 is huge and the only way to counter the farmers in case B.
This is the sort of dilemma you end up in when you do activism. It's difficult, and the errors you make are usually forced rather than unforced.
Your other choice as an activist group is to shut down and re-roll. In that case, maybe someone comes along and does better, or maybe they do worse. Even if your activist group is doing badly, that doesn't mean that a random replacement would have done better.
I think my view is something like this:
Activist groups often try to swim against structural forces, which involves getting mixed up in an area where they have lots of adversaries. Sometimes, they just lose, and those adversaries win. Sometimes the activist group makes a mistake which, due to the adversarial conditions, ends up doing a lot of harm. Sometimes, opportunists use the activists' cause as a means of achieving what they were doing already. None of this is strong evidence that the activists were actually worse than replacement.
To put it all together:
The Bolsheviks wanted to end the tyranny of a small ruling class. This was inherently difficult, because countries in general, and Russia especially, have a very powerful attractor state where there is a small, tyrannical ruling class. They made some serious mistakes about how to set up a system of good government, which made everything worse. Their ideology, once popular, was exploited by opportunists who wanted to take as much power as possible. Overall, they were probably worse than the average replacement revolution (Stalin was really bad), but not the worst possible revolutionary movement (they did lead to Gorbachev, who led to a period of democracy where the country seemed to have some hope).
Environmentalists wanted to make the world and its policymakers care about the environment. This was inherently difficult because improving the environment (in the ways environmentalists care about) is not very attractive as a policy platform. They made some serious mistakes about which technologies were net good vs net bad (nuclear), and this interacted with structural biases in policies (towards NIMBYism) and opportunists (general NIMBYs). Overall, I'm unclear on whether organized university-activist-driven environmentalism was more or less effective than whatever would have taken its place.[1]
AI Safety people wanted to stop an uncoordinated race to AI which kills everyone. This was inherently difficult because security mindset is rare, the world is badly coordinated, and AI safety is very difficult. The incentives for companies to lobby against AI regulation are strong. They made some mistakes about how their message would come across (many people listened to the part about AI being big, but didn't grasp the difficulties of making it safe), which led to more attention overall on AI. Some of these people were opportunists who claimed "AI Safety" to get money for companies, and many grantmakers were duped by these claims. I think the AI Safety community (going back to ~2004) has performed well above replacement, because the replacement community would have been made up of AI-optimist futurists who wouldn't even really notice the risks until we smashed headlong into them. Humanity would likely die with even less dignity in such a world.
Apologies for psychoanalyzing Richard Ngo, but he's the main person who's using this concept in practice.
I suspect that Richard has been a bit guilty of red-thing-ism when it comes to the activist's curse and pessimization. I don't think that the features of pessimization are well-formed enough to be thrown around as a solid concept. Here he, having been called out for a very wrong position (that western elites generally support Hamas), says it fits neatly into his model of pessimization.
I think it's a big strike against pessimization when the originator of the concept uses it as a gear and that gear ends up being a vector for Twitter mind-killing. The concept can very easily slide into a mental shortcut: "Of course these activists/elites did something dumb and bad, it's pessimization!".
End of psychoanalysis, nothing else here is aimed at Richard.
This reminds me a bit of a different case. If you're rich and you disagree with a poor person on some issue, you can expect to be hit by cries of "Luxury belief!" In the same way, if you're trying and failing to achieve something, you can basically always be hit by accusations of "Pessimization!".
If the concept becomes popularized, I think it will have the following effects: it will be used as a fully-general accusation against anyone who tries and fails at something hard, and it will act as a mental shortcut that replaces object-level thinking about why things actually failed.
So I don't think that pessimization (as the concept exists now) is a great gear to add to one's world-model.
On the other hand, the possible explanations for this "effect" are worth thinking about! If you're doing any activist-ish stuff, you should absolutely be looking through the twelve things Richard talks about in the thread. These are real failure modes, but they're just failure modes.
If you set out to do something hard, you should expect to find failure modes. But that doesn't mean the better option is to just lie down and give up.
[1] I find it hard to fully blame environmentalism for the state of nuclear power in the world. For example, the UK also cannot build reservoirs or housing at the moment, even in cases where environmentalism isn't causing the problems (see the new fire authority, or the rules requiring windows on two sides of every flat).