Group B being bad is not something I said, but I get where you're coming from. Indeed, "PETA is like the German Nazi Party in terms of their demonstrated commitment to animal welfare" is technically correct while also being misleading.
The strength of an analogy depends on how many crucial connections there are between the elements being compared.
What puts AI researchers closer to Leninism than to other forms of paternalism is the vanguardist self-conception, the utopian vision, and the dismissal of criticism that follows from a teleological view of history driving inevitable outcomes. Beyond that, other forms of paternalism are distinguished from both Leninism and AI research by their socially accepted legitimacy.
What pattern-matches it away from Leninism is e.g. the specific ideological content, but the structural parallels are still oddly conspicuous, just like "your mom" being invoked in an ontological argument.
Surprisingly, AI researchers are like Leninists in a number of important ways.
In their story, they're part of the vanguard working to bring about the utopia.
The complexity inherent in their project justifies their special status and legitimizes their disregard of the people's concerns, which are dismissed as unenlightened.
Detractors are framed as too unsophisticated to understand how unpopular or painful measures are actually in their long-term interest, or a necessary consequence of a teleological inevitability underlying all of history.
I see what you mean, though those researchers wishing to impose this outcome on everybody else without their consent is still basically dictatorial, just as it would be if members of some political party started persecuting their opposition in service of their leader without themselves aspiring to take his position.
In both cases, those doing the bidding aspire to a place under the sun in the system they're trying to bring about.
I suppose that one quirk of the AI researchers might be the belief that everywhere becomes a place under the sun, though I doubt that any of them believe their role in bringing it about won't confer on them some special privilege or elite status, perhaps as members of a new priestly class. Then again, we've seen political movements of this type, with some pigs famously being more equal than others.
The 100k to 10M range is populated by abstract quantities—I think that for a measure to be useful here, it has to be imaginable.
Avogadro's number has the benefit of historical precedent for describing very large quantities, and the coincidental property that, when used as a denominator, it lets us represent present-day training runs with numbers we see in the real world (outside of screens or print). It too might cease to be useful once exponents become necessary to describe training runs in mol FLOPs.
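To make that concrete, here is a minimal sketch of the conversion; the 1e25 FLOP figure below is an assumed, illustrative order of magnitude for a present-day training run, not a measured one.

```python
# Minimal sketch: express a training run's compute in "mol FLOPs" by
# dividing the raw FLOP count by Avogadro's number.

AVOGADRO = 6.022e23  # entities per mole

def to_mol_flops(total_flops: float) -> float:
    """Convert a raw FLOP count into moles of FLOPs."""
    return total_flops / AVOGADRO

if __name__ == "__main__":
    run_flops = 1e25  # assumed order of magnitude, for illustration only
    print(f"{run_flops:.0e} FLOPs is about {to_mol_flops(run_flops):.1f} mol FLOPs")
    # prints: 1e+25 FLOPs is about 16.6 mol FLOPs, a number one can picture
```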
This is interesting.
I do want to push back a little on:
entities that are out to get you will target those who signal suffering less.
I see the intuition here. I see it in someone calling in sick, in disability tax credits, in DEI (where "privilege" is something like the inverse of suffering), in draft evasion, in Kanye's apology.
But it's not always true: consider the depressed psychiatric ward inpatient who wants to get out due to the crushing lack of slack. Signalling suffering to the psychiatrist would be counterproductive here.
Where is the fault line?
Principal: "I have a very sad announcement to make. Your teacher has unexpectedly passed away, and there is no substitute..."
Child (with bloodstained shirt, hiding a knife under the desk): "So... We all passed last week's test?"
I find it interesting and unfortunate that there aren't more economically left-wing thinkers influenced by Yudkowsky/LW thinking about AGI.
I noticed this too. In defence of LW, the Overton window here isn't as tightly policed as in other places on the internet, but the skew is noticeable. Recently, I seem to have found some of its edges here and here.
"Follow the money" is a good instinct, but I do think a lot of it is just memes fighting other memes using their hosts. A lot of this plays itself out by manipulating credibility signals (i.e. the voting mechanism).
Ultimately there's nothing any of us can do other than to follow, interrogate and stress-test the arguments being made.
"The AI does things that I personally approve of" as an alignment target with reference to everybody and their values is actually easier to hit than one might think.
It doesn't require ethics to be solved; it can be achieved by engineering your approval.
It might be impossible for you to tell which of these two post-ASI worlds you find yourself in.
Oh yeah? I'm going to... try to convince the government to pass a law to stop you, and then call the police to sort you out! ... What do you mean you "already took care of them"?