Maybe the most important test for a political or economic system is whether it self-destructs, as opposed to whether it produces good intermediate outcomes. In particular, if free-market capitalism leads to an uncontrolled intelligence explosion, then it doesn’t matter that it produced better living standards than alternative systems for ~200 years – it still failed the most important test.
A couple of other ways to put it:
Under this view, political/economic systems that produce less growth but don’t create the incentives for unbounded competition are preferred. Sadly, for Molochian reasons this seems hard to pull off.
I feel like this isn't a very useful framework in practice. Do we have any reason to believe that alternate frameworks or ideologies such as communism wouldn't have led to AGI in a counterfactual world where they were more dominant or lasted longer? The Soviets had the Dead Hand system, which potentially contributed to x-risk from "AI" due to the risk of nuclear warfare, not that the system was particularly intelligent. China is the next closest competitor after the US in the modern AI race (not that it's particularly communist in practice), and I can envision an alternate timeline where the Soviet Union survived as a communist state to the present date and also embraced modern AI.
More damningly, by disavowing intermediary metrics, you're pushing the cut-off for evaluating the success of such an ideology out to the Heat Death of the universe.
Under this view you can totally have intermediary metrics, they just look more like “how much does your society avoid tragedies of the commons” rather than “what is the median quality of life”.
To be clear, this post was not intended as a subtle endorsement of communism. I agree with MondSemmel’s point that basically any system which produced slower economic growth would probably do better under this view, if only because AI development is slower.
Fair point. I would still say that given a specific level of technological advancement and global industrial capacity/globalization, the difference would be minimal. Consider a counterfactual: a world where Communism was far more successful and globally dominant. I expect that such a world would have had slower growth metrics than ours; perhaps they'd have developed comparable transistor technology or software-engineering prowess decades or even a century later than we did. Conversely, they might well have had a more lax approach to intellectual property rights, such that training data was even easier to appropriate (fewer lawsuits, if any).
Even so, a few decades or even a century is barely any time at all. It's not like we can easily tell if we're living in a timeline where humanity advanced faster or slower than it would in aggregate. They might well find themselves in precisely the same position as we do, in terms of relative capabilities and x-risk, just at a slightly different date on the calendar. I can't think of a strong reason why a world with ideologies different from ours would have, say, differentially focused on AI alignment theory without actual AI models to align. Even LessWrong's theorizing before LLMs was much more abstract than modern interpretability or capability work in actual labs (this is not the same as claiming it was useless).
Finally, this framework still doesn't strike me as helpful in practice. Even if we had good reason to think that some other political arrangement would have been superior in terms of safety, that doesn't make it easy to pivot away. It's hard enough to get companies and countries to coordinate on AI x-risk today; if we also had to reject modern globalized capitalism in the process, I do not see that working out. And that's just today or tomorrow; it's easy to wish that different historical events might have led to better outcomes, but even that isn't amenable to intervention without a time machine.
To rephrase, you find yourself in 2026 looking at machines approaching human intelligence. That strikes you as happening very quickly. I think that even in a counterfactual world where you observed the same thing in 1999 or 2045, it wouldn't strike you as particularly different. We had a massive compute overhang before transformers came out, relative to the size of models being trained ~2017-2022. You could well be (for the sake of argument) a Soviet researcher worrying about alignment of Communist-GPT in 2060, wishing that the capitalists had won because their ideology appeared so self-destructive and backwards that you believed it would have held back progress for centuries. We really can't know, we've only got one world to observe, and even if we knew with confidence, we can't do much about it.
I think OP's perspective is valid, and I'm not at all convinced by your reply. We're currently racing towards technological extinction with the utmost efficiency, to the point that it's hard to imagine that any arbitrary alternative system of economics or governance could be worse by that metric, if only by virtue of producing less economic growth. I don't see how nuclear warfare results in extinction, either; to my understanding it's merely a global catastrophic risk, but not an existential one. And regarding your final paragraph, there are a lot of orders of magnitude between a system of governance that self-destructs in <10k years, vs. one that eventually succumbs to the Heat Death of the universe.
Anyway, I made similar comments as OP in a doomy comment from last year:
In a world where technological extinction is possible, tons of our virtues become vices:
- Freedom: we appreciate freedoms like economic freedom, political freedom, and intellectual freedom. But that also means freedom to (economically, politically, scientifically) contribute to technological extinction. Like, I would not want to live in a global tyranny, but I can at least imagine how a global tyranny could in principle prevent AGI doom, namely by severely and globally restricting many freedoms. (Conversely, without these freedoms, maybe the tyrant wouldn't learn about technological extinction in the first place.)
- Democracy: politicians care about what the voters care about. But to avert extinction you need to make that a top priority, ideally priority number 1, which it can never be: no voter has ever gone extinct, so why should they care?
- Egalitarianism: resulted in IQ denialism; if discourse around intelligence was less insane, that would help discussion of superintelligence.
- Cosmopolitanism: resulted in pro-immigration and pro-asylum policy, which in turn precipitated both a global anti-immigration and an anti-elite backlash.
- Economic growth: the more the better; results in rising living standards and makes people healthier and happier... right until the point of technological extinction.
- Technological progress: I've used a computer, and played video games, all my life. So I cheered for faster tech, faster CPUs, faster GPUs. Now the GPUs that powered my games instead speed us up towards technological extinction. Oops.
I think I disagree with the counter-examples. The Dead Hand system was created in a conflict with other countries; it can be viewed as a mostly forced risk, while AI races between companies within a single country are more of a “self-destruction” pattern. Capitalism creates rivals (and therefore races, with more risk and less global safety) within one country more than other economic systems may do.
- The Soviets had the Dead Hand system, which potentially contributed to x-risk from "AI" due to the risk of nuclear warfare, not that the system was particularly intelligent.
I strongly doubt that even ideological uniformity would reduce inter-nation competition to zero, and I still doubt that the reduction would be meaningful. Consider that in our timeline, the Soviets and the Chinese had serious border skirmishes that could have escalated further, and did so despite considering the United States to be their primary opponent.
I am not talking about ideological uniformity between two countries. I am talking about events inside one country. As I understand it, the core of socialist economics is that the government decides where the country's resources go (whereas in capitalism there are companies, which then only pay taxes). Companies can race against each other; with central planning that's ~impossible. The problem of international conflicts is more of a separate topic.
As of now, for example, neglect of AI safety comes (in large part) from races between US companies (with the partial exception of China, which is arguably still years behind and doesn’t have enough compute).
I find it interesting and unfortunate that there aren't more economically left-wing thinkers influenced by Yudkowsky/LW thinking about AGI. It seems like a very natural combination given e.g. "Marx subsequently developed an influential theory of history—often called historical materialism—centred around the idea that forms of society rise and fall as they further and then impede the development of human productive power.". It seems likely that LW being very pro-capitalism[1] has meaningfully contributed to the lack of these sorts of people.
I guess ACS carries something like this vibe.[2] But (unlike ACS) it also seems natural to apply this sort of view of history to AI while also thinking that fooming will be fast.
Relatedly, I wonder if I should be "following the money" more when thinking about AI risk. In particular, instead of saying that "AI researchers/companies" will disempower humanity, maybe it would be appropriate to instead or additionally say "(AI )capitalists and capital and capitalism". My current guess is that while it is appropriate to place a bunch of blame on these, it's also true that e.g. Soviet or Chinese systems [wouldn't be]/aren't doing better, so I've mostly avoided saying this so far. That said, my guess is that if the world were much more like Europe, we would be dying with significantly more dignity, in part due to Europe getting some hyperparameters of governance+society+culture+life more right due to blind luck, but also actually in part due to getting some hyperparameters right because of good reasoning that was basically tracking something logically connected to AI risk (though so far not significantly explicitly tracking AI risk), e.g. via humanism. Another example of a case where I wonder if I should follow the money more is: to what extent should I think of Constellation being wrong/confused/thoughtless/slop-producing on AGI risk in ways xyz as "really being largely about" OpenPhil/Moskovitz/[some sort of outside view impression on AI risk that maybe controls these] being wrong/confused/thoughtless/slop-liking on AGI risk in ways x'y'z'.
I've been meaning to spend at least a few weeks thinking these sorts of questions through carefully, but I haven't gotten around to that yet. I should maybe seek out some interesting [left-hegelians]/marxists/communists/socialists to talk to and try to understand how they'd think about these things.
Under this view, political/economic systems that produce less growth but don’t create the incentives for unbounded competition are preferred. Sadly, for Molochian reasons this seems hard to pull off.
Imo one interesting angle of attack on this question is: it seems plausible/likely that an individual human could develop for a very long time without committing suicide with AI or otherwise (imo unlike humanity as it is currently organized); we should be able to understand what differences between a human and society are responsible for this — like, my guess is that there is a small set of properties here that could be identified; we could try to then figure out what the easiest way is to make humanity have these properties.
By saying this, I don't mean to imply that LW is incorrect/bad to be very pro-capitalism. Whether it is bad is mostly a matter of whether it is incorrect, and whether it is incorrect is an open question to me. ↩︎
I guess this post of mine is the closest thing that quickly comes to mind when I try to think of something carrying that vibe, but it's still really quite far. ↩︎
I find it interesting and unfortunate that there aren't more economically left-wing thinkers influenced by Yudkowsky/LW thinking about AGI.
Maybe it's just my bubble -- and I really do not want to offend anyone, only to report honestly on what I observe around me -- but understanding economics seems right-wing coded. More precisely, when I talk to right-wing people about economics, there is a mix of descriptive and normative, but when I talk to left-wing people about economics, it is normative only: what should be done, in their opinion, often ignoring the second-order effects. Describing economics as it is seems like expressing approval, and approving of capitalism is right-wing.
Basically, if you made a YouTube video containing zero opinion on how things should be, only explaining the basic things about supply and demand (like, how scarcity makes things more expensive in a free market) and similar stuff, people listening to the video would label you as right-wing. Many of those who identify as left-wing would even dismiss the video as right-wing propaganda.
So, if my understanding is correct, this seems like a problem the left wing needs to solve internally. There is not much we can do as rationalists when someone makes not understanding something a signal of loyalty.
I find it interesting and unfortunate that there aren't more economically left-wing thinkers influenced by Yudkowsky/LW thinking about AGI.
I noticed this too. In defence of LW, the Overton window here isn't as tightly policed as in other places on the internet, but it's noticeable. Recently, I seem to have found some of its edges here and here.
"Follow the money" is a good instinct, but I do think a lot of it is just memes fighting other memes using their hosts. A lot of this plays itself out by manipulating credibility signals (i.e. the voting mechanism).
Ultimately there's nothing any of us can do other than to follow, interrogate and stress-test the arguments being made.
I think it's generally the case that patterns need to survive to be good, but also it's fairly normal for patterns to die and this to be kinda fine. (i.e. it's fine if feudalism lasts a few hundred years and then is outcompeted by other stuff).
The application to superintelligence does seem, like, true and special, but, probably most patterns that evolved naturally don't successfully navigate superintelligence well and I'm not sure it's the right standard for them.
probably most patterns that evolved naturally don't successfully navigate superintelligence well and I'm not sure it's the right standard for them.
I'm not sure what you mean by standard, but navigating superintelligence well is something I care a lot about. So it seems like a reasonable thing to criticize a system for, and it would be great if we found a pattern that did navigate it well (even if finding or switching to another one is very hard).
What I meant was, if you're trying to discriminate between political/economic systems and notice "21st century capitalism / mix-mash-distribution-of-democracy-and-various-flavors-of-authoritarian-etc" doesn't look on track to successfully navigate superintelligence, well, that might be true, but, it's probably also true of most political/economic systems that aren't Dath Ilan or similar.
It seems true, but, something like "there are many different ways to fail, there are relatively few ways to succeed, so failure isn't actually that informative."
It's hard to imagine an economic/political system that doesn't eventually lead to an intelligence explosion. Maybe specific rules are easier to imagine: for example, if you had a country in which building any form of intelligence was forbidden (I am not advocating for this), there wouldn't be an intelligence explosion. Such countries could have similar levels of growth all the way up to the advent of machine intelligence. It's important to remember, though, that such countries could be caught in prisoner's dilemmas over power, which would give them a strong incentive to abandon such rules.
Regarding AI crash-testing political systems, an additional caveat is that a country like Norway is too small to pull off a SOTA explosion on its own. It is a megaproject done by a coalition of American companies training the AI (with a potential inclusion of Chinese workers) and Taiwanese and South Korean manufacturers of things like compute and memory. This makes it a far more difficult endeavor to determine which political system is suitable for AGI, and it is especially difficult if, for example, the true reason for failure is the race between the USA and China. Or if alignment to the UBI-powered utopia somehow ends up impossible.
Principal: "I have a very sad announcement to make. Your teacher has unexpectedly passed away, and there is no substitute..."
Child (with bloodstained shirt, hiding a knife under the desk): "So... We all passed last week's test?"
You will never find a $100 bill on the floor of Grand Central Station at rush hour, because someone would have picked it up already.
Are you really less likely to find $100 in Grand Central Station than finding $100 anywhere else? It's true that there are many more people who could find it before you, but there are also many more people that could drop $100. If you imagine a 1D version, where everyone walks through either Grand Central Station or a quiet alley along the same line, one after the other, then it seems like you should be equally likely to find $100 in either case – if the person in front of you in the line drops $100.
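The single-file model above is easy to check with a quick Monte Carlo sketch. All the numbers here (walker counts, per-person drop probability, trial count) are made-up illustrative parameters, not estimates of anything real:

```python
import random

def chance_of_finding(num_walkers, p_drop, trials=20000):
    """Single-file model: walkers pass through one after another, and a
    dropped bill is picked up by the very next walker. 'You' are a
    randomly chosen walker. Returns the fraction of trials in which you
    find a bill."""
    found = 0
    for _ in range(trials):
        you = random.randrange(num_walkers)  # your position in the line
        # You find a bill iff someone is directly ahead of you
        # and that person dropped one.
        if you > 0 and random.random() < p_drop:
            found += 1
    return found / trials

# Busy station (many walkers) vs. quiet alley (few walkers):
busy = chance_of_finding(num_walkers=1000, p_drop=0.001)
quiet = chance_of_finding(num_walkers=10, p_drop=0.001)
# Both come out near p_drop: the extra finders in the busy case are
# matched by extra droppers, so traffic volume roughly cancels.
```

Under this toy model your chance is p_drop × (n−1)/n, so crowd size only matters through the small edge effect of possibly being first in line.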
Agree. Also, the one time I found a 50€ bill on the ground it was at a busy train station, so my guess is the evidence actually goes the other way around (you have many people coming through, all of whom are paying uniquely little attention to their surroundings, and so the aggregate likelihood of someone dropping a $100 bill and no one noticing before you do, is higher).
In this scenario, are you not also paying uniquely little attention to your surroundings (and thus equally less likely to spot the bill)?
It feels a little like begging the question to apply that modifier to other people in the scenario, but not yourself.
Yeah I think the only thing that really matters is the frequency with which bills are dropped, and train stations seem like high-frequency places.
There's probably more "bill-spotting effort per occupant" in Grand Central than elsewhere in life. Like, maybe at a random train station there is on average $10/day to be made watching the ground for abandoned cash, while at Grand Central there is $300/day. It is worth ~nobody's time to just sit around looking for bills in the random case, but in the Grand Central case maybe there is a kid who decides to try. If a bill drops within 20 meters of that kid, they are pretty likely (say >30%) to notice it before anyone else does — just being the nearest person to the dropper isn't good enough.
(If this is a claim about "market efficiency", then I'd say something like "the largest markets attract the most sophisticated participants".)
Some infectious disease graphs I would like to see but haven't been able to find: