Mod note: I get the sense that some commenters here are bringing a kind of... naive political partisanship background vibe? (mostly not too overt, but it felt off enough I felt the need to comment). I don't have a specific request, but, make sure to read the LW Political Prerequisites sequence and I recommend trying to steer towards "figure out useful new things" or at least have the most productive version of the conversation you're trying to have.
(that doesn't mean there won't/shouldn't be major frame disagreements or political fights here, but, like, lean away from drama on the margin)
They felt to me like "comments that were theoretically fine, but they had the smell of 'the first very slight drama-escalation that tends to lead to Demon Threads'".
I suspect that your post might have more upvotes if there was agreement/disagreement karma for posts, not just comments.
Meta note: Is it... necessary or useful (at least at this point in the conversation) to label a bunch of these ideas right-wing or left-wing? Like, I feel both that this overstates the degree to which there exists either a coherent right-wing or left-wing philosophy, and that it makes discussion of these ideas a political statement in a way that seems counterproductive.
I think a post that's like "Three under-appreciated frames for AGI (Geo)Politics" that starts with "I've recently been reading a bunch more about ideas that are classically associated with right-leaning politics, and I've found a bunch of them quite valuable, here they are" seems just as clear, and much less likely to make the discussion hard in unnecessary ways.[1]
And like, I think this is symmetrically true in that I think a discussion that didn't label hypotheses "grey tribe hypotheses" or "left-wing hypotheses" or "rationalist hypotheses" also seems less likely to cause people to believe dumb things.
There’s a layer of political discourse at which one's account of the very substance or organization of society varies from one ideology to the next. I think Richard is trying to be very clear about where these ideas are coming from, and to push people to look for more ideas in those places. I’m much more distant from Richard’s politics than most people here, but I find his advocacy for the right-wing ‘metaphysics’ refreshing, in part because it’s been unclear to me for a long time whether the atheistic right even has a metaphysics (I don’t mean most lw-style libertarians when I say ‘the right’ here).
This kind of structuralist theorizing is much more the domain of explicitly leftist spaces, and so you get these unexamined and, over time, largely forgotten or misremembered ideological precepts that have never had to pay rent. I think offering a coherent opposition to liberal or leftist orthodoxy, and emphasizing the cross-domain utility of the models there, is great for discourse.
I think these gestures would mean more if Richard were in the room with the leftists who are thinking about what he’s thinking about (it would help keep them honest, for one), but there’s still at least some of this effect on lw.
I strong agreed with your comment because I think people are taking the bait to argue against what they may suspect is kind of a motte and bailey or dog whistle, and so there’s one layer of discourse that would certainly be improved by Richard down-playing his ideology. But still, there’s another layer (not much of which is happening here, admittedly) that stands to profoundly benefit from the current framing.
[I’m not sure how straw I think his stories about the left are; I’m not sure what he means by the left; I’m not sure how many things he believes that I might find distasteful; I’m not sure how valuable this line of inquiry on his part even is. But it’s nice to see a style of thinking ~monopolized by the left proudly deployed by the opposition!]
This is a reasonable point, though I also think that there's something important about the ways that these three frames tie together. In general it seems to me that people underrate the extent to which there are deep and reasonably-coherent intuitions underlying right-wing thinking (in part because right-wing thinkers have been bad at articulating those intuitions). Framing the post this way helps direct people to look for them.
But I could also just say that in the text instead. So if I do another post like this in the future I'll try your approach and see if that goes better.
I prefer (classical / bedrock) liberalism as a frame for confronting societal issues with AGI, and am concerned by the degree to which recent right-wing populism has moved away from those tenets.
Liberalism isn't perfect, but it's the only framework I know of that even has a chance of resulting in a stable consensus. Other frames, left or right, have elements of coercion and / or majoritarianism that inevitably lead to legitimacy crises and instability as stakes get higher and disagreements wider.
My understanding is that a common take on both the left and right these days is that, well, liberalism actually hasn't worked out so great for the masses recently, so everyone is looking for something else. But to me every "something else" on both the left and right just seems worse - Scott Alexander wrote a bunch of essays like 10y ago on various aspects of liberalism and why they're good, and I'm not aware of any comprehensive rebuttal that includes an actually workable alternative.
Liberalism doesn't imply that everyone needs to live under liberalism (especially my own preferred version / implementation of it), but it does provide a kind of framework for disagreement and settling differences in a way that is more peaceful and stable than any other proposal I've seen.
So for example on protectionism, I think most forms of protectionism (especially economic protectionism) are bad and counterproductive economic policy. But even well-implemented protectionism requires a justification beyond just "it actually is in the national interest to do this", because it infringes on standard individual rights and freedoms. These freedoms aren't necessarily absolute, but they're important enough that it requires strong and ongoing justification for why a government is even allowed to do that kind of thing. AGI might be a pretty strong justification!
But at the least, I think anyone proposing a framework or policy position which deviates from a standard liberal position should acknowledge liberalism as a kind of starting point / default, and be able to say why the tradeoff of any individual freedom or right is worth making, each and every time it is made. (And I do not think right-wing frameworks and their standard bearers are even trying to do this, and that is very bad.)
I think the key issue for liberalism under AGI/ASI is that AGI/ASI makes value alignment matter way, way more to a polity; in particular, you cannot rely on the polity to keep you alive under AGI/ASI if the AGI/ASI doesn't want you to live, because you are economically useless.
Liberalism's goal is to avoid the value alignment question, and to mostly avoid the question of who should control society, but AGI/ASI makes the question unavoidable for your basic life.
Indeed, I think part of the difficulty of AI alignment is that lots of people have trouble realizing that the basic things they take for granted under the current liberal order would absolutely fall away if AIs didn't value their lives intrinsically, and had selfish utility functions.
The goal of liberalism is to make a society where vast value differences can interact without negative/0-sum conflict and instead trade peacefully, but this is not possible once we create a society where AIs can do all the work without human labor being necessary.
I like Vladimir Nesov's comment, and while I have disagreements, they're not central to his point, and the point still works, just in amended form:
Hard agree. It's ironic that it took hundreds of years to get people to accept the unintuitive positive-sum-ness of liberalism, libertarianism, and trade. But now we might have to convince everyone that those seemingly-robust effects are likely to go away, and that governments and markets are going to be unintuitively harsh.
There are several important "happy accidents" that allowed almost everyone to thrive under liberalism, that are likely to go away:
- Not usually enough variation in ability to allow sheer domination (though this is not surprising, due to selection - everyone who was completely dominated is mostly not around anymore).
- Predictable death from old age as a leveler preventing power lock-in.
- Sexual reproduction (and deleterious effects of inbreeding) giving gains to intermixing beyond family units, and reducing the all-or-nothing stakes of competition.
- Not usually enough variation in reproductive rates to pin us to Malthusian equilibria.
To be honest, these epistemic positions sound like counterproductive delaying tactics at best. If the future of humanity can be summed up as "we are an increasingly embattled minor nation in a hostile international conflict zone, and we must ruthlessly police the ingroup-outgroup divide to maintain our bio-ethno-national identity" then I don't see much if any increases in wellbeing in store for "normal humans". At best we become transhumanist North Korea, at worst we find the war we're looking for and we lose to forces we cannot even comprehend.
Most of what motivates me to work on AI safety and theory of alignment is the belief that there are options other than what you have presented here.
I'm afraid you might be right, though maybe something like "transhumanist North Korea" is the best we can hope for while remaining meaningfully human. Care to outline, or link to, other options you have in mind?
Hey David,
I wish that I had easy answers I could link. I've looked at a lot of angles in this space (everything from AI for epistemics to collective deliberation/digital democracy tools all the way to the Deep Lore of agent foundations/FEP etc.), and I haven't found anything like a satisfying solution package. Even stuff I wrote up I was not happy with. My current project plan is to try and deconfuse myself about a lot of these topics, and to work on bridging the gap between the theory of machine learning as we historically understood it and theories of biological learning. I think the divide between ML systems and biological systems is smaller than people think, and therefore there is both more room for multi-scalar cooperation and also more risk of danger from misconceptions and mistreatment.
Would love to discuss more in DMs/email if you feel up for it or have thoughts to share, most of my relevant contact info/past work can be found here: https://utilityhotbar.github.io
All of these ideas are very high-level, but they give an outline of why I think right-wing politics is best-equipped to deal with the rise of AI.
While I think you raise some interesting points, I don't buy the final conclusion at all - or your "right-wing politics" is a more theoretical or niche politics compared to the right-wing politics I'm used to seeing pretty much globally:
To cope with advanced AI we need powerful redistribution, within and across nations (and I don't think traditional protectionism helps you here). But right-wing politics in practice does exactly the contrary, as far as I can see, almost everywhere. Populist right-wing movements are admittedly not simply stupid per se (they rightly call out the left for being narrowly extreme in its policies and for alienating more middle-ground people with wokeism and the like), but they focus heavily on dismantling social cohesion and the welfare state, are fully in bed with big business and help it profit at the expense of the common man domestically & abroad, lower taxes, and so on. All in all, the very contrary of building a robust society able to cope intelligently with AI.
The fact that it slows growth is not a problem,
It wouldn't be a problem if everyone were slowing their growth. But if e.g. the profit-maximizing autonomous corporations are not slowing their growth, and the countries which have been taken over by AIs are not slowing their growth, and the dictatorships are not slowing their growth, then yes it's a huge problem that protectionism in human-run democracies slows growth.
Yes, I think protectionist viewpoints are very naive. The industrial revolution flipped the gameboard for which countries stood at the top: the most economically powerful country back then, China ruled by the Qing dynasty, adopted a lot of these protectionist measures, and what actually happened was that tiny backwater nations came to dominate it decades to centuries later. AGI compresses this to months to years.
Re: Point 1, I would consider the hypothesis that some form of egalitarian belief is dominant because of its link with the work ethic. The belief that the market economy rewards hard work implies some level of equality of opportunity, or the idea that most of the time, pre-existing differences can be overcome with work. As an outside observer to US politics, it's very salient how every proposal from the mainstream left or right goes back to that framing, to allow a fair economic competition. So when the left proposes redistribution policies, it will be framed in terms of unequal access to opportunities. That said, it's possible to propose redistribution policies or universal allowances outside of an egalitarian (sensu OP) framework. The extreme of such policies, Marx's "From each according to their ability, to each according to their need" is explicitly asymmetric. I'm not saying a post-AGI world will become Marxist. But I would expect that AGI would be disruptive enough to require societies to review their ideas around the work ethic and the moral basis for distribution of resources.
A couple of points that I thought about reading this.
I think it's probably true that some valuable right wing ideas have been overlooked due to the larger left-wing academic cultural milieu. I very regularly see people on all ends of the political spectrum reject ideas out of hand, just because they don't fit into something they see as a socially important grouping of political ideas.
However,
I strongly suspect that the most potent wells of useful political thought won't be overlooked ideas from the left or right wing, but non-American and, more broadly, non-Western modes of political thought. Even just as an Australian, it can be frustrating how rigidly many users here stick to the American political Overton window, assuming strong correlations between fundamentally unrelated ideas (for example, a correlation between socialism and anti-racism, or between conservatism and fossil fuel spruiking).
As for your actual points:
(Nor can you run a country for the benefit of “all humans”, because then you’re in an adversarial relationship with your own citizens, who rightly want their leaders to prioritize them.)
I think there's a compromise, in which you run the country so as to benefit all humans but disproportionately benefit / prioritize your own citizens. This seems to be a decent aggregation of the preferences of US citizens, many of whom do donate abroad, for example.
My candidate is “asymmetrist”. Egalitarianism tries to enforce a type of symmetry across the entirety of society. But our job will increasingly be to design societies where the absence of such symmetries is a feature not a bug.
This is a revival of class-collaborationist corporatism with society stratified by cognition. When cognitive ability is enabled by access to wealth (or historic injustice), this corporatism takes on an authoritarian character. I think the left-wing is more than capable of engaging with these issues - it simply rejects them on a normative basis.
I initially posted a comment that was negative here shortly after the post came out, then quickly deleted it when I realized I was being unreasonable. I think there's some sense in which I actually agree with some conceptual threads underlying this fairly strongly, but I'm very hesitant about the risk of invoking a different part of a conflationary alliance than intended. I find, when trying to clean up politically loaded concepts, the first thing I want to do is rephrase away from the originating memeplex so that I'm not relying on shared understanding of shibboleths - it seems like possibly your approach is, instead of rephrasing away, to try to be precise about which part of the shibboleth conflationary space you're referring to?
Some disagreements, though:
I'd hope humanity as a whole is a set within which we can converge on something like egalitarian or near-egalitarian terminal valuation, possibly via something that looks vaguely like an eigenmorality coprotection group? I'm not sure what that would look like concretely. This might look, in your framework, like an international alliance, for example. But this would be separate from obviously correct asymmetric instrumental valuation. In a sense, you seem to be talking about that with the nationalism concept, but I'd hope to see a larger set than a single current nation as the conserved group.
I'm also quite concerned that this approach might end up preventing many of the uplifts that one would want humanity to have from a successful alignment outcome, especially if the framework requires people to stay human in a way the overall group is in agreement on, rather than having the flexibility to select the part of being human which permits large-scale degradation prevention.
I'm not at all convinced that this adds up to "right wing politics is best equipped". I buy, and have bought for a while, that right wing politics has frameworks which are ready to talk about the rise of AI. It doesn't seem clear from this argument that the frameworks' concerning properties would be robust to degradation over time, though. That still seems to me to be an unsolved problem.
I empathize a lot in particular with exactly the downside you point out from free trade. But, on
But we’re heading towards a future in which a) most people become far wealthier in absolute terms due to AI-driven innovation, and b) AIs will end up wielding a lot of power in not-very-benevolent ways
It seems, imho, entirely uncertain that AI will make "most people far wealthier". I fear the very contrary (and more so given recent years' political developments): AI may make some very rich but a majority poor, and we'll be too dumb (or, as elites, too egoistic and self-servingly biased), and too unable or unwilling to coordinate internationally, to arrange reasonable redistribution to those who lose their jobs to automation.
These are interesting, but I think you're fundamentally misunderstanding how power works in this case. The main questions are not "Which intellectual frames work?" but "What drives policy?". For the Democrats in the US, it's often The Groups: a loose conglomeration of dozens to hundreds of different think-tanks, nonprofits, and other orgs. These in turn are influenced by various sources including their own donors but also including academia. This lets us imagine a chain of events like:
This might be wrong but it is functional as a theory of change.
What's the Theory of Change for these frames? Going via voters is pretty risky, because politicians often don't actually know what voters actually want. So who drives policy changes in the Republican party? Lobbyists and Donors? Individual political pressure groups like the Freedom Caucus (but then what drives those pressure groups, we need a smaller theory of change for them). I would guess that the most plausible ToC here is to change the minds of politicians directly through (non-monetary) lobbying. If so, have you had any success doing this?
I've increasingly found right-wing political frameworks to be valuable for thinking about how to navigate the development of AGI. In this post I've copied over a twitter thread I wrote about three right-wing positions which I think are severely underrated in light of the development of AGI. I hope that these ideas will help the AI alignment community better understand the philosophical foundations of the new right and why they're useful for thinking about the (geo)politics of AGI.
Nathan Cofnas claims that the intellectual dominance of left-wing egalitarianism relies on group cognitive differences being taboo. I think this point is important and correct, but he doesn't take it far enough. Existing group cognitive differences pale in comparison to the ones that will emerge between baseline humans and:
Current cognitive differences already break politics; these will break it far more. So we need to be preparing for a future in which egalitarianism as an empirical thesis is (even more) obviously false.
I don’t yet have a concise summary of the implications of this position. But at the very least I want a name for it. Awkwardly, we don’t actually have a good word for “anti-egalitarian”. Hereditarian is too narrow (as is hierarchist) and elitist has bad connotations.
My candidate is “asymmetrist”. Egalitarianism tries to enforce a type of symmetry across the entirety of society. But our job will increasingly be to design societies where the absence of such symmetries is a feature not a bug.
(See also this talk of mine.)
Protectionism gets a bad rap, because global markets are very efficient. But interacting with them is very much not adversarially robust. If you are a small country and you open your borders to the currency, products and companies of a much larger country, then you will get short-term wealthier but also have an extremely hard time preventing that other country from gaining a lot of power over you in the long term. (As a historical example, trade was often an important precursor to colonial expansion. See also Amy Chua’s excellent book World on Fire, about how free markets enable some minorities to gain disproportionate power.)
When you’re poor enough, or the larger power is benevolent enough, this may well be a good deal! But we’re heading towards a future in which a) most people become far wealthier in absolute terms due to AI-driven innovation, and b) AIs will end up wielding a lot of power in not-very-benevolent ways (e.g. automated companies that have been given the goal of profit-maximization).
Given this, protectionism starts to look like a much better idea. The fact that it slows growth is not a problem, because society will already be reeling from the pace of change. And it lets you have much more control over the entities that are operating within your borders - e.g. you can monitor the use of AI decision-making within companies much more closely.
To put it another way, in the future the entire human economy will be the “smaller country” that faces incursions by currency, products and companies under the control of AIs (or humans who have delegated power to AIs). Insofar as we want to retain control, we shouldn’t let people base those AIs in regulatory havens while still being able to gain significant influence over western countries.
Okay, but won’t protectionist countries just get outcompeted? Not if they start off with enough power to deter other countries from deploying power-seeking AIs. And right now, the world’s greatest manufacturing power is already fairly protectionist. So if the US moves in that direction too, it seems likely that the combined influence of the US and China will be sufficient to prevent anyone else from “defecting”. The bottleneck is going to be trust between the two superpowers.
All of the above is premised on the goal of preserving human interests in a world of much more powerful agents. This is inherently a kind of conservatism, and one which we shouldn’t take for granted. The tech right often uses the language of “winning”, but as I’ve observed before there will increasingly be a large difference between a *country* winning and its *citizens* winning. In the limit, a fully-automated country could flourish economically and politically without actually benefiting any of the humans within it.
National conservatism draws a boundary around a group of people and says “here are the people whose interests we’re primarily looking out for”. As Vance put it, America is a group of people with a shared history and a common future. Lose sight of that, and arguments about efficiency and productivity will end up turning it instead into a staging-ground for the singularity. (Nor can you run a country for the benefit of “all humans”, because then you’re in an adversarial relationship with your own citizens, who rightly want their leaders to prioritize them.)
China’s government has many flaws, but it does get this part right. They are a nation-state run by their own people for their own people. As part of that, they’re not just economically protectionist but also culturally protectionist - blocking foreign ideas from gaining traction on their internet. I don’t think this is a good approach for the West, but I think we should try to develop a non-coercive equivalent: mechanisms by which a nation can have a conversation with itself about what it should value and where it should go, with ideas upweighted when their proponents have “skin in the game”. Otherwise the most eloquent and persuasive influencers will end up just being AIs.
All of these ideas are very high-level, but they give an outline of why I think right-wing politics is best-equipped to deal with the rise of AI. There’s a lot more work to do to flesh them out, though.