I was very struck by the following claim in Dario Amodei's recent essay:
I don’t think it is too much of a stretch (if we get a “country of geniuses”) to imagine AI companies, semiconductor companies, and perhaps downstream application companies generating ~$3T in revenue per year, being valued at ~$30T, and leading to personal fortunes well into the trillions.
[Following is in a footnote]: The total value of labor across the economy is $60T/year, so $3T/year would correspond to 5% of this. That amount could be earned by a company that supplied labor for 20% of the cost of humans and had 25% market share, even if the demand for labor did not expand (which it almost certainly would due to the lower cost).
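The footnote's arithmetic can be sanity-checked directly (all figures as quoted above; this is a sketch of the quoted reasoning, not a forecast of my own):

```python
# Sanity check of the footnote's arithmetic (all figures as quoted above)
total_labor_value = 60e12  # $60T/year, total value of labor across the economy
cost_fraction = 0.20       # AI labor priced at 20% of the cost of human labor
market_share = 0.25        # 25% share of the labor market

revenue = total_labor_value * market_share * cost_fraction
print(revenue / 1e12)               # ~3.0, i.e. ~$3T/year
print(revenue / total_labor_value)  # ~0.05, i.e. 5% of total labor value
```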
This is really not much money. I read this as making a claim about the value of the sector as a whole, though it's possible he's saying that an individual company will achieve that kind of valuation, in which case the sector as a whole might be some multiple higher.
The current market cap of publicly traded tech companies is a bit over $40 trillion on revenue of about $7 trillion. So if the prediction is interpreted as being about the AI sector as a whole, then he's forecasting it will be smaller than the current tech sector. Depending on how expansively one defines the AI sector, we are definitely already within an order of magnitude of $30 trillion in valuation, and perhaps as much as halfway there.
If he's describing projections for a single company, then obviously that's more significant, but it's still pretty low and not something that could really be classed as "transformative" -- this would be a company with a market cap a little under 7x Nvidia's current value ($4.6 trillion) and revenue about 8x Alphabet's ($385 bn). It's a little sticky to compare this to a GDP number, but it suggests that a "country of geniuses" in a data center would be producing economic value somewhere in between current France and Italy.
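To spell out those multiples (Nvidia and Alphabet figures as cited in the paragraph above):

```python
# Comparing the predicted single-company figures to current giants
predicted_cap = 30e12      # ~$30T valuation from the essay
predicted_rev = 3e12       # ~$3T/year revenue from the essay
nvidia_cap = 4.6e12        # Nvidia market cap, as cited above
alphabet_rev = 385e9       # Alphabet annual revenue, as cited above

print(predicted_cap / nvidia_cap)    # ~6.5x Nvidia's valuation
print(predicted_rev / alphabet_rev)  # ~7.8x Alphabet's revenue
```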
Perhaps this is just a throwaway line, or perhaps the Anthropic legal team made him scrub out another prediction that could be interpreted as promising Anthropic investors fantastical returns. Another possibility is that he thinks the returns on AI will overwhelmingly flow outside the sector itself. But, these figures just seem really, really low for a world-transforming technology and that makes me wonder about his beliefs.
The PM is less hindered by the independent judiciary than the president. The PM in a Westminster system also exerts greater control over the individual legislators via his party than in the American system. The PM can serve for an unlimited time, and call elections at strategic moments, while Trump is limited to two terms. All these things increase the power of the PM and the risk of oppressive rule in Westminster-style parliamentary systems.
None of those are inherently features of a parliamentary (or even Westminster-style) government. Those are all separate institutional choices you can make in either setup.
With this, you focus too narrowly on this specific minority-rule "seizure of power" scenario rather than the relative power of the offices more generally.
Sorry, I thought we were discussing the possibility of collapse into authoritarianism, in which case some kind of seizure of power is the relevant question? The claim I was making above is relevant to this, and not to other bad things that might happen.
As to the "power of the offices," I do want to re-emphasize what I said earlier which is that you have to make a separation between the powers of the office (i.e., those vested in the office itself) and the typical powers of the officeholder (i.e., additional power that is typically held by the person holding the office but not as a consequence of holding the office). Much of the power of the typical prime minister flows from the fact that they are also the leader of a legislative majority. The matched comparison would be some kind of situation where the American president is also the speaker of the house and the Senate has been reduced to a ceremonial role (and if you want to match Britain in particular to the US, you also have to match other unrelated features like federalism and the strength of judicial review).
The U.S. president holds a weaker office than the heads of government in most other countries. The Canadian and British PMs and the French president definitely seem stronger.
It matters exactly what you're comparing here.
An American president is typically less effectual than a British PM, but the office is stronger. That is, the PM receives basically no power qua PM whereas the American presidency directly comes with considerable constitutional power.
If you were randomly dropped in by some process as the US president tomorrow, you'd immediately be a very powerful person and you'd hold those powers for a considerable length of time. If you were randomly dropped in as British PM, you'd be removed in a confidence vote in an instant.
The PM in a parliamentary system can typically get a great deal more done than the US president but that's a selection effect really -- being the PM means you also commanded a Parliamentary majority in order to get there, so of course you face less gridlock. The legislative branch doesn't typically want to stop you. But, if the legislature suddenly does want to stop you, you're gone immediately.
Can you explain your thinking here more and how it connects to the idea of constitutional risk?
Suppose the executive wants to seize power. If the legislature supports that, then it's going to be a relatively easy thing to do in either a presidential or a parliamentary system. Whatever constraint there is has to come from somewhere else.
The distinction between the two systems really only matters if the legislature opposes the seizure. Under a parliamentary system, they have an easy remedy -- trigger a no-confidence vote and get rid of the problematic leader. Under a presidential system? Removing the leader is hard, and if the conflict takes some other form, the president has all kinds of levers to pull, which turns nasty (and those kinds of moments of conflict also potentially create an opening for the military or someone else to seize power). It's clearly better to be in a parliamentary system in that situation.
I was also referencing above the classic essay "The Perils of Presidentialism" by Juan Linz, which lays out a much more sophisticated set of arguments.
At the end of the day, the rule of law is a Tinker Bell situation (it only survives if we believe in it). Long-term constitutional stability under a presidential system of government is also quite exceptional. The standard argument is that the US is the only successful case of long-run constitutional stability under a presidential system (though, depending on how you define long-run, you might throw in Costa Rica today). We're very lucky that we've believed for so long.
I'd add a couple more factors into your analysis, though.
One thing you leave out is mass public opinion, and all the various ways that can be effective -- demonstrators in the streets, general strike, cessation of quasi-voluntary compliance in all the areas where the government requires it, and so on, perhaps insurgency or terrorism in extremis. Layer onto that the various additional actions available to economic elites. The real hope for the Supreme Court is that the public takes its side in some extreme crisis, and that a clear ruling on its part serves as the focal point to kick all of that off.
It's pretty unlikely that the US military would be willing to crack down in that scenario. But even if it were, it doesn't have the capacity to operate a police state. Most of our military capabilities aren't geared towards that (something like a B-2 bomber or an aircraft carrier just really isn't so helpful) -- the infantry forces of the US military aren't even numerous enough to take over for the existing cadre of state and local police (assuming they walk off in this scenario) much less to do some kind of large-scale repression on top of maintaining ordinary law and order.
Another factor (in less extreme scenarios) is that the courts, in their ordinary and apolitical capacity, are extremely valuable to the government. A collision that ends up destroying the courts takes a lot of the economy with it because large chunks of the economy are underpinned by the existence of a rule of law system governing economic transactions. And the courts are also necessary to keep the trains running on criminal justice and so on. A surgical attack on the courts that disables them only on the political issues while keeping all of that running is very difficult to mount, especially in the face of their concerted resistance. Even autocrats find it useful to have a functional court system (and our own legal tradition emerged as a tool of the British monarchy).
On a lot of lower stakes stuff, this is really what matters, especially when the actions the government wants to take flow directly through the courts. There are a lot of issues in terms of criminal procedure where the executive would have pretty wide public support for violating the constitution (e.g., in surveys, a substantial majority of Americans favor rolling back various rights constitutionally granted to criminal defendants). Similarly, a pretty sizable chunk of Americans on either side seem to actively favor imprisoning political opponents on trumped up charges and no one is going to take to the streets if it happens. But, because the criminal process runs straight through the courts, you can't really get those things done without blowing up the system. And that's a big step to take.
Another consideration here is the power the courts have over lawyers. So long as the executive branch is still playing the game with reference to the rules (however fast and loose it's being with those rules), the lawyers advancing its positions are subject to judicial discipline and, therefore, face personal consequences like disbarment. If the executive has decided to go all-out, that stops mattering. But in a lesser constitutional crisis, those people are still thinking about those interests and that exerts a lot of pressure in the rule of law direction. Likewise for the willingness of the courts to continue extending the government the presumption of regularity.
Circling back on the military, a couple of points:
As a closing thought, the scenarios that worry me the most don't involve outright defiance and clashes. The smart way of doing things is a little more subtle (and in the current moment also leverages the fact that the Supreme Court is willing to give the administration considerable benefit of the doubt). The Supreme Court's own precedents have also handicapped it in that it has declared a variety of the legal tools you'd want in a crisis to be beyond its own powers and invented a lot of technicalities for the president to play to his own advantage.
Thanks for the recommendation btw. I take it from your comment that you're involved in writing this?
I ended up using a version of this from the Center for AI Safety that I edited down for length (basically taking the intro, the front end of each section, and then most of the "Rogue AIs" section).
My context here is assigning this to undergraduates in a survey course where AI is being discussed among other serious future threats. Some things I didn't like about it for that purpose (that may or may not have anything to do with your own target audience):
For my purposes in particular, I'd like to see a bit more on the geopolitics.
I think the confusion here is purely semantic rather than substantive. "Treaty" can mean one of two things: (1) any agreement that is legally binding under international law, whatever label it carries, or (2) an instrument that is actually styled and referred to as a "treaty."
"Vibes" govern whether or not something is in Category #2, and there are a lot of things in Category #1 but not Category #2. But, Category #1 is not a vibes thing. See the Circular 175 procedure and associated law and regulations. There are various pedantic ways of referring to Category #1 if you want more precise vocabulary (e.g., "treaties and other [binding] international agreements").
Another factor here is that artists are mostly self-employed, so you don't see headlines like "ArtCo closes factory; lays off 3,500 workers." Instead, a diffuse group of people spread all over the country just have a harder time finding work.
Can anyone recommend a ~15 page introduction to AI existential risk that would be appropriate for a general audience with a non-technical background? Ideally, something with a degree of grounding in the current moment (i.e., with something to say about LLMs, chips, China, etc.) rather than a purely abstract take.
In 2026, most of your long-term financial risk comes from your job being automated, which will plausibly happen in the next 5 years. If this happens, your salary will go to zero while the S&P 500 will probably at least double (assuming no AI takeover).
I'm not sure this is true. In a scenario featuring widespread automation of white collar work, you're going to have all kinds of churn in the economy and it's really unclear how that translates into stock market winners and losers. Many people seem to assume that the big winners will be incumbent tech giants, in which case, yea, the S&P 500 will go way up. But big economic changes are often good for upstarts rather than incumbents, and it's worth pointing out that all the frontier labs today other than Google are a) privately held and b) relatively recently founded. The profits from automating your job might well flow to some startup that doesn't even exist yet, and that same startup might kill off the giants of the S&P.
There are some other prediction markets on Manifold/Metaculus that address the question more directly but they're small.
Some economists have argued that you should look at long run real interest rates -- the idea being that AGI boosts the return on capital, so bondholders should demand higher rates in order to lock up their money in bonds.
I think it's pretty hard to infer much from the stock prices of tech companies because it's kinda ambiguous what AGI would do to those companies (and depends on what exactly counts) plus sub-AGI advances in AI can confuse the price effect. Nvidia, for example, is the market's favorite AI play but AGI in the "dominates humans at all white collar work" sense is almost certainly bad for Nvidia because then the AGI can just design better chips than Nvidia engineers (but then factor in that Nvidia has invested in a whole web of other AI companies so maybe that pushes the other way, and so on and so on).
One thesis here is that white-collar replacement AGI is bearish for fabless semiconductor companies but bullish for the fabs -- so maybe if Nvidia starts tanking while TSMC starts rising?
The valuations of the frontier labs are maybe better, but it's hard to tell if the market is predicting AGI or just placing bets on OpenAI becoming the next Alphabet.
Another idea is that AGI is very bearish for the value of white collar human capital generally, and thus especially for the higher ed sector. In basically any scenario, AGI is the apocalypse for your average university. You can't really trade those in equity markets, but maybe something like interest rates on university debt or credit default swaps?