Draconian measures can increase the risk of irrevocable catastrophe

by dsj
23rd Sep 2025

This is a linkpost for https://thedavidsj.substack.com/p/draconian-measures-can-increase-the

I frequently see arguments of this form:

We have two choices:

  1. accept the current rate of AI progress and a very large risk[1] of existential catastrophe,

    or

  2. slow things down, greatly reducing the risk of existential catastrophe, in exchange for a cosmically irrelevant delay in reaping the benefits of AI.

(Examples here[2] and here, among many others.)

But whether this is true depends on what mechanism is used to slow things down.

Some are proposing a regime of control over the world’s compute supply which we would all recognize as draconian in any other context. Whoever is in charge of that regime would necessarily possess great power, both because of the required severity of the control mechanisms and because of the importance of the resource being controlled. This would pose a substantial risk of creating a permanent authoritarian society.

Instituting such a regime at a time of rapid democratic backsliding seems especially dangerous, because in that environment it is more likely that political actors would be able to abuse it to permanently lock in their own power. The resulting future could plausibly be worse than no future at all, because it could include vast suffering and a great deprivation of human liberty.

It is not obvious that the risks created by such measures are lower than the risks they are intended to prevent. I personally think they are likely to be significantly greater. Either way, there is an unexamined tradeoff here, which we have a responsibility to acknowledge and consider.

AI chips are not like uranium.

Some reply that heavy controls on AI chips are not draconian, invoking precedent from controls on uranium supply and enrichment. But this analogy fails in important ways.

In large quantities, non-depleted uranium has only two main uses: making an atom bomb and making electricity. Electricity is a general-purpose technology, but uranium itself is not. It is possible to monitor and control the supply of uranium so as to ensure that it is used only to generate electricity, and the users of that general-purpose technology — the people and industries on the electric grid — do not need to be monitored or controlled at all to achieve this.

By contrast, AI chips are themselves a general-purpose technology, roughly as much as other computer chips are. There are myriad legitimate use cases for those chips, and there is no way to monitor and control the supply of, or access to, AI chips to the proposed degree without a large intrusion into the privacy and freedoms of all people and industries.

There is no known technical means by which only the excessively risky applications can be identified or prevented while preserving the privacy and freedoms of users for nearly all other applications. Shavit hoped for such a mechanism, and so did I. But I spent many months of my life looking for one before eventually concluding that there were probably none to be found — indeed I believe there is no meaningful precedent in the history of computing for such a mechanism. And I am in a position to understand this: I am an expert on the computational workloads required for training and inference, and played a leading role in advising the federal government on AI chip export controls.

This does not mean that no controls on the AI chip supply or its use are warranted. For example, I think some export controls and know-your-customer regulations are justifiable. But it does mean they come with costs and risks, and the more severe the controls, the greater those risks. We must consider them and balance them against the perceived dangers of alternative courses of action.

Many have confidently proclaimed the end of the world in the past. Anthropic bias and object-level specifics mean that we cannot dismiss current concerns merely because the predictions of the past were wrong. But we should beware the dangers of pursuing drastic measures justified by overconfident proclamations of doom. For drastic measures also risk doom.

There is no guaranteed safe path into the future. We must muddle through as best we can.

Thanks to Tommy Crow for feedback on an earlier draft, and to Alexander R. Cohen for his editing services.

 

  1. ^

    Some even claim near certainty, though their arguments cannot justify anything close to that.

  2. ^

    Note that I agree narrowly with this tweet from Nate. Where I disagree is his view that the policies he proposes amount to taking bullets out of the cylinder, as opposed to adding more bullets into it.