This post is a modified crosspost (with unnecessary context removed) from the main post on Substack.
On Wednesday, Senator Bernie Sanders announced he’ll soon be introducing legislation calling for a moratorium on the construction of new data centers. His recent video announcement specifically cited a few issues:
“Bottom line: We are at the beginning of the most profound technological revolution in world history. That’s the truth. This is a revolution which will bring unimaginable changes to our world. This is a revolution which will impact our economy with massive job displacement. It will threaten our democratic institutions. It will impact our emotional well-being, and what it even means to be a human being. It will impact how we educate and raise our kids. It will impact the nature of warfare, something we are seeing right now in Iran.
Further, and frighteningly, some very knowledgeable people fear that what was once seen as science fiction could soon become a reality—and that is that superintelligent AI could become smarter than human beings, could become independent of human control, and pose an existential threat to the entire human race. In other words, human beings could actually lose control over the planet. I think these concerns are very serious.”
I, too, think these concerns are very serious. But I’m not confident that a 2026 data center moratorium addresses them, and I worry it might leave us even worse off. I think a temporary data center moratorium is unlikely to meaningfully slow AI development, and more importantly, it risks associating AI safety with weak environmental arguments and left-populist politics in a way that could generate backlash and make more important regulation harder to pass.
It probably won’t work
Even in the best case for this legislation, a data center moratorium is a temporary measure. Without concrete political pressure for a pause on superintelligence, there are just too many forces fighting against this type of legislation. National security is, rightfully, an important concern in our political climate. AI development must continue to keep up with China’s military capabilities. The entire economy is also betting on AI-powered growth. There are extremely well-funded actors, from the major AI labs to the venture capital firms backing them, that will lobby aggressively against any construction freeze, and they have far more political leverage than Sanders does. Without broad public support, which is building but not here yet, this thing will get repealed (assuming it could even pass).
And even while a moratorium is in effect, it only targets one part of the capabilities pipeline. Existing data centers keep running, and capital will be reallocated to do better research with existing compute, or build up energy infrastructure in preparation for when data center construction can resume. The labs don’t just sit around. This means that even a full year of a construction freeze (which seems wildly unlikely given the current administration) might only delay frontier AI capabilities by a matter of months.
What does Bernie plan to do with that extra time?
The economic concerns about job loss and disempowerment don’t get magically solved with this slowdown. His efforts would be better spent working on the actual policy responses to automation and economic disruption. And on the issue of existential risk, a unilateral U.S. moratorium doesn’t slow down China. If powerful AI is going to be built, I would rather it be built by labs with alignment researchers in a country with a free press, accountable to its citizens. The real solution to AI x-risk is not a domestic construction ban. We need to be working toward a treaty banning the development of superintelligence, verified through international monitoring of data center compute, analogous to nuclear non-proliferation.
I don’t think this policy would be very effective. But the bigger risk isn’t that the moratorium fails to slow down AI. It’s that it sets back the political movement that actually could.
It might backfire
“Data center moratorium” is, in the public mind, not really about existential risk. To Sanders’s credit, his announcement actually focused on the right things: existential risk, economic concerns, and political disempowerment. But the growing public opposition to data centers is built on a coalition of concerns, and much of that coalition is driven by environmental complaints about water usage and energy consumption that I don’t really buy.
The top YouTube comment on Sanders’s announcement video
A bill banning data center construction is going to get interpreted through that lens regardless of Sanders’s intent. There’s a reason he proposed banning buildings instead of, say, taxing compute. The message is simple, and it maps well onto a public that already thinks data centers are evil water-sucking entities. I don’t love the idea of the AI safety movement becoming associated with arguments that are both wrong and very partisan. It might be useful to ride the wave of populism, but if you’re not careful, it will backfire. Just look back at the tech-right backlash to progressive overreach in 2024, which is still shaping AI policy today (see David Sacks). A data center moratorium from Bernie Sanders is exactly the kind of thing they will point to as evidence that AI regulation is just left-wing populist environmentalism dressed up in safety language. That framing makes it harder to pass serious compute governance proposals down the line, because every future attempt at regulation has to fight through the association. Stopping ASI development should not be partisan, and it doesn’t need to be populist. It needs to be common sense.
How this could go well
All that said, the current situation might still be fine. I think in an ideal scenario, the legislation moves the idea of a pause into the Overton window, but does not pass. It helps Washington wake up to the risks of advanced AI without polarizing people in tech or strongly associating AI safety with environmental issues. If that’s how it plays out, great.
There’s also a world where a bill does pass, I’m wrong, and it turns out to be a net positive. If a moratorium is in place for 6–12 months, maybe it slows capability development just enough to push the really important decisions to a more competent presidential administration. The 2028 election is still over two years out, but prediction markets currently have Gavin Newsom, JD Vance, and Marco Rubio as the leading contenders. On AI policy, I think I prefer a decision from that group of possible presidents to the current one.
Then what should we do?
I don’t have all the answers. But there is a narrow window to build the political support needed to effectively handle the societal challenges ahead. The 2028 election is likely one of the most important in our century. We should learn from past mistakes and be extremely smart about our political strategy leading up to this moment. Don’t waste political capital on data center moratoriums. Instead, convince legislators that frontier AI could be used to develop novel bioweapons. Talk about how AI can weaken nuclear deterrence. Help them understand alignment. Educate the public about the dangers of ASI, build a non-partisan AI safety coalition, and pass legislation to implement and enforce safety guidelines and monitoring domestically. The big challenge is getting an international treaty banning the development of superintelligence, with verification.