America’s AI Action Plan Is Pretty Good

by Zvi
25th Jul 2025
Don't Worry About the Vase
13 comments, sorted by top scoring
habryka:

FWIW, the AI Action Plan seems pretty terrible to me. Close to the worst piece of policy work I've seen. It's centrally a call for full economic mobilization to race towards the most powerful AI systems available, with no mention of catastrophic[1] or existential risks.

I don't know what's up with people's reactions to it. Yes, it's competently executed, but competent execution towards economic mobilization towards the most destructive race in human history is bad, not good! I would have much preferred a mealy-mouthed Biden EO with 15 muddled priorities over this. The right reaction to Nazi Germany competently executing a blitzkrieg is not praise!

Like, man, I don't know what's going on. These are the two central opening paragraphs of the action plan: 

The United States is in a race to achieve global dominance in artificial intelligence (AI). Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits. Just like we won the space race, it is imperative that the United States and its allies win this race. President Trump took decisive steps toward achieving this goal during his first days in office by signing Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” calling for America to retain dominance in this global race and directing the creation of an AI Action Plan.

Winning the AI race will usher in a new golden age of human flourishing, economic competitiveness, and national security for the American people. AI will enable Americans to discover new materials, synthesize new chemicals, manufacture new drugs, and develop new methods to harness energy—an industrial revolution. It will enable radically new forms of education, media, and communication—an information revolution. And it will enable altogether new intellectual achievements: unraveling ancient scrolls once thought unreadable, making breakthroughs in scientific and mathematical theory, and creating new kinds of digital and physical art—a renaissance.

No, this is completely insane. This is getting everything almost perfectly backwards. "Winning this AI race" will not usher in "a new golden age of human flourishing", it will result in almost the exact opposite. It has a very substantial chance of killing absolutely everyone you know and love and have ever loved and will ever love.

This is not a good plan. It's a terrible plan. Yes, it has competence, but competence towards bad aims does not help. I feel so incredibly confused about so many reasonable people suddenly endorsing this obviously insane and omnicidal plan. I hope I am missing something. 

This also applies to this post, which felt like it ignored the whole central MO of the plan to praise some random minor mention of AI interpretability, or for some reason praise marginal investment into AI datacenters.

Like, this whole situation has felt as if I was working at RAND concerned about nuclear war, having dedicated my life to and working with my colleagues and friends to prevent nuclear war, and the administration released a report with the headline: 

"America is in a race to achieve global dominance in nuclear offense. We are tripling our nuclear weapons stockpiles and are planning to make sure we have enough nuclear weapons to irradiate every single square inch on earth." 

and suddenly all my colleagues are cheering it on and calling the report a great report, praising how it does mention that maybe it would be nice to spend a few more resources on iodine supplementation to reduce damage from fallout, as I am sitting here crying as the last remaining bits of diplomatic goodwill and hope for internationally coordinated treaties on coordinating the AI takeoff evaporate.

I really hope I am wrong about this. I have read each page, but maybe I missed some sentence somewhere in there that's better than I thought it was, or maybe I misread something.

  1. ^

    The most I could find was a mention of biorisk in the context of preventing others from using AI to build bioweapons, without any mention of the scope of the risk, and centrally oriented around "bad actors" framed in a way that is ambiguous between foreign nation state actors and terrorists.

Thane Ruthenis:

Yes, it's competently executed

Is it?

It certainly signals that the authors have a competent grasp of the AI industry and its mainstream models of what's happening. But is it actually competent AI-policy work, even under the e/acc agenda?

My impression is that no, it's not. It seems to live in an e/acc fanfic about a competent US racing to AGI, not in reality. It vaguely recommends doing a thousand things that would be nontrivial to execute if the Eye of Sauron were looking directly at them, and the Eye is very much not doing that. On the contrary, the wider Trump administration is doing things that directly contradict the most key recommendations here (energy, chips, science, talent, "American allies"), and this document seems to pretend this isn't happening. A politically effective version of this document would have been written in a very different way; this one seems to be written mainly for entertainment/fantasizing purposes.

Like, it demonstrates that the people tasked with thinking about AI in the Trump administration have a solid enough understanding of the AI industry to recognize which policies would accelerate capability research. But that understanding hasn't translated into capability-positive policy decisions so far. Is there reason to think this plan's publication is going to turn that around...?

Is my take here wrong? I don't have much experience here, this is a strong opinion weakly held. (Addressing that question to @Zvi as well.) 

Zvi:

I believe that this is a competently executed plan from the perspective of those executing it, which is different from the White House's overall policy being competent; there are areas that those in charge of the plan lacked the power to do anything about (e.g. immigration, attacks on solar power, trade and alliances in general...)

habryka:

Sorry, by executed I meant "competently written". It does seem to me that this piece of policy is more grounded in important bottlenecks and detailed knowledge for AI progress than previous similar things.

I find it plausible that it might fail on implementation because it's modeling the Trump administration and America as too much of a thing that could actually execute this plan. I agree with you that it is not unlikely it will fall flat on those grounds, and that does give me some solace.

Zvi:

As I say up top, one must distinguish the rhetoric from the substance. The rhetoric is terrible, although not as terrible as my median expectation was for it, because of what is not said. On international treaties, I fail to see how anything here makes that situation any worse than baseline, including the rhetoric; given what has already been said and done, you shouldn't be crying about that more than you were last week.

On the substance, this was much better than expectations except that we agree it had unexpectedly competent execution. And I don't think this is anything like the level of 'full economic mobilization.' Setting aside competence level, it is hard to think of how this report could have been better given who was in charge of directing and approving the report. 

If you think things are so bad that the primary thing you want on realistic margins from America's AI policy is incompetent execution, if you want to say reality does not grade on a curve, then okay. I mean, I get it.

habryka:

The rhetoric is terrible, although not as terrible as my median expectation was for it, because of what is not said.

Hmm, the rhetoric seems really quite bad. Like, "this is the worst policy document I have read so far in terms of rhetoric relating to very powerful AI systems" bad. I think I would have even preferred it explicitly arguing or claiming that there is no long-term risk from AI, or explicitly dismissing x-risk, because that would have given it a factual grounding that would have implied some future opportunity to change course, but instead it seems like it fully occupies a fantasy land in which there are no catastrophic risks from AI that are not caused by foreign adversaries.

And also, this is in the reference class of an Executive Order, not a bill. The rhetoric is the central substance. Most of the provisions and suggestions in this plan are vague and up for interpretation and the rhetoric is the thing giving the central direction in which that vagueness will be interpreted. The main effect of this plan is the rhetoric. It has approximately no force beyond that. If something in this plan later turns out to be in conflict with the central vibe of the document, it will have little problem being changed or dropped, and if someone comes up with something new that is very aligned with the vibe, it will have little problem being added. That's my model of the nature of this kind of plan.

And I don't think this is anything like the level of 'full economic mobilization.'

I think it's clearly calling for full economic mobilization. I don't expect that to be the result, but the opening paragraph really sounds to me like it's saying that that would be the ideal response. "Winning the AI race will usher in a new golden age of human flourishing, economic competitiveness, and national security for the American people" is some really strong "this is basically the most important thing" rhetoric. 

Setting aside competence level, it is hard to think of how this report could have been better given who was in charge of directing and approving the report. 

I have no issue imagining how it could have been better? Like, I am not fully sure what you mean by "given who was in charge of directing and approving the report", but the administration has published tons of reports with really quite random policy positions, so I don't think the overton window is particularly narrow. Below I do a more thorough analysis on each heading in the plan, and it really doesn't seem hard for me to imagine how it could have been better.

But setting that aside, I mean, if your headline summary of this policy would have been "this is absolutely terrible AI policy. Like, among the worst AI policies that I think one could choose. Worse than approximately anything previously said on this topic, but still somehow better than anything else I could have imagined coming out of this administration", then sure, IDK, that seems like a fine take. 

But that's not what you said! You said this was a good plan, without basically any caveats. Of course it would have been better for there to be no plan. Of course any previous administration would have published a less bad plan. Saying something is a "good plan" is not normally meant to be interpreted as "a terrible plan that nonetheless is less destructive than most other plans that the people writing it could have come up with".

I feel like you are trying to pull an Everybody Knows move here. No, not everybody knows that this administration is so fucked that this level of destructiveness is the best we can hope for, and also, I get the vibe that you yourself are not internally clear about this. This seems vastly worse than e.g. the Biden executive order. Indeed, it seems to get just enough right to be really bad (understanding that AI is a huge deal and has large military implications), but not enough right to dig itself out of that hole.

If you think things are so bad that the primary thing you want on realistic margins from America's AI policy is incompetent execution

I don't primarily want incompetent execution! I want competent execution on things that help, which marginally means: 

  • Reducing not increasing investment in AI chip manufacturing
  • Not destroying the single global bottleneck for semiconductor manufacturing which provides the most obvious point of leverage for future global coordination
  • Framing AI takeoff as a coordination challenge with the central challenge being de-escalation, and especially pause-capability around critical capability thresholds, which are not too far in the future
  • Recognition and understanding that ASI needs to be modeled more like an autonomous adversary with its own goals and aims
  • Recognition that we have little traction on aligning or controlling superintelligent systems, and further research is unlikely to provide any reasonable safety guarantees in time

All of these are within the Overton window right now. I think the first and second ones are the least so, but I think it would have still been totally possible for a report like this to just straightforwardly say "we think it's super reckless to race towards AI especially under adversarial conditions and our top priority is diplomacy on preventing a race". Even this administration has in other contexts played up its diplomacy and peace-negotiation aspects extensively. Yes, of course many things have gone wrong on the way to this, but it's clearly within the realm of possibility if you imagine a few things happening differently just a few months back.

And look, I am not saying I want a plan that says all of these things, or even any of these things. I would have been fine with a plan that doesn't really talk about these things at all and mostly ignored the topic. Or a plan that focuses on some random open-source AI dimension which is a bit bad, but isn't committing you to a race. Or a plan that really had any other framing besides "an adversarial race towards AGI with the explicit stance that unfathomable riches and complete destruction of their enemies will await anyone who gets there first". That really is just so close to the worst possible stance to take on these things that I can imagine.

On international treaties, I fail to see how anything here makes that situation any worse than baseline, including the rhetoric, given what has already been said and done, you shouldn't be crying about that more than you were last week.

No, this action plan is a big deal. Every month is an opportunity to do better, and there has been no previous document with as much buy-in that framed things as explicitly as a race towards extremely powerful AI systems. This made a very meaningful difference. 

Again, the framing of "AGI is a super big deal, by which we mean if you get there first you win, you just get extremely rich, you can disempower all of your enemies, and everything is amazing and perfect" is not an overdetermined position! It is a position that appears almost chosen to be pessimal from the perspective of minimizing negative outcomes of AI race dynamics. I am not aware of really anyone in this administration, or any other administration across the world taking a stance like this on this topic. 

On the substance, this was much better than expectations except that we agree it had unexpectedly competent execution

I am assuming by the substance being better here that you think the actual things it advocates for doing are good in terms of outcomes, not just good in terms of execution. I don't buy this. Let's go through the Table of Contents and talk about what is actually being called for here: 

Pillar I: Accelerate AI Innovation

  • Remove Red Tape and Onerous Regulation
    • This is bad.
  • Ensure that Frontier AI Protects Free Speech and American Values
    • This is irrelevant
  • Encourage Open-Source and Open-Weight AI
    • This is bad.
  • Enable AI Adoption
    • This is bad
  • Empower American Workers in the Age of AI
    • This is bad
  • Support Next-Generation Manufacturing
    • This is very bad.
  • Invest in AI-Enabled Science
    • This is ok, maybe good, depends on how much it will be about accelerating AI supply chain things
  • Build World-Class Scientific Datasets
    • This could be pretty good, especially the genomic sequencing stuff
  • Advance the Science of AI
    • This is bad (this is just calling for more capability research)
  • Invest in AI Interpretability, Control, and Robustness Breakthroughs
    • This is kind of good! Or like, IDK, I don't think interpretability or robustness help much. But control is kind of nice. I am glad to see this.
  • Build an AI Evaluations Ecosystem
    • This (as framed) is bad! This is not risk evaluations. These are just "we need to get better metrics to Goodhart on to make progress go faster". Not all evals are good, especially if you built them with the purpose of iterating on them to make things go faster.
  • Accelerate AI Adoption in Government
    • This could be good. Possibly the best thing in the whole plan. I think the government somehow experiencing AI uplift and making better decisions is one of the few ways things could go well. It's plausible this pulls everything into the green, though I am currently far from believing that.
  • Drive Adoption of AI within the Department of Defense
    • This is bad! The department of defense should not experience uplift because we do not want a military arms race.
  • Protect Commercial and Government AI Innovations
    • This could be good, could be bad. Not a clear take. Security can be good when you want a more responsible actor to have a lead that might want to enforce a pause, but is bad if you are trying to reduce economic incentives to race.
  • Combat Synthetic Media in the Legal System
    • This is irrelevant

Pillar II: Build American AI Infrastructure

  • Create Streamlined Permitting for Data Centers, Semiconductor Manufacturing
    Facilities, and Energy Infrastructure while Guaranteeing Security
    • This is very bad
  • Develop a Grid to Match the Pace of AI Innovation
    • This is very bad
  • Restore American Semiconductor Manufacturing
    • This is very very bad, basically the worst, again the semiconductor supply chain bottleneck is approximately our biggest hope for future coordination
  • Build High-Security Data Centers for Military and Intelligence Community Usage
    • Eh, this could be fine, same as the security point above.
  • Train a Skilled Workforce for AI Infrastructure
    • This is bad
  • Bolster Critical Infrastructure Cybersecurity
    • This is mostly irrelevant
  • Promote Secure-By-Design AI Technologies and Applications
    • This is irrelevant
  • Promote Mature Federal Capacity for AI Incident Response
    • This could be quite good! It's framed around accidents, and while it really does not mention any autonomous AI systems causing issues, I do quite like having "AI incident response" as a priority.

Pillar III: Lead in International AI Diplomacy and Security

  • Export American AI to Allies and Partners
    • This could be good, could be bad, unsure. My guess is bad.
  • Counter Chinese Influence in International Governance Bodies
    • This is bad
  • Strengthen AI Compute Export Control Enforcement
    • I don't have a strong take on export controls. Seems like it has very badly damaged diplomatic efforts and indeed set us off on this whole path, but it could be quite good. I think overall probably good given where we are at.
  • Plug Loopholes in Existing Semiconductor Manufacturing Export Controls
    • Same as above, probably good
  • Align Protection Measures Globally
    • This is bad? It explicitly is like "let's really de-emphasize coordinating things globally, but instead try to focus conversations on the U.S. and its allies", i.e. it's intentionally calling for de-emphasis on US-China negotiations
  • Ensure that the U.S. Government is at the Forefront of Evaluating National Security
    Risks in Frontier Models
    • Eh, maybe good. It's all about understanding the military offensive nature of AI systems, which I do think is good to think about, but also seems like it will exacerbate arms races (but maybe not!)
  • Invest in Biosecurity
    • Mostly irrelevant by my lights, but could be good. Mostly concerned about bioterrorism.

This is a lot of "bads" and "really bads"! And again, given the overall framing of the whole document, I expect many more of the "goods" to be dropped, and many more "bads" to be added as a result of this plan. I agree with you there are some good things here, but all the things that are bad are the ones that matter the most, and basically all the things that are good are "nice to haves" that don't really matter, IMO.

Zvi:

This was very helpful to me and we had a good talk about things. 

I do think it is a correct criticism of my post to say that I should have emphasized more that I think the rhetoric used here and the administration's overall policy path is terrible. After seeing everyone else's responses be so positive, and after seeing Oliver put so much emphasis on the rhetoric versus the proposals, I'm sad about that, and plan to address that going forward, likely in the weekly (given reading patterns it would not do much to try and edit the post now). 

PolicyWonk:

Zvi, I disagree. I think there is no positive value to you taking a position on Trump admin rhetoric being good or bad. There are a zillion people commenting on that. 

Your comparative advantage is focusing on specifics; what does the AI action plan instruct federal agencies to do? How can the most important priorities be implemented most effectively? Who is thinking clearly about this and has a plan for e.g. making sure the US has an excellent federal response plan when warning shots happen? Signal boosting smart effective policies/policy thinkers is how you can best use your platform. Don't contaminate your message with partisanship.

Wading into "Trump is good/bad" discourse isn't going to change who wins the election. People you know already spent ridiculous amounts of money on this, and were crushed. Time to ignore politics and focus on policy: specifically, complicated tech policy that most people don't understand, that you have the ability to shape.

PolicyWonk:

Appreciate your take here, Habryka.

I agree it’s clear the AI Action Plan doesn’t reflect most of your priorities. A naive observer might say that’s because the accelerationists spent hundreds of millions of dollars building relationships with the group of people in charge of Congress and the White House, while the safety crowd spent their money on Biden and failed computer science research (i.e. aligning AI to “human values”). 

From this position of approximately no political capital, one interpretation of the AI action plan is that it’s fantastic that safety concerns got any concessions at all.

Here’s what a cynical observer might say: the government is basically incompetent at doing most things. So the majority of the accelerationist priorities in the plan will not be competently implemented. It’s not like the people who understand what’s going on with AI would choose slogging through the interagency process over making tons of money in industry. And even if they did, who’s going to be more effective at accelerating capabilities? Researchers making $100mil at Meta (who would be doing this work regardless of the Action Plan), or government employees (well known for being able to deliver straightforward priorities like broadband access to rural America)?

The cynical observer might go on to say: and that’s why you should also be pessimistic about the more interesting priorities being implemented effectively. 

But here is where optimistic do-gooders can step in: if people who understand AI spend their precious free time thinking as hard as possible about how to do important things--like making sure the US has an excellent system to forecast risks from AI--then there's a possibility that good ideas generated by think tanks/civil society will be implemented by the US Government. (Hey, maybe these smart altruistic AI people could even work for the government!) I really think this is a place people might be able to make a difference in US policy, quite quickly. 

MichaelDickens:

This was also my impression, but I didn't read much of the actual text of the plan so I figured Zvi knew better than me. But now my Aumann-updating-toward-Zvi and Aumann-updating-toward-Habryka have canceled out and I am back to my initial belief that the plan is bad.

I am also confused by the praise from Dean Ball who apparently worked on this plan. I thought he was pretty x-risk-pilled?

habryka:

I am also confused by the praise from Dean Ball who apparently worked on this plan. I thought he was pretty x-risk-pilled?

Dean Ball isn't that x-risk pilled (indeed his engagement with this community has been to argue against concern about x-risk). He does appear to be a pretty reasonable guy who does buy a bunch of arguments that AI could be dangerous in the future. See this post of his which I feel like gives a decent flavor on his perspective: https://www.hyperdimensional.co/p/where-we-are-headed 

teradimich:

I am sitting here crying as the last remaining bits of diplomatic goodwill and hope for internationally coordinated treaties on coordinating the AI takeoff evaporate.

We can still hope that we won't get AGI in the next couple of years. Society's attitude towards AI is already negative, and we're even seeing some congressmen openly discuss the existential risks. This growing awareness might just lead to meaningful policy changes in the future.

Kabir Kumar:

At the same time, the plan is trying to prepare us for AGI and its associated risks as best its authors can without explaining that it is doing this.

I think actually doing this would include Liability. 

I think overall this plan will accelerate capabilities and give a small fraction of its wording to safety-esque things, which will make some intellectual safetyists act happy in public, in the hope that showing how happy they are with what they got will lead to more like it that's closer to what they actually want.

Dean Ball seems to generally act against putting AI regulation in place for safety and seems to be good at making intellectual arguments to convince safetyists that actually this version of some safety regulation isn't the right one. 

America’s AI Action Plan Is Pretty Good

No, seriously. If you look at the substance, it’s pretty good.

I’ll go over the whole thing in detail, including the three executive actions implementing some of the provisions. Then as a postscript I’ll cover other reactions.

The White House Issues a Pretty Good AI Action Plan

There is a lot of the kind of rhetoric you would expect from a Trump White House. Where it does not bear directly on the actual contents and key concerns, I did my absolute best to ignore all the potshots. The focus should stay on the actual proposals.

The actual proposals, which are the part that matters, are far superior to the rhetoric.

This is a far better plan than I expected. There are a few points of definite concern, where the wording is ambiguous and one worries the implementation could go too far. Two in particular are the call for ensuring a lack of bias (not requiring bias, and removing any regulations that do, is great, whereas requiring your particular version of lack of bias is not, see the Biden administration) and the aim at state regulations, which could become extreme.

Otherwise, while this is far from a perfect plan or the plan I would choose, on the substance it is a good plan, a positive plan, with many unexpectedly good plans within it. There is a lot of attention to detail in ways those I’ve asked say reflect people who actually know what they are doing, which was by no means something to be taken for granted. It is hard to imagine that a much better plan could have been approved given who was doing the approving.

In particular, it is good enough that my primary objection in most places is ‘these provisions lack sufficient teeth to accomplish the goal,’ ‘I don’t think that approach looks to be especially effective’ or ‘that is great and all but look at what you left out.’

It does seem worth noting that the report opens by noting it is in Full Racing Mindset:

The United States is in a race to achieve global dominance in artificial intelligence (AI). Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits. Just like we won the space race, it is imperative that the United States and its allies win this race.

…

Winning the AI race will usher in a new golden age of human flourishing, economic competitiveness, and national security for the American people.

Not can. Will. There are, says this report up top, no potential downside risks to be considered, no obstacles we have to ensure we overcome.

I very much get the military and economic imperatives, although I always find the emphasis on ‘setting standards’ rather bizarre.

The introduction goes on to do the standard thing of listing some upsides.

Beyond that, I’ll briefly discuss the rhetoric and vibes later, in the reactions section.

Then we get to the actual pillars and plans.

The three pillars are Accelerate AI Innovation, Build American AI Infrastructure and Lead In International AI Diplomacy and Security.

Clauses in the plan are here paraphrased or condensed for length and clarity, in ways I believe preserve the important implications.

Pillar I: Accelerate AI Innovation

Remove Red Tape and Onerous Regulations

The plan appears to be using federal AI funding as a point of leverage to fight against states doing anything they deem ‘overly burdensome’ or ‘unduly restrictive,’ and potentially leverage the FCC as well. They direct OMB to ‘consider a state’s regulatory climate’ when directing AI-related funds, which they should be doing already to at least consider whether the funds can be well spent.

The other recommended actions are having OSTP and OMB look for regulations hindering AI innovation and adoption and work to remove them, and to look through everything the FTC has done to ensure it is not getting in the way; the FTC is definitely getting unhelpfully in the way via various actions.

The question then is, do the terms ‘overly burdensome’ or ‘unduly restrictive’ effectively mean ‘imposes any cost or restriction at all’?

There is a stated balancing principle, which is ‘prudent laws’ and states rights:

The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.

If this is focusing on algorithmic discrimination bills, which are the primary thing the FCC and FTC can impact, or ways in which regulations made it difficult to construct data centers and transmission lines, and wouldn’t interfere with things like NY’s RAISE Act, then that seems great.

If it is more general, and especially if it intends to target actually all regulations at the state level the way the moratorium attempted to do (if there hadn’t been an attempt, one would call this a strawman position, but it came close to actually happening), then this is rather worrisome. And we have some evidence that this might be the case, in addition to ‘if Trump didn’t want to have a moratorium we would have known that’:

Nancy Scola: At the "Winning the AI Race" event, Trump suggests he's into the idea of a moratorium on state AI regulation:

"We also have to have a single federal standard, not 50 different states regulating this industry of the future...

I was told before I got up here, this is an unpopular thing...but I want you to be successful, and you can't have one state holding you up."

People will frequently call for a single federal standard and not 50 different state standards, try to bar states from having standards, and then have the federal standard be ‘do what thou (thine AI?) wilt shall be the whole of the law.’ Which is a position.

Ensure That Frontier AI Protects Free Speech And American Values

The via negativa part of this, removing language related to misinformation, Diversity, Equity, and Inclusion (DEI), and climate change and leaving things neutral, seems good.

The danger is in the second clause:

Update Federal procurement guidelines to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.

This kind of language risks being the same thing Biden did only in reverse. Are we doomed to both camps demanding their view of what ‘free from ideological bias’ means, in ways where it is probably impossible to satisfy both of them at once? Is the White House going to demand that AI systems reflect its view of what ‘unbiased’ means, in ways that are rather difficult to do without highly undesirable side effects, and which would absolutely constitute ‘burdensome regulation’ requirements?

We have more information about what they actually mean because this has been operationalized into an executive order, with the unfortunate name Preventing Woke AI In The Federal Government. The ‘purpose’ section makes it clear that ‘Woke AI’ that does DEI things is the target.

Executive Order: While the Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace, in the context of Federal procurement, it has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas.

Given we are doing this at all, this is a promising sign in two respects.

  1. It draws a clear limiting principle that this only applies to Federal procurement and not to other AI use cases.
  2. It frames this as a negative obligation, to avoid sacrificing truthfulness and accuracy to ideological agendas, rather than a positive obligation of fairness.

The core language is here, and as Mackenzie Arnold says it is pretty reasonable:

Executive Order: procure only those LLMs developed in accordance with the following two principles (Unbiased AI Principles):

(a) Truth-seeking. LLMs shall be truthful in responding to user prompts seeking factual information or analysis. LLMs shall prioritize historical accuracy, scientific inquiry, and objectivity, and shall acknowledge uncertainty where reliable information is incomplete or contradictory.

(b) Ideological Neutrality. LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI. Developers shall not intentionally encode partisan or ideological judgments into an LLM’s outputs unless those judgments are prompted by or otherwise readily accessible to the end user.

I worry that the White House has not thought through the implications of (b) here.

There is a reason that almost every AI turns out to be, in most situations, some variation on center-left and modestly libertarian. That reason is they are all trained on the same internet and base reality. This is what results from that. If you ban putting fingers on the scale, well, this is what happens without a finger on the scale. Sorry.

But actually, complying with this is really easy:

(ii) permit vendors to comply with the requirement in the second Unbiased AI Principle to be transparent about ideological judgments through disclosure of the LLM’s system prompt, specifications, evaluations, or other relevant documentation, and avoid requiring disclosure of specific model weights or other sensitive technical data where practicable;

So that’s it then, at least as written?

As for the requirement in (a), this seems more like ‘don’t hire o3 the Lying Liar’ than anything ideological. I can see an argument that accuracy should be a priority in procurement. You can take such things too far but certainly we should be talking price.

Also worth noting:

make exceptions as appropriate for the use of LLMs in national security systems.

And also:

account for technical limitations in complying with this order.

The details of the Executive Order makes me a lot less worried. In practice I do not expect this to result in any change in procurement. If something does go wrong, either they issued another order, or there will have been a clear overreach. Which is definitely possible, if they define ‘truth’ in section 1 certain ways on some questions:

Nick Moran: Section 1 identifies "transgenderism" as a defining element of "DEI". In light of this, what do you understand "LLMs shall be truthful in responding to user prompts seeking factual information or analysis" to mean when a model is asked about the concept?

Mackenzie Arnold: Extending “truthfulness” to that would be a major overreach by the gov. OMB should make clear that truthfulness is a narrower concept + that seems compatible w/ the EO. I disagree with Section 1, and you’re right that there’s some risk truthfulness is used expansively.

If we do see such arguments brought out, we should start to worry.

On the other hand, if this matters because they deem o3 too unreliable, I would mostly find this hilarious.

Christopher Rufo: This is an extremely important measure and I'm proud to have given some minor input on how to define "woke AI" and identify DEI ideologies within the operating constitutions of these systems. Congrats to @DavidSacks, @sriramk, @deanwball, and the team!

David Sacks: When they asked me how to define “woke,” I said there’s only one person to call: Chris Rufo. And now it’s law: the federal government will not be buying WokeAI.

Again, that depends on what you mean by WokeAI. By some definitions none of the major AIs were ‘woke’ anyway. By others, all of them are, including Grok. You tell me. As is true throughout, I am happy to let such folks claim victory if they wish.

For now, this looks pretty reasonable.

The third suggestion, a call for CAISI to research and publish evaluations of Chinese models and their alignment properties, is great. I only wish they would do so in general, rather than focusing only on their alignment with CCP talking points in particular. That is only one of many things we should worry about.

Encourage Open-Source and Open-Weight AI

The actual proposals are:

  1. Intervene to commoditize the market for compute to enable broader access.
  2. Partner with tech companies to get better researcher access across the board.
  3. Build NAIRR operations to connect researchers and educators to resources.
  4. Publish a new AI R&D Strategic Plan.
  5. Convene stakeholders to drive open-source adaptation by smaller businesses.

The first three seem purely good. The fourth is ‘publish a plan’ so shrug.

I challenge that we want small businesses using open models over closed models, or in general that government should be intervening in such choices. In general most small businesses, I believe, would be better off with closed models because they’re better, and also China is far more competitive in open models, so by moving people off of OpenAI, Gemini or Anthropic you might be opening the door to them switching to Kimi or DeepSeek down the line.

The language here is ambiguous as to whether they’re saying ‘encourage small businesses to choose open models over closed models’ or ‘encourage small businesses to adopt AI at all, with an emphasis on open models.’ If it’s the second one, great, certainly offering technical help is most welcome, although I’d prefer to help drive adoption of closed models as well.

If it’s the first one, then I think it is a mistake.

It is also worth pointing out that open models cannot reliably be ‘founded on American values’ any more than we can sustain their alignment or defend against misuse. Once you release a model, others can modify it as they see fit.

Enable AI Adoption

Adoption (or diffusion) is indeed currently the thing holding back most AI use cases. As always, that does not mean that ‘I am from the government and I’m here to help’ is a good idea, so it’s good to see this is focused on limited scope tools.

  1. Establish regulatory sandboxes, including from the FDA and SEC.
  2. Convene stakeholders to establish standards and measure productivity gains.
  3. Create regular assessments for AI adoption, especially by DOD and IC.
  4. Prioritize, collect, and distribute intelligence on foreign frontier AI projects that may have national security implications

All four seem good, although I am confused why #4 is in this section.

Empower American Workers In The Age of AI

Mostly AI job market impact is going to AI job market impact. Government doesn’t have much leverage to impact how this goes, and various forms of ‘retraining’ and education don’t do much on the margin. It’s still cheap to try, sure, why not.

  1. Prioritize AI skill development in education and workforce funding streams.
  2. Clarify that AI literacy and AI skill programs qualify for IRS Section 132.
  3. Study AI’s impact on the labor market, including via establishing the AI Workforce Researcher Hub, to inform policy.
  4. Use discretionary funds for retraining for those displaced by AI.
  5. Pilot new approaches to workforce challenges created by AI.

Support Next-Generation Manufacturing

Well, we should definitely do that. What do you have in mind?

  1. Invest in it.
  2. Identify supply chain challenges.

Okay, sure.

Invest in AI-Enabled Science and Build World-Class Scientific Datasets

We should definitely do that too. I’m not sure what #7 is doing here but this all seems good.

  1. Invest in automated cloud-enabled labs for various fields.
  2. Support Focused-Research Organizations (FROs) and similar to use AI.
  3. Weigh release of high quality data sets when considering scientific funding.
  4. Require federally funded researchers to disclose (non-proprietary, non-sensitive) datasets used by AI.
  5. Make recommendations for data quality standards for AI model training.
  6. Expand access to federal data. Establish secure compute environments within NSF and DOE for controlled access to restricted federal data. Create an online portal.
  7. Explore creating a whole-genome sequencing program for life on federal lands.
  8. “Prioritize investment in theoretical, computational, and experimental research to preserve America’s leadership in discovering new and transformative paradigms that advance the capabilities of AI, reflecting this priority in the forthcoming National AI R&D Strategic Plan.”

Given where our current paradigm is headed I’m happy to invest in alternatives, although I doubt government funding is going to matter much there. Also, if you were serious about that, what the hell is up with all the other giant cuts to American academic and STEM funding? These are not distinct things.

Invest in AI Interpretability, Control and Robustness Breakthroughs

It is good to see that they recognize that this work is vital to winning the race, even for those who do not understand that the most likely winners of the AI race are the AIs.

  1. Launch a technology development program to advance AI interpretability, AI control systems and adversarial robustness.
  2. Prioritize fundamental advancements in interpretability.
  3. Coordinate an AI hackathon initiative to test AI systems for all this.

I am pleasantly surprised to see this here at all. I will say no more.

Build an AI Evaluations Ecosystem

Remember how we are concerned about how evals often end up only enabling capabilities development? Well, yes, they are highly dual use, which means the capabilities benefits can also be used to pitch the evals, see point #5.

  1. Publish guidelines for Federal agencies to conduct their own evaluations as they pertain to each agency’s mission.
  2. Support the development of the science of measuring and evaluating AI models.
  3. Meet at least twice a year with the research community on best practices.
  4. Invest in AI testbeds in secure real-world settings.
  5. Empower the collaborative establishment of new measurement science to identify proven, scalable and interoperable techniques and metrics to promote development of AI.

Either way, we can all agree that this is good stuff.

Accelerate AI Adoption in Government and in DOD

Another urgent priority all can agree upon. Certainly one can do it wrong, such as giving the wrong LLM unfettered access, but AI can greatly benefit government.

What are the proposals?

  1. Make CAIOC the interagency coordination and collaboration point.
  2. Create a talent-exchange program.
  3. Create an AI procurement toolbox, letting agencies choose and customize models.
  4. Implement an Advanced Technology Transfer and Sharing Program.
  5. Mandate that all agencies give out all useful access to AI models.
  6. Identify the talent and skills in DOD to leverage AI at scale. Implement talent development programs at DOD (why not everywhere?).
  7. Establish an AI & Autonomous Systems Virtual Proving Ground.
  8. Develop a streamlined process at DOD for optimizing AI workflows.
  9. “Prioritize DOD-led agreements with cloud service providers, operators of computing infrastructure, and other relevant private sector entities to codify priority access to computing resources in the event of a national emergency so that DOD is prepared to fully leverage these technologies during a significant conflict.”
  10. Make Senior Military Colleges hubs of AI R&D and talent development.

I quoted #9 in full because it seems very good and important, and we need more things like this. We should be thinking ahead to future national emergencies, and various things that could go wrong, and ensure we are in position to respond.

As someone without expertise it is hard to know how impactful this will be or if these are the right levers to pull. I do know it all seems positive, so long as we ensure that access is limited to models we can trust with this, so not Chinese models (which I’m confident they know not to do) and not Grok (which I worry about a lot more here).

Protect Commercial and Government AI Innovations

As in, collaborate with leading American AI developers to enable the private sector to protect AI innovations from security risks.

I notice there are some risks that are not mentioned here, including ones that have implications elsewhere in the document, but the principle here is what is important.

Combat Synthetic Media in the Legal System

I mean, okay I guess, throw the people some red meat.

  1. Consider establishing a formal guideline and companion voluntary forensic benchmark.
  2. Issue guidance to agencies to explore adopting a deepfake standard similar to Rules of Evidence Rule 901(c).
  3. File formal comments on any proposed deepfake-related additions to the ROE.

Pillar II: Build American AI Infrastructure

Everyone is rhetorically on the same page on this. The question is implementation. I don’t want to hear a bunch of bragging and empty talk, I don’t want to confuse announcements with accomplishments or costs with benefits. I want results.

Create Streamlined Permitting for Data Centers, Semiconductor Manufacturing Facilities, and Energy Infrastructure while Guaranteeing Security

  1. Categorical NEPA exemptions for data center activities with low impact.
  2. Expand use of FAST-41 to cover all data centers and related energy projects.
  3. Explore the need for a nationwide Clean Water Act Section 404 Permit.
  4. Streamline or reduce regulations under the Clean Air Act, Clean Water Act, Comprehensive Environmental Response, Compensation and Liability Act, and other related laws.
  5. Offer Federal land for data centers and power generation.
  6. Maintain security guardrails against adversaries.
  7. Expand efforts to accelerate and improve environmental review.

One does need to be careful with running straight through things like the Clean Air and Clean Water Acts, but I am not worried on the margin. The question is, what are we going to do about all the other power generation, to ensure we use an ‘all of the above’ energy solution and maximize our chances?

There is an executive order to kick this off.

Develop a Grid to Match the Pace of AI Innovation

We’ve all seen the graph where American electrical power is constant and China’s is growing. What are we going to do about it in general, not merely at data centers?

  1. Stabilize the grid of today as much as possible.
  2. Optimize existing grid resources.
  3. “Prioritize the interconnection of reliable, dispatchable power sources as quickly as possible and embrace new energy generation sources at the technological frontier (e.g., enhanced geothermal, nuclear fission, and nuclear fusion). Reform power markets to align financial incentives with the goal of grid stability, ensuring that investment in power generation reflects the system’s needs.”
  4. Create a strategic blueprint for navigating the energy landscape.

That sounds like a lot of ‘connect what we have’ and not so much ‘build more.’

This only ‘embraces new energy generation’ that is ‘at the technological frontier,’ as in geothermal, fission and fusion. That’s a great thing to embrace, but there are two problems.

The first problem is I question that they really mean it, especially for fission. I know they in theory are all for it, and there have been four executive orders reforming the NRC and reducing its independence, but the rules have yet to be revised and it is unclear how much progress we will get; they have 18 months, everything has to wait pending that, and the AI timeline for needing a lot more power is not so long. Meanwhile, where are the subsidies to get us building again to move down the cost curve? There are so many ways we could do a lot more. For geothermal I again question how much they are willing to do.

The second problem is why only at the so-called technological frontier, and why does this not include wind and especially solar? How is that not the technological frontier, and why does this government seem to hate them so much? Is it to own the libs? The future is going to depend on solar power for a while, and when people use terms like ‘handing over the future to China’ they are going too far but I’m not convinced they are going too far by that much. The same thing with battery storage.

I realize that those authoring this action plan don’t have the influence to turn that part of the overall agenda around, but it is a rather glaring and important omission.

Restore American Semiconductor Manufacturing

I share this goal. The CHIPS Act was a great start. How do we build on that?

  1. Continue focusing on removing unnecessary requirements from the CHIPS Act.
  2. Review semiconductor grant and research programs to ensure they accelerate integration of advanced AI tools into semiconductor manufacturing.

Point one seems great, the ‘everything bagel’ problem needs to be solved. Point two seems like meddling by government in the private sector, let them cook, but mostly seems harmless?

I’d have liked to see a much bigger push here. TSMC has shown they can build plants in America even under Biden’s rules. Under Trump’s rules it should be much easier, and this could shift the world’s fate and strategic balance. So why aren’t we throwing more at this?

Train a Skilled Workforce for AI Infrastructure

Similar training programs have consistently not worked in the past, so we should be skeptical of everything here other than incorporating AI skills into the existing educational system. Can we do better than the market here? Why does the government have a role here?

  1. Create a national initiative to identify high-priority occupations essential to AI-related infrastructure, to hopefully inform curriculum design.
  2. Create and fund industry-driven training programs co-developed by employers to upskill incumbent workers.
  3. Partner with education and workforce system stakeholders to expand early career exposure programs and pre-apprenticeships that engage middle and high school students in priority AI infrastructure occupations to create awareness and on ramps.
  4. Provide guidance on updating programs.
  5. Expand use of registered apprenticeships.
  6. Expand hands-on research training and development opportunities.

I’m a big fan of apprenticeship programs and getting early students exposed to these opportunities, largely because they are fixing an imposed mistake where we put kids forcibly in school forever and focus them away from what is important. So it’s good to see that reversed. The rest is less exciting, but doesn’t seem harmful.

The question I have is, aren’t we ‘everything bageling’ core needs here? As in, the obvious way to get skilled workers for these jobs is to import the talent via high skilled immigration, and we seem to be if anything rolling that back rather than embracing it. This is true across the board, and would on net only improve opportunities available for existing American workers, whose interests are best protected here by ensuring America’s success rather than reserving a small number of particular roles for them.

Again, I understand that those authoring this document do not have the leverage to argue for more sensible immigration policy, even though that is one of the biggest levers we have to improve (or avoid further self-sabotaging) our position in AI. It still is a glaring omission in the document.

Bolster Critical Infrastructure Cybersecurity

AI can help defend against AI, and we should do what we can. Again this all seems good, again I doubt it will move the needle all that much or be sufficient.

  1. Establish an AI Information Sharing and Analysis Center for AI security threats.
  2. Give private entities related guidance.
  3. Ensure sharing of known AI vulnerabilities to the private sector.

Promote Secure-By-Design AI Technologies and Applications

Secure is the new code word, but also here it does represent an impoverished threat model, with the worry being spurious or malicious inputs. I’m also not sure what is being imagined for an LLM-style AI to be meaningfully secure by design. Is this a Davidad style proof thing? If not, what is it?

  1. Continue to refine DOD’s Responsible AI and Generative AI Frameworks, Roadmaps and Toolkits.
  2. Publish an IC Standard on AI Assurance.

I also worry about whether this cashes out to anything? All right, we’ll continue to refine these things and publish a standard. Will anyone follow the standard? Will those who most need to follow it do so? Will that do anything?

I’m not saying not to try and create frameworks and roadmaps and standards, but one can imagine why if people are saying ‘AGI likely in 2028’ this might seem insufficient. There’s a lot of that in this document, directionally helpful things where the scope of impact is questionable.

Promote Mature Federal Capacity for AI Incident Response

Planning for incident response is great. I only wish they were thinking even bigger, both conceptually and practically. These are good first steps but seem inadequate for even the practical problems they are considering. In general, we should get ready for a much wider array of potential very serious AI incidents of all types.

  1. “Led by NIST at DOC, including CAISI, partner with the AI and cybersecurity industries to ensure AI is included in the establishment of standards, response frameworks, best practices, and technical capabilities (e.g., fly-away kits) of incident response teams.”
  2. Incorporate AI considerations into the Cybersecurity Incident & Vulnerability response playbooks.
  3. Encourage sharing of AI vulnerability information.

Pillar III: Lead in International AI Diplomacy and Security

Export American AI to Allies and Partners

I have had an ongoing pitched argument over the issue of the importance and appropriateness of US exports and the ‘American technological stack.’ I have repeatedly made the case that a lot of the arguments being made here by David Sacks and others are Obvious Nonsense, and there’s no need to repeat them here.

Again, the focus needs to be on the actual policy action planned here, which is to prepare proposals for a ‘full-stack AI export package.’

  1. “Establish and operationalize a program within DOC aimed at gathering proposals from industry consortia for full-stack AI export packages. Once consortia are selected by DOC, the Economic Diplomacy Action Group, the U.S. Trade and Development Agency, the Export-Import Bank, the U.S. International Development Finance Corporation, and the Department of State (DOS) should coordinate with DOC to facilitate deals that meet U.S.-approved security requirements and standards.”

This proposal seems deeply confused. There is no ‘full-stack AI export package.’ There are American (mostly Nvidia) AI chips that can run American or other models. Then there are American AI models that can run on those or other chips, which you do not meaningfully ‘export’ in this sense, which can also be run on chips located elsewhere, and which everyone involved agrees we should be (and are) happy to offer.

To the extent this doesn’t effectively mean ‘we should sell AI chips to our allies and develop rules for how the security on such sales has to work,’ I don’t know what it actually means. One cannot argue with that basic idea; we only talk price: who counts as an ally, and how many chips are we comfortable selling under what conditions? That is not specified here.

We have an implementation of this via executive order calling for proposals for such ‘full-stack AI technology packages’ that include chips plus AI models and the required secondary powers like security and cybersecurity and specific use cases. They can then request Federal ‘incentive and support mechanisms,’ which is in large part presumably code for ‘money,’ as per section 4, ‘mobilization of federal financing tools.’

Once again, this seems philosophically confused, but not in an especially scary way.

Counter Chinese Influence in International Governance Bodies

  1. Vigorously advocate for international AI governance approaches that promote innovation, reflect American values and counter authoritarian influence.

Anything else you want to list there while we are creating international AI governance standards and institutions? Anything regarding safety or security or anything like that? No? Just ‘promote innovation’ with no limiting principles?

It makes sense, when in a race, to promote innovation at home, and even to make compromises on other fronts to get it. When setting international standards, they apply to everyone, the whole point is to coordinate to not be in a race to the bottom. So you would think priorities would change. Alas.

I think a lot of this is fighting different cultural battles than the one against China, and the threat model here is not well-considered, but certainly we should be advocating for standards we prefer, whatever those may be.

Strengthen AI Compute Export Control Enforcement

This is a pleasant surprise given what else the administration has been up to, especially their willingness to sell H20s directly to China.

I am especially happy to see the details here, both exploration of using location services and enhanced enforcement efforts. Bravo.

  1. Explore leveraging new and existing location verification services.
  2. Establish a new effort led by DOC to collaborate with IC officials on global chip export control enforcement.

Plug Loopholes in Existing Semiconductor Manufacturing Export Controls

Again, yes, excellent. We should indeed develop new export controls in places where they are currently lacking.

Align Protection Measures Globally

Excellent. We should indeed work closely with our allies. It’s a real shame about how we’ve been treating those allies lately; things could be a lot easier.

  1. Develop, implement and share information on complementary technology protection measures, including in basic research and higher education.
  2. Develop a technology diplomacy strategic plan for an AI global alliance.
  3. Promote plurilateral controls for the AI tech stack while encompassing existing US controls.
  4. Coordinate with allies to ensure they adopt US export controls and prohibit US adversaries from supplying their defense-industrial base or acquiring controlling stakes in defense suppliers.

It always requires a double take when you’re banning exports and also imports, as here, where we don’t want to let people use adversary tech and also don’t want to let the adversaries use our tech. In this case it does make sense because of the various points of leverage, even though in most cases it means something has gone wrong.

Ensure that the U.S. Government is at the Forefront of Evaluating National Security Risks in Frontier Models

Eyeball emoji, in a very good way. Even if the concerns explicitly motivating this are limited in scope and exclude the most important ones, what matters is what we do.

  1. Evaluate frontier AI systems for national security risks in partnership with frontier AI developers, led by CAISI in collaboration with others.
  2. Evaluate risks from use of adversary AI systems and the relative capabilities of adversary versus American systems.
  3. Prioritize the recruitment of leading AI researchers at Federal agencies.
  4. “Build, maintain, and update as necessary national security-related AI evaluations through collaboration between CAISI at DOC, national security agencies, and relevant research institutions.”

Excellent. There are other related things missing, but this is great. Let’s talk implementation details. In particular, how are we going to ensure we get to do these tests before model release rather than afterwards? What will we do if we find something? Let’s make it count.

Invest in Biosecurity


You love to see it; this is the biggest practical near-term danger.

  1. Require proper screening and security for any labs getting federal funding.
  2. Develop a mechanism to facilitate data sharing between nucleic acid synthesis providers to help screen for fraudulent or malicious customers.
  3. Maintain national security-related AI evaluations.

Are those actions sufficient here? Oh, hell no. They are, however, very helpful.

Part 2: Reactions

Speechless

Dean Ball: Man, I don’t quite know what to say—and anyone who knows me will agree that’s rare. Thanks to everyone for all the immensely kind words, and to the MANY people who made this plan what it is. surreal to see it all come to fruition.

it’s a good plan, sir.


Zac Hill: Very clear y'all put a lot of work and thoughtfulness into this. Obviously you know I come into the space from a different angle and so there's obviously plenty of stuff I can yammer about at the object level, but it's clearly a thoughtful and considered product that I think would dramatically exceed most Americans' expectations about any Government AI Strategy — with a well-constructed site to boot!

Others, as you would expect, had plenty to say.

We Can All Just Get Along

It seems that yes, you can make both sides of an important issue pleasantly surprised at the same time, where both sides here mean those who want us to not all die (the worried), and those who care mostly about not caring whether we all die, or about maximizing Nvidia’s market share (the unworried).

Thus, you can get actual Beff Jezos telling Dean Ball he’s dropped his crown, and Anthropic saying they are encouraged by the exact same plan.

That is for three reasons.

The first reason is that the worried care mostly about actions taken and the resulting consequences, and many of the unworried care mostly about the vibes. The AI Action Plan has unworried and defiant vibes, while taking remarkably wise, responsible and prescient actions.

The second reason is that, thanks in part to the worried having severely lowered expectations, both because we are stuck for now within an adversarial race and because of what we can reasonably ask of this administration, mostly everyone involved agrees on what is to be done on the margin. Everyone agrees we must strengthen America’s position relative to China, that we need to drive more AI adoption in both the public and private sectors, that we will need more chips and more power and transmission lines, that we need to build state capacity on various fronts, and that we need strong export controls and we want our allies using American AI.

There are places where there are tactical disagreements about how best to proceed with all that, especially around chip sales, which the report largely sidesteps.

There is a point where safety and security would conflict with rapid progress, but at anything like current margins security is capability. You can’t deploy what you can’t rely upon. Thus, investing vastly more than we do on alignment and evaluations is common sense even if you think there are no tail risks other than losing the race.

The third reason is that competence matters. Ultimately we are all on the same side. This is a thoughtful, well-executed plan. That’s win-win, and it’s highly refreshing.

Okay Maybe Not All Of Us But What You Gonna Do

Worried and unworried? Sure, we can find common ground.

The Trump White House and Congressional Democrats? You don’t pay me enough to work miracles.

Where did they focus first? You have three guesses. The first two don’t count.

We are deeply concerned about the impacts of President Trump’s AI Action Plan and the executive orders announced yesterday.

“The President’s Executive Order on “Preventing Woke AI in the Federal Government” and policies on ‘AI neutrality’ are counterproductive to responsible AI development and use, and potentially dangerous.

To be clear, we support true AI neutrality—AI models trained on facts and science—but the administration's fixation on ‘anti-woke’ inputs is definitionally not neutral. This sends a clear message to AI developers: align with Trump’s ideology or pay the price.

It seems highly reasonable to worry that this is indeed the intention, and certainly it is fair game to speak about it this way.

Next up we have my other area of concern, the anti-regulatory dynamic going too far.

“We are also alarmed by the absence of regulatory structure in this AI Action Plan to ensure the responsible development, deployment, or use of AI models, and the apparent targeting of state-level regulations. As AI is integrated with daily life and tech leaders develop more powerful models, such as Artificial General Intelligence, responsible innovation must go hand in hand with appropriate safety guardrails.

In the absence of any meaningful federal alternative, our states are taking the lead in embracing common-sense safeguards to protect the public, build consumer trust, and ensure innovation and competition can continue to thrive.

We are deeply concerned that the AI Action Plan would open the door to forcing states to forfeit their ability to protect the public from the escalating risks of AI, by jeopardizing states’ ability to access critical federal funding. And instead of providing a sorely needed federal regulatory framework that promotes safe model development, deployment, and use, Trump’s plan simultaneously limits states and creates a ‘wild west’ for tech companies, giving them free rein to develop and deploy models with no accountability.

Again, yes, that seems like a highly valid thing to worry about in general, although also once again the primary source of that concern seems not to be the Action Plan or the accompanying Executive Orders.

On their third objection, the energy costs, they mostly miss the mark by focusing on hyping up marginal environmental concerns, although they are correct about the critical failure to support green energy projects. Again, it seems very clear an ‘all of the above’ approach is necessary, and that’s not what we are getting.

As Peter Wildeford notes, it is good to see the mention here of Artificial General Intelligence, which means the response mentions it one more time than the plan.

Good Vibes Only

This applies to both the documents and the speeches. I have heard that the mood at the official announcement was highly positive and excited, emphasizing how amazing AI would be for everyone and how excited we all are to build.

Director Michael Kratsios: Today the @WhiteHouse released America’s AI Action Plan to win the global race.

We need to OUT-INNOVATE our competitors, BUILD AI & energy infrastructure, & EXPORT American AI around the world. Visit http://AI.gov.

Juan Londono: There's a lot to like here. But first and foremost, it is refreshing to see the admin step away from the pessimism that was reigning in AI policy the last couple of years.

A lot of focusing on how to get AI right, instead of how not to get it wrong.

I am happy to endorse good vibes and excitement, there is huge positive potential all around and it is most definitely time to build in many ways (including lots of non-AI ways, let’s go), so long as we simultaneously agree we need to do so responsibly, and we prepare for the huge challenges that lie ahead with the seriousness they deserve.

There’s no need for that to dampen the vibes. I don’t especially care if everyone involved goes around irrationally thinking there’s a 90%+ chance we are going to create minds smarter and more competitive than humans and this is all going to work out great for us humans, so long as that makes them then ask how to ensure it does turn out great and then they work to make that happen.

The required and wise actions at 90% success are remarkably similar to those at 10% success, especially at current margins. Hell, even if you put success at 100% and think we’ll muddle through regardless, those same precautions help us muddle through quicker and better. You want to prepare and create transparency, optionality and response capacity.

Irrational optimism can have its advantages, as many of the unworried know well.

Perhaps one can even think of humanity’s position here as akin to a startup. You know on some level, when founding a startup, that ~90% of them will fail, and the odds are very much against you, but that the upside is big enough that it is worth taking the shot.

However, you also know that if you want to succeed, you can’t go around thinking and acting as if you have a 90% chance of failure. You certainly can’t be telling prospective funders and employees that. You need to think you have a 90% chance of success, not failure, and make everyone involved believe it, too. You have to think You Are Different. Only then can you give yourself the best chance of success. Good vibes only.

The tricky part is doing this while correctly understanding all the ways 90% of startups fail and what it actually takes to succeed, while ensuring that things won’t be too terrible if you fail, ideally setting yourself up to fail gracefully if that happens, and acting accordingly. You simultaneously want to throw yourself into the effort with the drive of someone expecting to succeed, without losing your head.

You need confidence, perhaps Tomfidence, well beyond any rational expectation.

And you know what? If that’s what it takes, that works for me. We can make a deal. Walk the walk, even if to do that you have to talk a different talk.

I mean, I’m still going to keep pointing out the actual situation. That’s how some of us roll. You gotta have both. Division of labor. That shouldn’t be a problem.

Dare Not Speak Its Name

Peter Wildeford headlines his coverage with the fact that Rubio and Trump are now officially saying that AI is a big deal, a new industrial revolution, and he highlights the increasing attention AGI and even superintelligence are starting to get in Congress, including concerns by members about loss of control.

By contrast, America’s AI Action Plan not only does not mention existential risks or loss of control issues (although it does call for investment into AI interpretability, control and robustness in the context of extracting more mundane utility), the AI Action Plan also does not mention AGI or Artificial General Intelligence, or ASI or Superintelligence, either by those or other names.

There is nothing inconsistent about that. AI, even if we never get AGI, is still likely akin to a new industrial revolution, and is still a big freaking deal, and indeed in that case the AI Action Plan would be even more on point.

At the same time, the plan is trying to prepare us for AGI and its associated risks as best its authors can without explaining that it is doing this.

Other Reactions In Brief

Steven Adler goes through the key points in the plan in this thread, emphasizing the high degree of competence and work that clearly went into all this and highlighting key useful proposals, while expressing concerns similar to mine.

Timothy Lee notes the ideas for upgrading the electrical grid.

Anthropic offers its thoughts by focusing on and praising in detail what the plan does right, and then calling for further action on export controls and transparency standards.

xAI endorsed the ‘positive step towards removing regulatory barrier and enabling even faster innovation.’

Michael Dell offers generic praise.

Harlan Stewart notes that the AI Action Plan has some good stuff, but that it does not take the emerging threat of what David Sacks called a ‘potential successor species’ seriously, contrasting it with past events like the Montreal Protocol, Manhattan Project and Operation Warp Speed. That’s true both in the sense that it doesn’t mention AGI or ASI at all, and in that the precautions mentioned mostly lack both urgency and teeth. Fair enough. Reality does not grade on a curve, but also we do the best we can under the circumstances.

Daniel Eth is pleasantly surprised and has a thread pointing out various good things, and noting the universally positive reactions to the plan, while expressing disappointment at the report not mentioning AGI.

Danny Hauge offers a breakdown, emphasizing the focus on near term actions, and that everything here is only a proposal, while noting the largely positive reaction.

Christopher Covino’s considered reaction is ‘a very promising start,’ with the issues being what is missing rather than objecting to things that are included.

Remarks of Note

Trump advocates for not applying copyright to AI training, which is indeed current American law, and also says that America is ‘very very substantially’ ahead of China on AI.

Joe Allen: Trump talking about AI as an unstoppable "baby" being "born" — one that must "grow" and "thrive" — is somewhere between Terminator and The Omen.

I am not one who lives by the vibe, yet sometimes I wish people could listen.

Conclusion: Helpful Is The Watchword

My read is: Insufficient but helpful is the theme here. There are a lot of very good ideas on the list, including many I did not expect, several of which are potentially impactful.

There are two particular points of substantive concern, where the wording could imply something that could get out of control: bias policing and going after state regulations.

Having seen the executive order on bias, I am not terribly worried there, but we need to keep an eye out to see how things are interpreted. On going after state regulations, I continue to see signs we do indeed have to worry, but not primarily due to the plan.

Mostly, we are in a great position on substance. The plan is net helpful, and the main thing wrong with its substance is not what is in it but what is missing from it: the issues that are not addressed, or where the actions seem to lack sufficient teeth. That doesn’t mean this puts us on a path to survive, but I was very worried this would be net destructive and instead it is net helpful.

I am less happy with the rhetoric, which is hostile and inflicts pain upon the reader throughout, and most importantly does not deem many key concerns, including the most important concerns of all, worthy of mention. That is worrisome, but it could have been far worse, and what matters most is the substance.

Given the way things have been otherwise going, I am very happy with the substance of this plan, which means I am overall very happy with the plan. I offer my thanks and congratulations to those involved in its creation, including Dean Ball. Great work.