LESSWRONG
AI Governance, AI
Personal Blog

AI companies' policy advocacy (Sep 2025)

by Zach Stein-Perlman
29th Sep 2025

Strong regulation is not on the table, and all US frontier AI companies oppose it to varying degrees. Weak safety-relevant regulation is happening; some companies say they support it and some say they oppose it. (Some state regulation not relevant to AI safety, often confused, is also happening; I assume companies oppose it, but I don't really pay attention to this.) Companies besides Anthropic support federal preemption of state laws, even without an adequate federal framework. Companies advocate for non-regulation-y things like building government capacity and sometimes export controls.

My independent impression is that current regulatory efforts—SB 53 and the RAISE Act—are too weak to move the needle nontrivially. But the experts I trust are more optimistic, and so all-things-considered I think SB 53 and the RAISE Act would be substantially good (for reasons I don't really understand).

This post is based on my resource https://ailabwatch.org/resources/company-advocacy; see that page for more primary sources. This is all based on public information. My impression is that the companies are even more anti-regulation in private than in public.

US state bills

SB 1047 (2024)

California's SB 1047 (summary by supporters) was endorsed by xAI CEO Elon Musk. It was opposed by major AI companies:

  • Google opposed SB 1047 and registered a position of "Oppose Unless Amended"
  • Meta opposed SB 1047 and registered a position of "Concerns"
  • OpenAI opposed SB 1047
  • Anthropic was neutral on the final version of SB 1047;[1] previously it did not support SB 1047 and registered a position of "Support If Amended"
  • SB 1047 was opposed by trade groups representing major AI companies. See here for more links.

SB 53 (2025)

SB 53 (summary) seems particularly light-touch. Anthropic supports it, and after it passed the legislature Meta said "While there are areas for improvement, SB 53 is a step in [the right] direction." OpenAI opposes it; its letter is wrong/deceitful.

RAISE Act (2025)

New York's Responsible AI Safety and Education Act is opposed by trade groups including the Computer & Communications Industry Association (representing Amazon, Google, and Meta), the AI Alliance (representing Meta), and Tech:NYC (representing Amazon, Google, Meta, and Microsoft). Anthropic policy lead Jack Clark is also critical.

US federal preemption

Preemption of state AI laws is supported by Meta and OpenAI, and Google endorses preemption alongside a light-touch federal framework. Preemption is also supported by trade groups including CCIA, TechNet, and INCOMPAS, as well as "Lobbyists acting on behalf of Amazon, Google, Microsoft and Meta." Preemption is opposed by Anthropic.

AI companies including OpenAI, Meta, and Google supported a proposed federal moratorium on state AI laws in August 2025.

EU AI Act

OpenAI, Google, Meta, Microsoft, and others (but not Anthropic) were caught lobbying against the relevant part of the EU AI Act. They seem to have avoided opposing it publicly. European AI companies Mistral AI and Aleph Alpha also convinced their home countries—France and Germany—to oppose the Act. See here for more links.

(After the Code of Practice was finalized, it was signed by OpenAI, Anthropic, Google, Microsoft, and Amazon. xAI signed just the safety and security chapter; Meta refused to sign.)

Super PACs

Three large pro-innovation super PACs were announced in August–September 2025: Leading the Future, supposedly with "more than $100 million" and involving OpenAI executives Greg Brockman and Chris Lehane; and two Meta-funded PACs, Meta California and the American Technology Excellence Project, each supposedly with "tens of millions" from Meta. Leading the Future will presumably be deceptive: Lehane and a16z have historically been deceptive in political advocacy, and Leading the Future plans to emulate Fairshake, a low-integrity PAC, and is led by one of the same people.

Policies companies support

When AI companies propose policy, they generally focus on investing in AI infrastructure, government AI adoption, lack of regulation, and sometimes export controls. Anthropic's recommendations are better for safety than other companies', despite not including real regulation;[2] for example, Anthropic sometimes recommends government eval capacity, government helping companies improve security, and transparency standards.

Misc

Anthropic & Clark

Jack Clark leads Anthropic's policy advocacy. He mostly says regulation is premature or should not be burdensome. Sometimes he emphasizes competition with China and says things like "I think the greatest risk is us [i.e. America] not using it [i.e. AI]." More generally, Anthropic basically opposes regulation that goes beyond transparency (but its advocacy is otherwise reasonable, as mentioned above).

OpenAI & Lehane

Chris Lehane leads OpenAI's policy advocacy. His political advocacy has been deceptive both recently and historically. He says:

Maybe the biggest risk here is actually missing out on the opportunity. There was a pretty significant vibe shift when people became more aware and educated on this technology and what it means.

Companies like ours have gotten pretty comfortable with how we're deploying this stuff in a responsible way, and understand the real challenge here is to make sure this opportunity is realized.

Elsewhere, he says his two big concerns are broadly distributing the benefits of AI and America beating China.

Other AI companies, including Amazon and Microsoft, also advocate against regulation (and Nvidia advocates against export controls, often deceptively). But I have less to say here.


Subscribe on Substack.

  1. Anthropic's letter to the governor said "In our assessment the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs. However, we are not certain of this, and there are still some aspects of the bill which seem concerning or ambiguous to us." See also Jack Clark's tweet. I think many people have misinterpreted this as more supportive than it actually is. If you believe that a bill like this is only slightly better than nothing, the correct response may be not to enact it but rather to aim for a bill with less downside in the future; indeed, that's what the governor did.

  2. One blogpost notwithstanding.