According to various sources, the US Supreme Court is poised to rule on, and potentially overturn, the principle of "Chevron deference." Chevron deference is a key legal principle by which the entire federal bureaucracy functions; the case that established it is perhaps the most cited in American administrative law. Basically, it says that when Congress establishes a federal agency and there is ambiguity in the statutes determining the scope of the agency's powers and goals, courts will defer to the agency's interpretation of that scope as long as it is reasonable. While the original ruling seems to have merely codified previously implicit rules regarding the legal authority of federal agencies, the practice seems likely to have increased the power and authority of the agencies, both because it has enabled them to act without much congressional oversight and because they tend to interpret their powers and goals rather broadly. I am not a legal expert, but it seems to me that without something like Chevron deference, the federal bureaucracy basically could not function in its contemporary form. Without it, Congress would have to establish agencies with much more precisely specified goals and powers, which seems very difficult given the technocratic complexity of many regulations and the fact that politicians often have limited understanding of these details.

Given that the ruling has expanded the regulatory capacity of the state, it is opposed by many conservative judges. Moreover, the Supreme Court is currently dominated by a conservative majority, as reflected by its recent affirmative action and abortion decisions. A market on Manifold Markets is trading at 62% that the Court will overturn Chevron, and while only two people have traded on it, it altogether seems pretty plausible that the ruling will be overturned in some form.

While overturning Chevron deference seems likely to have positive effects for many industries which I think are largely overregulated, it could be quite bad for AI governance. Assuming that the regulation of AI systems is conducted by a federal agency (either a pre-existing one or a new one designed for AI, as several politicians have suggested), I expect that the bureaucrats and experts who staff the agency will need a fair amount of autonomy to do their job effectively. This is because the questions relevant to AI regulation (e.g., which evals systems are required to pass) are more technically complicated than those in most other regulatory domains, which are already too complicated for politicians to understand well. As a result, an ideal agency for regulating AI would probably have a pretty broad range of powers and goals and would specifically be empowered to settle the aforementioned details of AI regulation based on the judgment of AI safety experts rather than politicians. While I expect that such agencies could still exist in some form even if the court overturns Chevron, I am quite uncertain about this, and it seems possible that a particularly strong ruling could jeopardize the existence of autonomous federal agencies run largely by technocrats.

The outcome of the upcoming case is basically entirely out of the hands of the AI safety community, but it seems like something that AI policy people should be paying attention to. If the principle is overturned, AI policy could become much more legally difficult and complex, and this could in turn raise the value of legal expertise and experience for AI governance efforts.

River:

I don't think I see the problem. Chevron deference is, as you say, about whether courts defer to agencies' interpretations of statutes. It comes up when an agency thinks one interpretation is best, and a court thinks a different interpretation is the best reading of the statute, but the agency's preferred interpretation is still a plausible reading. In that case, under Chevron, the court defers to the agency's interpretation. Do away with Chevron, and the court will follow what it thinks is the best reading of the statute. This is, I should note, the background of what courts usually do and did before Chevron. Chevron is an anomaly.

In terms of implications, I think it is true that agencies will tend to interpret their mandates broadly, so doing away with Chevron deference will, at the margin, reduce the scope of some agencies' powers. But I don't see how it could lead to the end of the administrative state as we know it. Agencies will still have jobs to do that are authorized by statute, and courts will still let agencies do those jobs.

So what does AI regulation look like? If it looks like Congress passing a new statute to either create a new agency or authorize an existing agency to regulate AI, then whether Chevron gets overturned seems irrelevant: Congress is quite capable of writing a statute that authorizes someone to regulate AI, with or without Chevron. If it looks like an existing agency reading an existing statute correctly to authorize it to regulate some aspect of AI, then again, that should work fine with or without Chevron. If, on the other hand, it looks like an existing agency over-reading an existing statute to claim authority it does not have to regulate AI, then (1) that seems horribly undemocratic, though if the fate of humanity is on the line then I guess that's okay, and (2) maybe the agency does it anyway, it takes years to get fought out in court, and that buys us the time we need. But if the court ruling causes the agency not to try to regulate AI, or if the years-long court fight doesn't buy enough time, we might actually have a problem here. I think this argument needs more details fleshed out. Which particular agency do we think might over-read which particular statute to regulate AI? If we aren't already targeting a particular agency with arguments about a particular statute, and have a reasonable chance of getting it to regulate for AI safety rather than AI ethics, then worrying about the courts seems pointless.

I think you're probably right. But even this will make it harder to establish an agency in which the bureaucrats/technocrats have a lot of autonomy, and there seems to be at least a small chance of an extreme ruling that would make doing so very difficult.

Harder, yes; extremely difficult, I'm much less convinced. In any case, Chevron was already dealt a blow in 2022, so those lobbying Congress to create an AI agency of some sort should be encouraged to explicitly give it a broad mandate (e.g., the authority to settle various major economic or political questions concerning AI).

It might also make it easier. You can use the fact that Chevron was overruled to justify writing broad powers into the new AI safety regulation. 

I am a law student between my first and second years.

I would estimate the probability of an overruling of Chevron in Loper Bright Enterprises v. Raimondo to be at least 85%. Gorsuch, Thomas, and Alito are >99% votes in favor of such an outcome. Among Roberts, Kavanaugh, and Barrett, I think it very likely that at least two join. If this kind of thing interests you, I recommend reading Gorsuch's concurrence in Gutierrez-Brizuela, a 10th Circuit case (<https://casetext.com/case/gutierrez-brizuela-v-lynch> scroll down to find the concurrence), as it is probably representative of the kind of reasoning that an overruling of Chevron will involve.
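For what it's worth, the vote arithmetic behind an estimate like this is easy to check. Below is a minimal sketch in Python, assuming the three liberal justices vote to uphold Chevron (so overruling requires at least five of the six conservative justices), treating votes as independent, using the >99% figure above for Gorsuch, Thomas, and Alito, and plugging in purely illustrative probabilities for Roberts, Kavanaugh, and Barrett:

```python
from itertools import product

# Per-justice probabilities of voting to overrule Chevron.
# Gorsuch/Thomas/Alito use the >99% figure from the comment;
# the Roberts/Kavanaugh/Barrett numbers are illustrative assumptions.
p_overrule = {
    "Gorsuch": 0.99,
    "Thomas": 0.99,
    "Alito": 0.99,
    "Roberts": 0.75,    # assumed
    "Kavanaugh": 0.80,  # assumed
    "Barrett": 0.80,    # assumed
}

justices = list(p_overrule)
total = 0.0
for votes in product([0, 1], repeat=len(justices)):
    # Overruling needs a 5-justice majority; assuming the three
    # liberal justices vote to uphold, 5 of these 6 must join.
    if sum(votes) >= 5:
        weight = 1.0
        for justice, vote in zip(justices, votes):
            p = p_overrule[justice]
            weight *= p if vote else 1 - p
        total += weight

print(f"P(Chevron overruled) ~ {total:.0%}")  # ~87% under these assumptions
```

Under these made-up numbers the total comes out around 87%, in the same ballpark as the 85% estimate; lowering the Roberts/Kavanaugh/Barrett figures to 0.70-0.75 brings it down to roughly 81%.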

While overturning Chevron deference seems likely to have positive effects for many industries which I think are largely overregulated, it could be quite bad for AI governance. Assuming that the regulation of AI systems is conducted by a federal agency (either a pre-existing one or a new one designed for AI, as several politicians have suggested), I expect that the bureaucrats and experts who staff the agency will need a fair amount of autonomy to do their job effectively. This is because the questions relevant to AI regulation (e.g., which evals systems are required to pass) are more technically complicated than those in most other regulatory domains, which are already too complicated for politicians to understand well.

Why do you think that the same federal bureaucrats who incompetently overregulate other industries will do a better job regulating AI?

Why do you think that the same federal bureaucrats who incompetently overregulate other industries will do a better job regulating AI?

Chevron deference means that judges defer to federal agencies instead of interpreting the laws themselves where the statute is ambiguous. It's not so much a question of overregulation vs underregulation as it is about who is doing the interpretation. For example, would you rather the career bureaucrats in the Environmental Protection Agency determine what regulations are appropriate to protect drinking water or random judges without any relevant expertise?

One consequence of blowing up Chevron deference is that a single activist judge in Texas can unilaterally invalidate, for the entire country, FDA approval of a drug like mifepristone that has been safe, effective, and available on the market for decades, substituting his own idiosyncratic opinion for that of the regulatory agency whose entire purpose is to make those kinds of determinations.

Government agencies aren't always competent but the alternative is a patchwork of potentially conflicting decisions from judges ruling outside of their area of expertise.

If the EPA were to say, "You are not allowed to build any new buildings in city X, because that would reduce drinking water quality," you have to think about whether this is a power the EPA should have, because there are tradeoffs between drinking water quality and other public goods.

Having a law that specifies what kinds of actions the EPA is allowed to take to improve drinking water quality, instead of just letting the EPA do whatever it wants, makes sense.

The FAA's requirements for SpaceX include the following:

The company will also contribute to local education and preservation efforts — including preparing a historical context report of the events of the Mexican War and the Civil War that took place in the area as well as replacing missing ornaments on a local historical marker. 

With regulations like this, it seems there's a lot of regulatory overreach, and it would be useful to rein it in.

Chevron deference means that judges defer to federal agencies instead of interpreting the laws themselves where the statute is ambiguous.

Which is as it should be, according to the way the US system of government is set up. The legislative branch makes the law. The executive branch enforces the law. The judicial branch interprets the law. This is a fact that every American citizen ought to know, from their grade-school civics classes.

For example, would you rather the career bureaucrats in the Environmental Protection Agency determine what regulations are appropriate to protect drinking water or random judges without any relevant expertise?

I would much rather have an impartial third party determine which regulations are appropriate than a self-interested bureaucrat. Otherwise, what's the point of having a judicial system at all, if the judges are just going to yield to the executive on all but a narrow set of questions?

Government agencies aren’t always competent but the alternative is a patchwork of potentially conflicting decisions from judges ruling outside of their area of expertise.

Which can be resolved by Congress passing laws or by the Supreme Court resolving the contradiction between the different circuit courts.

Perhaps agencies consistently overregulate. And when it comes to AI, overregulation is preferable to underregulation, whereas for most other fields the opposite is true.

Governments don't consistently over-regulate; they consistently regulate poorly. For example, cracking down on illegal skateboarding but not on shoplifting or public consumption of drugs.

In AI, the predictable outcome of this is lots of regulations about AI bias and basically nothing that actually helps notkilleveryone.

Furthermore, given the long history of government regulation having unintended consequences as a result of companies and private individuals optimizing their actions to take advantage of the regulation, it might be the case that government overregulation makes a catastrophic outcome more likely.

For example, cracking down on illegal skateboarding but not on shoplifting

This seems like an exaggeration. Shoplifting is still literally illegal in San Francisco and you will be stopped by police and charged with a crime if you do it sufficiently often in the open. I agree with the general point that regulatory priorities are often misplaced, however.

If you look at environmental pollution, we might today have pesticide limits that amount to overregulation alongside a lack of regulation of microplastics.

Note that NickGabs doesn't necessarily think that at all. For example, I agree with the quoted paragraph NickGabs wrote, but also I don't expect the same federal bureaucrats who incompetently overregulate other industries to do a better job regulating AI.

Yeah, I think they will probably produce better and more regulation than if politicians were more directly involved, but I'm not super sanguine about bureaucrats in absolute terms.

I think they will probably produce better and more regulation than if politicians were more directly involved

Why do you think this?