Motivation: some people on the internet seem confused about why libertarians (and others generally suspicious of government intervention) are advocating for regulation to help prevent x-risk from AI.  The arguments here aren't novel and generally seem pretty obvious to me, but I wanted a quick reference.


Who is advocating for regulations?

This is obviously the first question: is the premise correct?  Are libertarians uncharacteristically advocating for regulations?

Libertarians probably aren't the loudest or most numerous among the people advocating for regulations out of a desire to reduce AI x-risk, but I'm one such person and know multiple others.

I tend to sort those advocating for regulations into three buckets.

Non-libertarians

Probably the biggest group, though I have wide error bars here.  Most of the people I know working on the policy & comms side are either liberal, apolitical, or not usefully described by a label like that.  I think people in this group tend to have somewhat more optimistic views on how likely things are to go well, but this isn't a super strong correlation.

Libertarians

Although some of them are focusing primarily on policy & comms, those I know more often dual-class with a technical primary & comms secondary.  They tend to have more pessimistic views on our likelihood of making it through.

Eliezer Yudkowsky

I could have put him in the "libertarian" group, but he has a distinct and specific policy "ask".  He's also often misquoted or misunderstood.


The basic argument

There are many arguments for why various regulations in this domain might seem compelling from a non-libertarian point of view, and people generally object to those on different grounds (e.g. accusations of regulatory capture, corruption, tribalism), so I'll skip trying to make their case for them.

Why might libertarians advocate for (certain kinds of) regulation on AI, given their general distrust of government?

Straightforwardly, the government is not a malicious genie optimizing for the inverse of your utility function, should you happen to ask it for a favor.  Government interventions tend to come in familiar shapes, with a relatively well-understood distribution of likely first- and second-order effects.

If you're mostly a deontological libertarian, you probably oppose government interventions because they violate people's sovereignty; never mind the other consequences.

If you're mostly a consequentialist libertarian, you probably oppose government interventions because you observe that they tend to have undesirable second-order effects, often causing more harm than any good from the first-order effects[1].

But this is a contingent fact about the intersection between the effects of regulations and the values of libertarians.  Many regulations have the effect of slowing down technological progress and economic growth, by making it more expensive to operate a business, do R&D, etc.  Libertarians usually aren't fans, since, you know, technological progress and economic growth are good things.

Unless you expect a specific kind of technological progress to kill everyone, possibly in the next decade or few.  A libertarian who believes that the default course of AI development will end with us creating an unaligned ASI that kills us and eats the accessible lightcone is not going to object to government regulations on AI because "regulations bad".

Now, a libertarian who believes this, and is thinking sensibly about the subject[2], will have specific models about which regulations seem like they might help reduce the chance of that outcome, and which might hurt.  This libertarian is not going to make basic mistakes like thinking that the intent of the regulation[3] will be strongly correlated with its actual effects.

They will simply observe that, while there are certainly ways in which government regulation could make the situation worse, such as by speeding things up, very often the effect of regulations is to slow things down, instead.  The libertarian is not going to be confused about the likelihood that government-mandated evals will successfully catch and stop an unaligned ASI from being deployed, should one be developed.  They will not make the mistake of thinking that the government will miraculously solve ethics and philosophy, and provide us with neat guardrails to ensure that progress goes in the right direction.

To a first approximation, all they care about is buying time to solve the actual (technical) problem.

Eliezer's ask is quite specific, but targeted at the same basic endpoint: institute a global moratorium on AI training runs over a certain size, in order to (run a crash program on augmenting human intelligence, so that you can) solve the technical alignment problem before someone accidentally builds an unaligned ASI.

If you want to argue that these people are making a mistake according to their own values and starting from their premises, arguing that governments tend to mess up whatever they touch is a non sequitur.  Yes, they do - in pretty specific ways!

Arguments that might actually address the cruxes of someone in this reference class might include:

  • By default, risk is not as high as you think, so there is substantial room for government intervention to make x-risk more likely.
    • This does still require hashing out the likely distribution of outcomes, given various proposals.
  • The distribution of outcomes from government interventions is so likely to give you less time, or otherwise make it more difficult to solve the technical alignment problem, that there are fewer surviving worlds where the government intervenes as a result of you asking them to, compared to the counterfactual.
    • Keep in mind that if your position is something like "default doom, pretty overdetermined", you can think that government interventions will make things "worse" 90% of the time and still come out ahead, since those worlds were doomed anyways and it wasn't borderline (so you aren't losing much probability mass from other interventions that would have tipped you over into non-doomed worlds).  A toy calculation below illustrates this.
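
To make that concrete, here is a minimal sketch with made-up numbers (none of them are the author's; treating "does the intervention backfire" as independent of "was this world doomed by default" is also an assumption):

```python
# Toy numbers, purely illustrative: if doom is the strong default, an intervention
# that makes things "worse" most of the time can still raise P(survival), because
# the worlds it worsens were mostly doomed anyway.

p_doom_default = 0.95       # hypothetical: "default doom, pretty overdetermined"
p_backfire = 0.90           # hypothetical: intervention makes things "worse" this often
p_flip_good_to_doom = 0.01  # hypothetical: backfires rarely ruin an otherwise-surviving world
p_flip_doom_to_good = 0.30  # hypothetical: when the intervention helps, it helps decisively

p_survive_default = 1 - p_doom_default

p_survive_intervene = (
    p_backfire * p_survive_default * (1 - p_flip_good_to_doom)
    + (1 - p_backfire) * (p_survive_default + p_doom_default * p_flip_doom_to_good)
)

print(f"P(survive | no intervention) = {p_survive_default:.3f}")   # 0.050
print(f"P(survive | intervention)    = {p_survive_intervene:.3f}") # ~0.078
```

Under these (made-up) numbers the intervention makes things "worse" 90% of the time and still raises the survival probability from 5% to about 8%; the argument has to be about the actual distribution of effects, not about how often governments make things worse.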

Arguments that are not likely to be persuasive, since they rely on premises that most people in this reference class think are very unlikely to be true:

  • Involving the government increases risk of authoritarian value lock-in
    • Assumes that we are very likely to solve alignment in time, or that government involvement increases the probability of bad outcomes even if it increases the odds of solving the technical alignment problem[4].  (The deployment problem concern might be an interesting argument if made, but I haven't yet seen a convincing effort in that direction.)
  • Centralization of power (as is likely to result from many possible government interventions) is bad
    • I'm not sure what the actual argument here is.  I often see this paired with calls for more open-sourcing of various things.  Assumes, man, I don't even know, this isn't even trying to model any serious opposing perspective.
  • China!
    • HE'S STILL DEAD, JIM.

 

Thanks to Drake Thomas for detailed feedback.  Thanks also to Raemon, Sam, and Adria for their thoughts & suggestions.

  1. ^

    Which are often also negative!

  2. ^

    As always, this is a small minority of those participating in conversations on the subject.  Yes, I am asking you to ignore all the terrible arguments in favor of evaluating the good arguments.

  3. ^

    To the extent that it's meaningful to ascribe intent to regulations, anyways - maybe useful to instead think of the intent of those who were responsible for the regulation's existence.

  4. ^

    Example provided by Drake Thomas: someone could have 10-year timelines with a 90% chance of paperclips and a 10% chance of tech-CEO-led utopia, but government intervention leads to 20-year timelines, an 80% chance of paperclips and a 20% chance of someone managing to align an AI, with a 75% probability of authoritarian dystopia conditional on someone being able to align an AI.  Under this model, they wouldn't want regulations, because regulations move the good worlds from 10% to 5% even though the technical problem is more likely to get solved (the arithmetic is spelled out just below).
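
A minimal sketch spelling out the arithmetic in footnote 4, using only the hypothetical numbers given there:

```python
# Footnote 4's hypothetical numbers, spelled out.

# Without intervention: 10-year timelines.
p_good_no_intervention = 0.10            # tech-CEO-led utopia

# With intervention: 20-year timelines.
p_aligned = 0.20                         # someone manages to align an AI
p_dystopia_given_aligned = 0.75          # authoritarian lock-in, given alignment succeeds
p_good_with_intervention = p_aligned * (1 - p_dystopia_given_aligned)

print(p_good_no_intervention)    # 0.10
print(p_good_with_intervention)  # 0.05 -- the "10% to 5%" move in the footnote
```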

Comments

Who is advocating for regulations?

...

Non-libertarians...tend to have somewhat more optimistic views on how likely things are to go well

...

Libertarians...tend to have more pessimistic views on our likelihood of making it through.

This claimed correlation between libertarianism and pessimism seemed surprising to me until I noticed that, since we are conditioning on advocating-for-regulations, Berkson's Bias would make this correlation appear even in a world where libertarianism and pessimism were completely uncorrelated in the general population.

Berkson's Bias seems to be where you're getting a subset of people that are some combination of trait X and trait Y; that is, to be included in the subset, X + Y > threshold.  Here, "> threshold" seems to mean "willing to advocate for regulations".  It seems reasonably clear that "pessimism (about the default course of AI)" would make someone more willing to advocate for regulations, so we'll call that X.  Then Y is ... "being non-libertarian", I guess, since probably the more libertarian someone is, the more they hate regulations.  Is that what you had in mind?

I would probably put it as "Since libertarians generally hate regulations, a libertarian willing to resort to regulations for AI must be very pessimistic about AI."

Yeah, that seems like a plausible contributor to that effect.

Edit: though I think this is true even if you ignore "who's calling for regulations" and just look at the relative optimism of various actors in the space, grouped by their politics.
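
For illustration, here is a minimal simulation of that selection effect, with made-up numbers: it assumes pessimism and libertarianism are independent in the general population, and that someone ends up advocating for regulations only when their pessimism outweighs their libertarian aversion to regulation by some arbitrary threshold.

```python
# Toy Berkson's-bias simulation (hypothetical traits and selection rule).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
pessimism = rng.normal(size=n)       # trait X: pessimism about the default course of AI
libertarianism = rng.normal(size=n)  # trait Y: independent of X by construction

# Selection: advocate for regulations only if pessimism outweighs
# libertarian aversion to regulation by some threshold (threshold is arbitrary).
advocates = (pessimism - libertarianism) > 1.0

print(np.corrcoef(pessimism, libertarianism)[0, 1])                        # ~0 in the population
print(np.corrcoef(pessimism[advocates], libertarianism[advocates])[0, 1])  # clearly positive among advocates
```

Among the selected advocates, the more libertarian someone is, the more pessimistic they had to be to clear the threshold, so the two traits look correlated even though they're independent overall.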

Y'know, I didn't realize until reading this that I hadn't seen a short post spelling it out before. The argument was just sort of assumed background in a lot of conversations. Good job noticing and spelling it out.

A slightly surreal experience to read a post saying something I was just tweeting about, written by a username that could plausibly be mine.

Your argument with Alexandros was what inspired this post, actually.  I was thinking about whether or not to send this to you directly... guess that wasn't necessary.

Centralization of power (as is likely to result from many possible government interventions) is bad

 

Suppose that you expected AI research to rapidly reach the point of being able to build Einstein/Von Neumann level intelligence and thereafter rapidly stagnate.  In this world, would you be able to see why centralization is bad?

It seems like you're not doing a very good Ideological Turing Test if you can't answer that question in detail.

The question is not whether I can pass their ITT: that particular claim doesn't obviously engage with any cruxes that I or others like me have related to x-risk.  That's the only thing that section is describing.

I think maybe you misunderstand the word "crux".  A crux is a point where you and another person disagree.  If you're saying you can't understand why Libertarians think centralization is bad, that IS a crux and trying to understand it would be a potentially useful exercise.

If you're saying you can't understand why Libertarians think centralization is bad, that IS a crux and trying to understand it would be a potentially useful exercise.

I am not saying that.  Many libertarians think that centralization of power often has bad effects.  But trying to argue with libertarians who are advocating for government regulations because they're worried about AI x-risk by pointing out that government regulation will increase centralization of power w.r.t. AI is a non sequitur, unless you do a lot more work to demonstrate how the increased centralization of power works against the libertarian's goals in this case.

Arguments that might actually address the cruxes of someone in this reference class might include: [...]

The distribution of outcomes from government interventions is so likely to give you less time, or otherwise make it more difficult to solve the technical alignment problem, that there are fewer surviving worlds where the government intervenes as a result of you asking them to, compared to the counterfactual.

The thing I care more about is quality-adjusted effort, rather than time to solve alignment. For example, I'd generally prefer 30 years to solve alignment with 10 million researchers to 3000 years with 10 researchers, all else being equal. Quality of alignment research comes from a few factors:

  • How good current AIs are, with the idea being that we're able to make more progress when testing alignment ideas on AIs that are closer to dangerous-level AGI.
  • The number of talented people working on the problem, with more generally being better.

I expect early delays to lead to negligible additional alignment progress during the delay, relative to future efforts. For example, halting semiconductor production in 2003 for a year to delay AI would have given us almost no additional meaningful alignment progress. I think the same is likely true for 2013 and even 2018. The main impact would just be to delay everything by a year. 

In the future I expect to become more optimistic about the merits of delaying AI, but right now I'm not so sure. I think some types of delays might be productive, such as delaying deployment by requiring safety evaluations. But I'm concerned about other types of delays that don't really give us any meaningful additional quality-adjusted effort. 

In particular, the open letter asking for an AI pause appeared to advocate what I consider the worst type of delay: a delay on starting the training of giant models. This type of delay seems least valuable to me for two main reasons. 

The first reason is that it wouldn't significantly slow down algorithmic progress, meaning that after the pause ended, people could likely just go back to training giant models almost like nothing happened. In fact, if people anticipate the pause ending, then they're likely to invest heavily and then start their training runs on the date the pause ends, which could lead to a significant compute overhang, and thus sudden progress. The second reason is that, compared to a delay of AI deployment, delaying the start of a training run reduces the quality-adjusted effort that AI safety researchers have, as a result of preventing them from testing alignment ideas on more capable models.
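
A minimal sketch of the compute-overhang worry, with made-up numbers (the growth rate and pause length are assumptions, not anyone's forecast):

```python
# Toy model: if the inputs to training runs keep improving during a pause on
# large training runs, the first post-pause run can jump far past the old frontier.

growth_per_year = 4.0     # hypothetical: effective training compute available grows 4x/year
pause_years = 2
pre_pause_frontier = 1.0  # normalize the largest pre-pause run to 1

# Without a pause, the frontier scales up smoothly year by year.
frontier_no_pause = [pre_pause_frontier * growth_per_year**t for t in range(pause_years + 1)]

# With a pause, runs stay frozen at the old frontier while feasible compute keeps growing,
# so the first post-pause run can jump straight to the underlying trend.
first_post_pause_run = pre_pause_frontier * growth_per_year**pause_years

print(frontier_no_pause)     # [1.0, 4.0, 16.0] -- smooth scale-up
print(first_post_pause_run)  # 16.0 -- a one-step 16x jump the day the pause ends
```

The sketch only illustrates the discontinuity itself, not how large it would actually be.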

If you think that there are non-negligible costs to delaying AI from government action for any reason, then I think it makes sense to be careful about how and when you delay AI, since early and poorly targeted delays may provide negligible benefits. However, I agree that this consideration becomes increasingly less important over time.

For what it's worth, I think most people I know expect most professed values to be violated most of the time, and so they think that libertarians advocating for this is perfectly ordinary; the surprising thing would be if professed libertarians weren't constantly showing up advocating for regulating things.  Show, don't tell, in politics and ideology.  That's not to say professing values is useless, just that there's not an inconsistency to be explained here, and if I linked people in my circles to this post, they'd respond with an eye-roll at the suggestion that if only they were more libertarian they'd be honest - because the name is most associated with people using it to lie.


Maybe a consideration worth keeping in mind:

It seems plausible that people sometimes advocate for things in political realms even if they don't actually support them.

Strategically, it would make sense for an AI lab that opposes regulation threatening its business model to publicly support, or even call for, regulation in general, so as to seem more credible when opposing a specific regulatory proposal.  It can then say "We're not against regulation in general, in fact we've been calling for it for a long time, but this specific proposal is bad".