Summary:

The touchstone of antitrust compliance is competition. To be legally permissible, any restraint of trade must have sufficient countervailing procompetitive justifications. Anticompetitive horizontal agreements such as boycotts (including an agreement to refuse to produce certain products) are usually per se illegal.

The “learned professions,” including engineering, frequently engage in somewhat anticompetitive self-regulation through professional standards. These standards are not exempt from antitrust scrutiny. Nevertheless, some Supreme Court opinions have held that certain forms of professional self-regulation that would otherwise receive per se condemnation may instead receive more lenient antitrust analysis under the “Rule of Reason,” which weighs procompetitive against anticompetitive effects to determine legality. To receive rule-of-reason review, such professional self-regulation would need to:

  1. Be promulgated by a professional body;
  2. Not directly affect price or output level; and
  3. Seek to correct some market failure, such as information asymmetry between professionals and their clients.

Professional ethical standards promulgated by a professional body (e.g., one comparable to the American Medical Association or the American Bar Association) that prohibit members from building unsafe AI could plausibly meet all of these requirements.

This paper does not argue that this would clearly win in court, that such an agreement would be legal, or even that it would survive rule-of-reason review. It argues only that there is a colorable argument for analyzing such an agreement under the Rule of Reason rather than under a per se rule. This could therefore be a plausible route to an antitrust-compliant horizontal agreement not to engineer AI unsafely.


My summary / commentary:

Often, AI safety proponents talk about things that might be nice, like agreements not to do dangerous things, and focus on questions of how to make those agreements in everyone's interest, how to measure compliance with them, and so on. Often these hopes take the shape of voluntary agreements adopted by professional organizations, or by large companies that jointly dominate a field. [In my personal view, it seems more likely that we can convince AI engineers and researchers than legislators to adopt sensible policies, especially in the face of potentially rapid change.]

This paper asks the question: could such agreements even be legal? What underlying factors drive legality, so that we could structure the agreements to maximize the probability that they would hold up in court?

Overall, I appreciated the groundedness of the considerations, and the sense of spotting a hole that I might otherwise have missed. [I'm so used to thinking of antitrust in the context of 'conspiracy against the public' that it didn't occur to me that a 'conspiracy for the public' might run afoul of the prohibitions, and yet once pointed out it seems definitely worth checking.]

An obvious followup question that occurs to me: presumably in order to be effective, these agreements would have to be international. [Some sorts of unsafe AI, like autonomous vehicles, mostly do local damage, but other sorts of unsafe AI, like autonomous hackers, can easily do global damage, and creators can preferentially seek out legal environments favorable to their misbehavior.] Are there similar sorts of obstacles that would stand in the way of global coordination?

I can't think of much that the government could do to seriously reduce the chance of UFAI, although taxing or banning GPUs (or all powerful computers) could help. (It wouldn't be easy for one government to put much of a dent in Moore's law.)

I don't think that the government is capable of distinguishing people that are doing actual safety work from people who say the word "safety" a lot.

If I write some code, how is the government even going to tell whether it's AI? Does this mean that any piece of code anywhere has to be sent to some government safety checkers before it can be run? Either that check needs to be automated, or it requires a vast amount of skilled labour, or you are making programming almost illegal.

If the ban on writing arbitrary code without safety approval isn't actively enforced, especially if getting approval is slow or expensive, then many researchers will run code now, with the intention of getting approval later if they want to publish (test your program until you've removed all the bugs, then send that version to the safety checkers).

There is no distinction legislators could draw between ordinary code and experimental AI designs that would be more than moderately inconvenient to circumvent.

The government can throw lots of money at anyone who talks about AI safety, and if they are lucky, as much as 10% of that money might go to people doing actual research.

They could legislate that all CS courses must have a class on AI safety, and maybe some of those classes would be any good.

I think that if Nick Bostrom were POTUS, he could pass some fairly useful AI safety laws, but nothing game-changing. I think that to be useful at all, the rules either need to be high-fidelity (carrying a large amount of information directly from the AI safety community) or too drastic for anyone to support them. "Destroy all computers" is a short and easily memorable slogan, but way outside the Overton window. Any non-drastic proposal that would make a significant difference can't be described in a slogan.