I'm looking to get oriented in the space of "AI policy": interventions that involve world governments (particularly the US government) and existential risk from strong AI.

When I hear people talk about "AI policy", my initial reaction is skepticism, because (so far) I can think of very few actions that governments could take that seem to help with the core problems of AI x-risk. However, I haven't read much about this area, and I don't know what actual policy recommendations people have in mind.

So what should I read to start? Can people link to plans and proposals in AI policy space?

Research papers, general-interest web pages, and your own models are all admissible.

Thanks.


I'll post the obvious resources:

80k's US AI Policy article

Future of Life Institute's summaries of AI policy resources

AI Governance: A Research Agenda (Allan Dafoe, FHI)

Allan Dafoe's research compilation: Probably just the AI section is relevant; some overlap with FLI's list.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (2018), Brundage, Avin, et al.: One of the earlier "large collaboration" papers I can recall; probably only the AI Politics and AI Ideal Governance sections are relevant for you.

Policy Desiderata for Superintelligent AI: A Vector Field Approach: Far from object-level, in Bostrom's style, but tries to be thorough in what AI policy should try to accomplish at a high level.

CSET's Reports: Very new AI policy org, but pretty exciting, as it's led by the former head of IARPA, so their recommendations probably have a higher chance of being implemented than those of the academic-think-tank reference class. Their work so far focuses on documenting China's developments and on US policy recommendations, e.g. making US immigration more favorable for AI talent.

Published documents can trail the thinking of leaders at orgs by quite a lot. You might be better off emailing someone at the relevant orgs (CSET, GovAI, etc.) with your goals and what you plan to read, and asking what they would recommend, so you can catch up more quickly.

The "obvious resources" are just what I want. Thanks.

Also, this 80,000 Hours podcast episode with Allan Dafoe.

What? I feel like I must be misunderstanding, because it seems like there are broad categories of things that governments can do that are helpful, even if you're only worried about the risk of an AI optimizing against you. I guess I'll just list some, and you can tell me why none of these work:

  • Funding safety research
  • Building aligned AIs themselves
  • Creating laws that prevent races to the bottom between companies (e.g. "no AI with >X compute may be deployed without first conducting a comprehensive review of the chance of the AI adversarially optimizing against humanity")
  • Monitoring AI systems (e.g. "we will create a board of AI investigators; everyone making powerful AI systems must be evaluated once a year")

I don't think there's a concrete plan that I would want a government to start on today, but I'd be surprised if there weren't such plans in the future when we know more (both from further research and as the AI risk problem becomes clearer).

You can also look at the papers under the category "AI strategy and policy" in the Alignment Newsletter database.

As I said, I haven't oriented on this subject yet; I'm talking from my intuition, and I might be about to say stupid things. (I might also think different things on further thought. I'd say I 60% to 75% "buy" the arguments that I make here.)

I expect we have very different worldviews about this area, so I'm first going to lay out a general argument, which is intended to give context, and then respond to your specific points. Please let me know if anything I say seems crazy or obviously wrong.

General Argument

My intuition says that in general, governments can only be helpful after the core, hard problems of alignment have been solved. After that point, there isn't much for them to do, and before that point, I think they're much more likely to cause harm, for the sorts of reasons I outline in this comment.

(There is an argument that EAs should go into policy because the default trajectory involves governments interfering in the development of powerful AI, and having EAs in the mix is apt to make that interference smaller and saner. I'm sympathetic to that, if that's the plan.)

To say it more specifically: governments are much stupider tha...

Rohin Shah:
Idk, if humanity as a whole could have a justified 90% confidence that AI above a certain compute threshold would kill us all, I think we could ban it entirely. Like, why on earth not? It's in everybody's interest to do so. (Note that this is not the case with climate change, where it is in everyone's interest for them to keep emitting while others stop emitting.)

This seems probably true even if it was 90% confidence that there is some threshold over which AI would kill us all, that we don't yet know. In this case I imagine something more like a direct ban on most people doing it, and some research that very carefully explores what the threshold is.

A common way in which this is done is to get experts to help allocate funding, which seems like a reasonable way to do this, and probably better than the current mechanisms excepting Open Phil (current mechanism = how well you can convince random donors to give you money).

In the world where the aligned version is not competitive, a government can unilaterally pay the price of not being competitive because it has many more resources.

Also there are other problems you might care about, like how the AI system might be used. You may not be too happy if anyone can "buy" a superintelligent AI from the company that built it; this makes arbitrary humans generally more able to impact the world; if you have a group of not-very-aligned agents making big changes to the world and possibly fighting with each other, things will plausibly go badly at some point.

Telling what is / isn't safe seems decidedly easier than making an arbitrary agent safe; it feels like we will be able to be conservative about this. But this is mostly an intuition.

I think a general response to your intuition is that I don't see technical solutions as the only options; there are other ways we could be safe (1, 2).

Cruxes:
  • We're going to have clear, legible things that ensure safety (which might be "never build systems of this type").
  • Governmen

Note that research related to governments is just a part of "AI policy" (which also includes stuff like research on models/interventions related to cooperation between top AI labs and publication norms in ML).

Ok. Good to note.