[ Question ]

What are the best arguments and/or plans for doing work in "AI policy"?

by elityre · 1 min read · 9th Dec 2019 · 8 comments



I'm looking to get oriented in the space of "AI policy": interventions that involve world governments (particularly the US government) and existential risk from strong AI.

When I hear people talk about "AI policy", my initial reaction is skepticism, because (so far) I can think of very few actions that governments could take that seem to help with the core problems of AI x-risk. However, I haven't read much about this area, and I don't know what actual policy recommendations people have in mind.

So what should I read to start? Can people link to plans and proposals in AI policy space?

Research papers, general-interest web pages, and your own models are all admissible.

Thanks.



2 Answers

I'll post the obvious resources:

80k's US AI Policy article

Future of Life Institute's summaries of AI policy resources

AI Governance: A Research Agenda (Allan Dafoe, FHI)

Allan Dafoe's research compilation: probably just the AI section is relevant; some overlap with FLI's list.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (2018), Brundage, Avin, et al.: one of the earlier "large collaboration" papers I can recall; probably only the AI Politics and AI Ideal Governance sections are relevant for you.

Policy Desiderata for Superintelligent AI: A Vector Field Approach: Far from object-level, in Bostrom's style, but tries to be thorough in what AI policy should try to accomplish at a high level.

CSET's Reports: a very new AI policy org, but pretty exciting, as it's led by the former head of IARPA, so its recommendations probably have a higher chance of being implemented than those of the academic-think-tank reference class. Their work so far focuses on documenting China's AI developments and on US policy recommendations, e.g. making US immigration more favorable to AI talent.

Published documents can trail the thinking of leaders at these orgs by quite a lot. You might be better off emailing someone at the relevant orgs (CSET, GovAI, etc.) with your goals and your planned reading list, and asking what they would recommend, so you can catch up more quickly.

What? I feel like I must be misunderstanding, because it seems like there are broad categories of things that governments can do that are helpful, even if you're only worried about the risk of an AI optimizing against you. I guess I'll just list some, and you can tell me why none of these work:

  • Funding safety research
  • Building aligned AIs themselves
  • Creating laws that prevent races to the bottom between companies (e.g. "no AI with >X compute may be deployed without first conducting a comprehensive review of the chance of the AI adversarially optimizing against humanity")
  • Monitoring AI systems (e.g. "we will create a board of AI investigators; everyone making powerful AI systems must be evaluated once a year")

I don't think there's a concrete plan that I would want a government to start on today, but I'd be surprised if there weren't such plans in the future, once we know more (both from further research and as the AI risk problem becomes clearer).

You can also look at the papers under the category "AI strategy and policy" in the Alignment Newsletter database.