Regulation and AI Risk

Edited by Kaj_Sotala, Niremetal, John_Maxwell, et al., last updated 30th Dec 2024

Regulation and AI Risk refers to the debate over whether regulation could be used to reduce the risks of Unfriendly AI, and over what forms of regulation would be appropriate.

Several authors have advocated that AI research be regulated, but have been vague on the details. Yampolskiy & Fox (2012) note that university research programs in the social and medical sciences are overseen by institutional review boards, and propose setting up analogous review boards to evaluate potential AGI research. To be successful, AI regulation would have to be global, since there is the potential for an AI arms race between nations. Partly because of this, McGinnis (2010) argues that the government should not attempt to regulate AGI development; rather, it should concentrate on providing funding to research projects intended to create safe AGI. Kaushal & Nolan (2015) point out that regulations on AGI development would give a speed advantage to any project willing to skirt them, and instead propose government funding (possibly in the form of an "AI Manhattan Project") for AGI projects meeting particular criteria.

While Shulman & Armstrong (2009) argue that the unprecedentedly destabilizing effect of AGI could give world leaders cause to cooperate more than usual, the opposite argument can be made as well. Gubrud (1997) argues that molecular nanotechnology could make countries more self-reliant and international cooperation considerably harder, and that AGI could contribute to such a development. AGI technology is also much harder to detect than, for example, nuclear technology: AGI research can be done in a garage, while nuclear weapons require substantial infrastructure (McGinnis 2010). On the other hand, Scherer (2015) argues that artificial intelligence could nevertheless be susceptible to regulation, due to the increasing prominence of governmental entities and large corporations in AI research and development.

Goertzel & Pitt (2012) suggest that for regulation to be enacted, there might need to be an AGI Sputnik moment: a technological achievement that makes the possibility of AGI evident to the public and policymakers. They note that after such a moment, it might not take very long for full human-level AGI to be developed, whereas the negotiations required to enact new kinds of arms control treaties would take considerably longer.

References

  • Ben Goertzel & Joel Pitt (2012): Nine Ways to Bias Open-Source AGI Toward Friendliness. Journal of Evolution and Technology, Vol. 22, Issue 1, pp. 116–141.
  • Mark Gubrud (1997): Nanotechnology and International Security. Fifth Foresight Conference on Molecular Nanotechnology.
  • Mohit Kaushal & Scott Nolan (2015): Understanding Artificial Intelligence. Brookings.
  • John McGinnis (2010): Accelerating AI. Northwestern University Law Review.
  • Matthew Scherer (2015): Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies. Harvard Journal of Law & Technology.
  • Carl Shulman & Stuart Armstrong (2009): Arms Control and Intelligence Explosions. European Conference on Computing and Philosophy.
  • Roman Yampolskiy & Joshua Fox (2012): Safety Engineering for Artificial General Intelligence. Topoi.

See also

  • AI arms race
  • AGI Sputnik moment
  • Existential risk
  • Unfriendly artificial intelligence
Posts tagged Regulation and AI Risk

  • Ways I Expect AI Regulation To Increase Extinction Risk (1a3orn)
  • AI labs' statements on governance (Zach Stein-Perlman)
  • Q&A on Proposed SB 1047 (Zvi)
  • Guide to SB 1047 (Zvi)
  • RTFB: California’s AB 3211 (Zvi)
  • [Linkpost] Chinese government's guidelines on AI (RomanS)
  • How major governments can help with the most important century (HoldenKarnofsky)
  • Middle Child Phenomenon (PhilosophicalSoul)
  • Let’s think about slowing down AI (KatjaGrace)
  • AGI in sight: our look at the game board (Andrea_Miotti, Gabriel Alfour)
  • Most People Don't Realize We Have No Idea How Our AIs Work (Thane Ruthenis)
  • Liability regimes for AI (Ege Erdil)
  • My takes on SB-1047 (leogao)
  • AI companies are unlikely to make high-assurance safety cases if timelines are short (ryan_greenblatt)
  • 2019 AI Alignment Literature Review and Charity Comparison (Larks)