LESSWRONG
The public supports regulating AI for safety

by Zach Stein-Perlman
17th Feb 2023
AI Impacts
9 comments, sorted by top scoring
Noosphere89 · 2y

This is good news, with caveats; still, it's very much worth keeping in mind.

One major caveat is that this support for AI safety should be read as a maximum rather than a minimum or an average: once specific policies actually come up for debate, support starts to waver. So far there has been basically no adversarial stress test of this support.

postjawline · 2y

Sure, this is a useful poll, but I'm not so sure that the public understands AI.

> Perhaps there is an opportunity to improve the public’s beliefs, attitudes, and memes and frames for making sense of AI.

Yes, I strongly agree. I think someone should focus their efforts on providing simple, easy-to-understand explanations of how AI works, in collaboration with key players, so the public comes to a decent understanding. Not to be elitist, but I don't think public opinion is a useful metric for making AI policy; relying on it could harm efforts moving forward in multiple areas.

Aiyen · 2y

Regulation in most other areas has been counterproductive.  In AI, it will likely be even more so:  there's at least some understanding of e.g. medicine by both the public and our rulers, but most people have no idea about the details of alignment.  

This could easily backfire in countless ways.  It could drive researchers out of the field, it could mandate "alignment" procedures that don't actually help and get in the way of finding procedures that do, or it could create requirements for AIs to say what is socially desirable instead of what is true (ChatGPT is already notorious for this), making it harder to tell how the AI is actually functioning.

It is socially desirable to call for regulation as a solution for almost any problem you care to name, but it is practically useful far more rarely.  This is AI alignment.  This is potentially the future of humanity at stake, and all human values.  If we cannot speak the truth here, when will we ever speak it? 

There are, of course, potentially reasonable counterarguments.  Someone might believe that AI capabilities are more fragile than AI alignment, for instance, such that regulation would tend to slow capabilities without greatly hampering alignment, and that the time bought would give us a better chance of a good outcome.  Perhaps.  But please consider: are you calling for regulation because it actually makes sense, or because it's the Approved Answer to problems?

Please don't make this worse.

Zach Stein-Perlman · 2y

> But please consider: are you calling for regulation because it actually makes sense, or because it's the Approved Answer to problems?

I didn't call for regulation.

Some possible regulations would be good and some would be bad.

I do endorse trying to nudge regulation to be better than the default.

Aiyen · 2y

How do you propose nudging regulation to be better without nudging for more regulation?

Zach Stein-Perlman · 2y

Combating bad regulation would be the obvious way.

In seriousness, I haven’t focused on interventions to improve regulation yet; I just noticed a thing about public opinion and wrote it up. (And again, some possible regulations would be good.)

Aiyen · 2y

Combating bad regulation isn’t a solution, but a description of a property you’d want a solution to have.

Or more specifically, while you could perhaps lobby against particular destructive policies, this article is pushing for “helping [government actors] take good actions”, but given the track record of government actions, it would make far more sense to help them take no action. Pushing for political action without a plan to steer that action in a positive direction is much like pushing for AI capabilities without a plan for alignment… which we both agree is insanely dangerous.

The state is not aligned. That should be crystal clear from the medical and economic regulations that already exist. And bringing in a powerful Unfriendly agent into mankind’s efforts to create a Friendly one is more likely to backfire than to help.

Ebenezer Dukakis · 2y

How about regulating the purchase/rental of GPUs and especially TPUs?

For companies which already have GPU clusters, maybe we need data center regulation? Something like: The code only gets run on the data center if a statement regarding its safety has been digitally signed by at least N government-certified security researchers.
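The "at least N certified signers" condition above is essentially an N-of-M quorum check over signed attestations of a code hash. A minimal sketch of that check, with all names hypothetical: a real system would use asymmetric signatures (e.g. Ed25519) so reviewers never share keys, but HMAC is used here purely to keep the example dependency-free.

```python
# Sketch of the quorum idea: code is cleared to run only if its hash carries
# valid attestations from at least n_required certified reviewers.
import hashlib
import hmac

# Hypothetical registry of government-certified reviewers and their keys.
REVIEWER_KEYS = {
    "reviewer_a": b"key-a",
    "reviewer_b": b"key-b",
    "reviewer_c": b"key-c",
}

def attest(reviewer: str, code: bytes) -> bytes:
    """A reviewer signs the hash of the exact code they audited."""
    digest = hashlib.sha256(code).digest()
    return hmac.new(REVIEWER_KEYS[reviewer], digest, hashlib.sha256).digest()

def quorum_ok(code: bytes, attestations: dict, n_required: int) -> bool:
    """True iff at least n_required distinct certified reviewers signed this code."""
    digest = hashlib.sha256(code).digest()
    valid = sum(
        1
        for reviewer, sig in attestations.items()
        if reviewer in REVIEWER_KEYS
        and hmac.compare_digest(
            sig, hmac.new(REVIEWER_KEYS[reviewer], digest, hashlib.sha256).digest()
        )
    )
    return valid >= n_required

code = b"print('training run')"
sigs = {r: attest(r, code) for r in ("reviewer_a", "reviewer_b")}
assert quorum_ok(code, sigs, n_required=2)             # 2-of-3 quorum met
assert not quorum_ok(code, sigs, n_required=3)         # third attestation missing
assert not quorum_ok(b"tampered", sigs, n_required=2)  # hash mismatch rejected
```

Binding attestations to the hash means any change to the code invalidates every existing signature, so re-review is forced automatically.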

Ebenezer Dukakis · 2y

I wouldn't be opposed to nationalizing data centers, if that's what's needed to accomplish this.


A high-quality American public survey on AI, Artificial Intelligence Use Prompts Concerns, was released yesterday by Monmouth. Some notable results:

  • 9% say AI would do more good than harm vs. 41% more harm than good (similar to responses to a comparable survey in 2015)
  • 55% say AI could eventually pose an existential threat (up from 44% in 2015)
  • 55% favor “having a federal agency regulate the use of artificial intelligence similar to how the FDA regulates the approval of drugs and medical devices”
  • 60% say they have “heard about A.I. products – such as ChatGPT – that can have conversations with you and write entire essays based on just a few prompts from humans”

Worries about safety and support for regulation echo other surveys:

  • 71% of Americans agree that there should be national regulations on AI (Morning Consult 2017)
  • The public is concerned about some AI policy issues, especially privacy, surveillance, and cyberattacks (GovAI 2019)
  • The public is concerned about various negative consequences of AI, including loss of privacy, misuse, and loss of jobs (Stevens / Morning Consult 2021)

Surveys match the anecdotal evidence from talking to Uber drivers: Americans are worried about AI safety and would support regulation on AI. Perhaps there is an opportunity to improve the public’s beliefs, attitudes, and memes and frames for making sense of AI; perhaps better public opinion would enable better policy responses to AI or actions from AI labs or researchers.

Public desire for safety and regulation is far from sufficient for a good government response to AI. But it does mean that the main challenge for improving government response is helping relevant actors believe what’s true, developing good affordances for them, and helping them take good actions— not making people care enough about AI to act at all.
