Request to AGI organizations: Share your views on pausing AI progress

by Orpheus16, simeon_c
11th Apr 2023
2 min read
11 comments
TW123

At the time of this post, the FLI letter has been signed by 1 OpenAI research scientist, 7 DeepMind research scientists/engineers, and 0 Anthropic employees. 

"1 OpenAI research scientist" felt weird to me on priors. 0 would make sense, if the company gave some guidance (e.g. legal) not to sign, or if the unanimous opinion was that signing was a bad idea. 7 makes sense too -- it's about what I'd expect from DeepMind, and it shows there's a small contingent of people really worried about risk. Exactly 1 is really weird -- there are definitely multiple risk-conscious people at OpenAI, but exactly one of them decided to sign?

I see a "Yonas Kassa" listed as an OpenAI research scientist, but it's very unclear who this person is. I don't see any LinkedIn or Google Scholar profile under this name associated with OpenAI. I know many of the earlier signatures were inaccurate, so I wonder if this one is too.

Anyway, my guess is that actually zero OpenAI researchers signed, and that both OpenAI and Anthropic employees have decided (as a collective? because of a top-down directive? for legal reasons? I have no idea) not to sign.

Evan R. Murphy

There are actually 3 signatories now claiming to work for OpenAI.

WilliamKiely

Demis Hassabis answered the question "Do you think DeepMind has a responsibility to hit pause at any point?" in 2022:

https://www.lesswrong.com/posts/vEJAFpatEq4Fa2smp/hooray-for-stepping-out-of-the-limelight?commentId=x8DZswktu3WtfyzFR

Raemon

I actually don't know that I think this is helpful to push for now.

I do wish a "good version" of this would happen soon, but I think the version you'd be likely to get is one shaped by weird reputational concerns: the labs don't want to be seen by their investors as failing to race ahead and make progress as fast as possible (since their investors don't understand the degree of danger involved).

(There's also the fact that, since they're labs pursuing AI in the first place, leadership would probably just have takes on pausing that (in my opinion) don't make sense.)

And then, once having written a public statement on it, they'd be more likely to stick to that public statement, even if nonsensical. 

I do generally wish more orgs would speak more freely (even when I disagree with them), and I separately wish something about their strategic thinking process was different (though I'm not sure exactly what their thought process is at the moment so not sure how I wish it were different). But both of those things seem like causal-nodes further up a chain than "whether they engage publicly on this particular issue."

Raemon

The related thing that I do wish orgs would issue statements on is: "What are the circumstances in which it would make sense to pause unilaterally, even though all the race conditions still apply, because your work has gotten too dangerous?" I.e., even if you think it's actually relatively safe to continue research and deployment now, if you're taking x-risk seriously as a concern, there should be some point at which an AGI model would be unsafe to deploy to the public, and a point at which it's unsafe even to be running new training runs.

Each org should have some model of when that point likely is, and I think even with my cynical-political-world-goggles on it should be to their benefit to say that publicly. 

Nathan Helm-Burger

I signed the letter because I think that such things are a useful way of moving the Overton window. In this case, I want the government and the public to start thinking about whether and how to regulate AI development. I might not have signed the letter if I thought that it would actually result in a pause, since I don't think a pause is the wisest strategic move at this point. I explain why here: https://www.lesswrong.com/posts/GxzEnkSFL5DnQEAsZ/paulfchristiano-s-shortform?commentId=hEQL7rzDedGWhFQye

kdbscott

I think it makes sense that the orgs haven't commented, as it would possibly run afoul of antitrust laws.

See for example when some fashion clothing companies talked about trying to slow down fashion cycles to produce less waste / carbon emissions, which led to antitrust regulators raiding their headquarters.

James Payor

Huh, does this apply to employees too? (à la "these are my views and do not represent those of my employer")

ChristianKl

On March 22, FLI published an open letter calling for a six-month moratorium on frontier AI progress.

I think it's a mistake to describe this as a call for a moratorium on "frontier AI progress" overall; it's about a subset of progress.

The letter says: "Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."

It explicitly says "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal" -- not that all AI research and development should stop.

WilliamKiely

I strongly agree with this request.

If companies don't want to be the first to issue such a statement then I suggest they coordinate and share draft statements with each other privately before publishing simultaneously.

Jan Kulveit

I think silence is a clearly sensible strategy for obvious reasons.

Crossposted to the EA Forum.

A few observations from the last few weeks:

  • On March 22, FLI published an open letter calling for a six-month moratorium on frontier AI progress.
  • On March 29, Eliezer Yudkowsky published a piece in TIME calling for an indefinite moratorium.
  • To our knowledge, none of the top AI organizations (OpenAI, DeepMind, Anthropic) have released a statement responding to these pieces.

We offer a request to AGI organizations: Determine what you think about these requests for an AI pause (possibly with uncertainties acknowledged), write up your beliefs in some form, and publicly announce your position. 

We believe statements from labs could improve discourse, coordination, and transparency on this important and timely topic. 

Discourse: We believe labs are well-positioned to contribute to dialogue around whether (or how) to slow AI progress, making it more likely for society to reach true and useful positions.

Coordination: Statements from labs could make coordination more likely. For example, lab A could say “we would support a pause under X conditions with Y implementation details”. Alternatively, lab B could say “we would be willing to pause if lab C agreed to Z conditions.”

Transparency: Transparency helps others build accurate models of labs, their trustworthiness, and their future actions. This is especially important for labs that seek to receive support from specific communities, policymakers, or the general public. You have an opportunity to show the world how you reason about one of the most important safety-relevant topics. 

We would be especially excited about statements that are written or endorsed by lab leadership. We would also be excited to see labs encourage employees to share their (personal) views on the requests for moratoriums. 

Sometimes, silence is the best strategy. There may be attempts at coordination that are less likely to succeed if people transparently share their worldviews. If this is the case, we request that AI organizations make this clear (example: "We have decided to avoid issuing public statements about X for now, as we work on Y. We hope to provide an update within Z weeks.")

At the time of this post, the FLI letter has been signed by 7 DeepMind research scientists/engineers, probably 0 OpenAI research scientists and 0 Anthropic employees. 

See also:

  • Let's think about slowing down AI
  • A challenge for AGI organizations, and a challenge for readers
  • Six dimensions of operational adequacy in AGI projects