I have read a lot of agreement with the six-month pause letter and very little discussion of the details of what it would actually do.

The letter says: 'we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.'

Most AI labs are unlikely to develop anything more powerful than GPT-4 in the next six months anyway, so they would likely continue with business as usual. Even if OpenAI waited six months before training GPT-5, it would still do a lot of capabilities research during those six months.

Projects like AutoGPT, which are not about training big models, would still be developed the same way they are now.


ryan_b

Apr 15, 2023

71

You know how a common trick for solving a really tough math problem is to build a simpler toy problem and solve that first?

The FLI letter is a toy coordination problem.

  • It provides a Schelling point around which to organize public conversation and a starting point from which to apply public pressure.

  • It likewise serves as a way for people inside these companies, and especially their leaders who are otherwise trapped in race dynamics, to have a basis on which to argue for slowing down.

  • It would delay, by six months, innovations that require larger training runs. Note that it would not delay other kinds of innovation, such as those from fine-tuning, longer training runs at the same scale, or algorithmic improvements.

  • It is a straightforward goal that is plausible to achieve.

These bullet points are my impression of the goals, based on comments by Max Tegmark on the subject. As I said, I think of this as a kind of toy coordination problem. The way this cashes out in reality for me is:

  1. Under the current circumstances, I do not believe alignment research can beat capabilities research to catastrophic success.

  2. In order to buy time for alignment to succeed, we need to coordinate to slow down capabilities research.

  3. We currently have no basis of coordination by which to do that.

In general, coordination is a positive feedback loop: the more any set of groups coordinates, the more coordination that set can do. Therefore I expect something that is actually real but still mostly symbolic to be a good starting point for trying to build coordination among this new, extended set of groups.

In sum, what it will actually do is make it easier to do things in the future.

Charlie Steiner

Apr 15, 2023

50

I expect it would slow down the scaling of compute infrastructure. If you're not training any large models for six months, you'll probably put less capital investment into the capability to train super-large models.

I don't think the effective time lost on the scaling trajectory would be the full six months, but I wouldn't be surprised at a 3-month effective impact.

A 3-month delay in human extinction is about two billion QALYs and would be worth about $2 trillion in GiveWell donations at $1k/QALY.

If it took two million signatures to come into effect, then signing the letter would be worth a million dollars.
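
A minimal back-of-envelope sketch of that arithmetic, assuming a world population of roughly 8 billion; the delay, QALY price, and signature count are the comment's stated figures, and the variable names are purely illustrative:

```python
# Back-of-envelope check of the numbers above. All inputs are assumptions
# taken from the comment (or chosen for illustration), not from the FLI letter.
population = 8e9              # roughly 8 billion people alive today
delay_years = 0.25            # assumed 3-month effective delay of extinction
usd_per_qaly = 1_000          # assumed GiveWell-style cost per QALY
signatures_needed = 2e6       # hypothetical number of signatures required

qalys = population * delay_years                            # ~2e9 QALYs
total_value_usd = qalys * usd_per_qaly                      # ~$2e12, i.e. ~$2 trillion
value_per_signature = total_value_usd / signatures_needed   # ~$1e6 per signature

print(f"QALYs gained: {qalys:.2e}")
print(f"Total value: ${total_value_usd:.2e}")
print(f"Value per signature: ${value_per_signature:,.0f}")
```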

4 comments

My impression of the proposed FLI Moratorium is that it is more about establishing a precedent for a coordinated slowdown of capabilities development than about actually slowing down the current round of AI capabilities work. Think of it as being like the Kyoto Protocol (for better or worse...).

Will it actually slow down AI capabilities in the short term? No.

Will it maybe make it more likely that a later moratorium with more impact and teeth will get widespread adoption? Maybe.

Would a more ambitious proposal have been possible now? Unclear. 

Is the FLI Moratorium already (as weak as it is) too ambitious to be adopted? Possibly. 

Insofar as the clearest analogue to this is something like the (ineffectual) Kyoto Protocol, is that encouraging? Hell no. 

"Is the FLI Moratorium already (as weak as it is) too ambitious to be adopted?"

There are many possible reasons besides being "too ambitious" for a proposal not to be adopted.

If I imagine that I'm in charge of OpenAI or Google and could make a move that harms my business interests while doing nothing for safety beyond being a virtue signal, why would I adopt it?

If my lab is "unlikely to develop anything more powerful than GPT-4 in the next six months anyway", then it is also unlikely that the pause harms my business interests.

If I could make a move that signals virtue and doesn't harm my business interests, why would I reject it?

[This comment is no longer endorsed by its author]

When it comes to players that are open about the work they are doing, I think Google and OpenAI might develop models more powerful than GPT-4 in the relatively near future.

If OpenAI develops GPT-5 a few months later, it might mean less profit for those months from ChatGPT and their API service. The situation is likely similar for Google.

Other actors that might train a model stronger than GPT-4 include the NSA or Chinese companies. FLI seems to have decided against encouraging Chinese companies to join, for example by not taking simple steps like publishing a Chinese version of the letter. The NSA is very unlikely to say publicly whether or not it is training a model, and it certainly would not allow the kind of transparency into what models it is building that the letter calls for.

"If I could make a move that signals virtue and doesn't harm my business interests, why would I reject it?"

Because someone might prefer a climate where AI safety actions are aimed at actually producing AI safety rather than at virtue signaling?

In an environment where most actions are taken in the name of virtue signaling, it's easy for all actions to be perceived as virtue signaling.