(Cross-posted at The Futurati Podcast)

After reviewing Zvi's analysis of the Future of Life Institute's open letter, I decided I wanted to have him on for a longer chat about the issue. The result is episode 130 of the Futurati Podcast, "Should we halt progress in AI?"

Below, I've included some notes from the episode. As always, it helps a lot if you like and share it.

Notes

The FLI Letter

  • Conflates two broad categories of danger: relatively near-term threats of societal disruption (deepfakes, economic dislocation) and relatively longer-term threats from misaligned AI. 
  • Given who the letter's signatories are, this must've been a deliberate, strategic decision to boost sign-on. 

What are similar episodes from the past we can learn from?

  • Nuclear weapons are the obvious example.
  • They could kill billions of people, either directly or through second-order effects.
  • There are similar coordination difficulties: leaders across the world worry that if they don't build their own nuclear weapons, other nations will, and that will leave them at a disadvantage.
  • There were moderately successful efforts at non-proliferation, though nukes remain an existential threat.
  • Other examples include similar agreements for biotech and chemical weapons.
  • (I'd add: this would be a useful future research direction. I might do it myself if I can find the time.)

What coordination mechanisms should be used for AI development?

  • The good news is that the ability to create truly cutting-edge AI systems is concentrated in just a handful of labs.
  • The AI Summer is a substantial opportunity: none of the relevant companies (Google, OpenAI, Anthropic) actually need to forgo major profits as part of a moratorium, since there's plenty of money to be made with already-existing technology.
  • All the major labs have people that at least somewhat appreciate the dangers.
  • There's also real value in merely starting a process of inter-company cooperation. Even if nothing much gets accomplished this time, it flexes the relevant muscles for when there's a real problem. 

Okay, but are these companies actually going to slow down?

  • It takes a lot of time for momentum to develop, and it seems unlikely that anyone is actually going to relinquish it.
  • But we also know that they aren't moving as quickly as they could be (the release of GPT-4 was delayed for months).
  • And some of them (DeepMind, Anthropic) were founded with specific charters emphasizing AI safety.
  • RLHF hammers down on the model's creativity quite a lot, and that was probably a decision made at least partly on the basis of PR considerations. Perhaps such considerations could continue to act as a restraining factor?
  • It also takes stupendous resources to train these models, and it's not clear how much more will be gained from future iterations. OpenAI (and others) might well be sighing in relief because they have a good excuse for pulling back.

Does pausing give China time to catch up?

  • Probably not.
  • China is substantially behind, owing to export controls and the iron hand of the CCP.
  • The only companies that can do this research are Google, OpenAI, Anthropic, and maybe Facebook (eventually).
  • China isn't going to catch up in 6 months.

What are China's incentives?

  • It's usually taken as a given that China will speed ahead, but it's not clear that's the case.
  • China doesn't need to race; their strategic interest lies in having everyone stop moving forward.
  • It's not crazy to think China would be on board with a moratorium. The biggest reason they need AI is that everyone else has it. 

Will we fall behind as a result of the moratorium?

  • It's doubtful. 
  • Work doesn't stop merely because you're not training a model.
  • There's tons of follow-up research: incorporating feedback, thinking of algorithmic tweaks, and integrating the models into other applications.
  • Zvi thinks the moratorium will ultimately delay the release of GPT-5 by less than 6 months.
  • Much of the intermediate low-hanging fruit will be in playing with GPT-4, finding out what it can do, developing better tooling for it, etc.
  • There are currently serious bandwidth limitations to GPT-4 anyway, so even if OpenAI had GPT-5 it's not clear they'd be able to make it publicly available right away.

Is Zvi hopeful about the future?

  • Broadly, yes. 
  • More people are trying harder than ever to understand these arguments.
  • The Alignment Research Center's tests wouldn't have caught the failure modes of GPT-4, but we now have more experience in evaluating these systems.
  • (I speculated that this might lend credence to Paul Christiano's claim that we'll be able to learn from alignment failure.)
  • Meanwhile, LLMs will be a huge boon for the economy. 
  • People are trying to make these things agents, and given their limited capacity we should be able to learn from whatever inevitable problems emerge from these efforts. 
Comments

What are similar episodes from the past we can learn from?

See "Technological restraint" in "Slowing AI: Reading list." (But you seem to be most interested in the strategic/military aspect of technology, and most sources on technological restraint don't focus on that.)

I suppose I'm interested in both, but that reference is very helpful. I'm also vaguely aware of some literature on what is called "private governance" that would be germane to this discussion.