(Cross-posted at The Futurati Podcast)
After reviewing Zvi's analysis of the Future of Life Institute's open letter, I decided to invite him on for a longer conversation about the issue. The result is episode 130 of the Futurati Podcast, "Should we halt progress in AI?"
Below are some notes from the episode. As always, it helps a lot if you like and share it.
What are similar episodes from the past we can learn from?
See "Technological restraint" in "Slowing AI: Reading list." (But you seem to be most interested in the strategic/military aspect of technology, and most sources on technological restraint don't focus on that.)
I suppose I'm interested in both, but that reference is very helpful. I'm also vaguely aware of some literature on what's called "private governance" that would be germane to this discussion.