otto.barten

Comments

Should we postpone AGI until we reach safety?

Richard, thanks for your reply. Just for reference, I think this goes under argument 5, right?

It's a powerful argument, but I think it's not watertight. I would counter it as follows:

  1. As stated above, I think the aim should be an ideally global treaty where no country is allowed to go beyond a certain point of research. The countries would then enforce the treaty on all research institutes and companies within their borders. You're right that in this case a criminal or terrorist group would have an edge. But seeing how hard it currently is even for legal and heavily funded groups to develop AGI, I'm not convinced that terrorist or criminal groups could easily do it. For reference, I read a paper by a lawyer this week on a concrete way to implement such a treaty. Signing such a treaty would not affect countries without effective AGI research capabilities, so they would have no reason not to sign, and they would benefit from the increased existential safety. The countries least inclined to sign up will likely be those trying to develop AGI now. So effectively, I think a global treaty and a US/China deal would amount to roughly the same thing.
  2. You could make the same argument about taxation, (unprofitable) climate action, R&D, defense spending against a common enemy, and probably many other issues. Does that mean we have zero tax, climate action, R&D, or defense? No, because at some point countries realize it's better not to be the relative winner than for everyone to lose. In many cases this is then formalized in treaties, with varying but nonzero success. I think that could work here as well. Your argument is indeed a problem in all of the fields I mention, so you have a point, but fortunately I don't think it's a decisive one.
otto.barten's Shortform

Minimum hardware leads to maximum security. As a lab or a regulatory body, one can increase the safety of AI prototypes by reducing the hardware or the amount of data researchers have access to.
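To make the idea a bit more concrete, here is a minimal sketch of how a lab or regulator might enforce such limits at the point where a training run is requested. Everything in it (the cap values, the `RunRequest` fields, the `approve` function) is a hypothetical illustration I'm adding for clarity, not something proposed in the note above.

```python
# Hypothetical sketch: gate prototype training runs behind hardware and data caps.

from dataclasses import dataclass


@dataclass
class RunRequest:
    name: str
    train_flops: float    # total training compute requested
    dataset_tokens: int   # amount of training data requested
    accelerators: int     # number of GPUs/TPUs requested


# Illustrative caps a review board could set for unreviewed prototypes.
MAX_TRAIN_FLOPS = 1e22
MAX_DATASET_TOKENS = 200_000_000_000
MAX_ACCELERATORS = 64


def approve(run: RunRequest) -> bool:
    """Approve a run only if it stays within every cap."""
    return (
        run.train_flops <= MAX_TRAIN_FLOPS
        and run.dataset_tokens <= MAX_DATASET_TOKENS
        and run.accelerators <= MAX_ACCELERATORS
    )


if __name__ == "__main__":
    small_run = RunRequest("prototype-A", 5e21, 100_000_000_000, 32)
    big_run = RunRequest("prototype-B", 3e23, 1_000_000_000_000, 512)
    print(approve(small_run))  # True: within all caps
    print(approve(big_run))    # False: exceeds compute and data caps
```

The point of the sketch is only that the constraint is easy to state and check mechanically; the hard part, as discussed elsewhere in these comments, is getting labs or countries to accept such caps in the first place.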

Should we postpone AGI until we reach safety?

My response to counterargument 3 is summarized in this plot, for reference: https://ibb.co/250Qgc9

Basically, this would only be an issue if postponement cannot be kept up until risks are sufficiently low, and if take-off would be slow even without a postponement intervention.

Should we postpone AGI until we reach safety?

Interesting line of thought. I don't know who would do it or how, but I still think we should already consider whether it would be a good idea in principle.

Can I restate your idea as 'we have a certain amount of convinced manpower, and we should use it for the best purpose, which is AI safety'? I like that way of thinking, but I still think we should use some of it to look into postponement. Arguments:

- The vast majority of people are unable to contribute meaningfully to AI safety research. Of course all these people could theoretically do whatever makes the most money and then donate to AI safety research, but most will not do that in practice. I think many of these people could be used for the much more generic task of convincing others about AI risks, and also for arguing for postponement. As an example, I once saw a project with the goal of teaching children about AI safety which claimed it could not continue for lack of $5000 of funding. I think there's a vast sea of resource-constrained possibility out there once we decide that telling everyone about AI risk is officially a good idea.

- Postponement weirdly seems to be a neglected topic within the AI safety community (out of a dislike of regulation, I guess), but also outside the community (for lack of insight into AI risk). I think it's a lot more neglected at this point than technical AI safety, which is perhaps also niche, but at least has its own institutes already looking at it. Since it looks important and neglected, I think an hour spent on postponement is probably better spent than an hour on AI safety, unless perhaps you're a talented AI safety researcher.

Should we postpone AGI until we reach safety?

Thanks for that comment! I didn't know of Bill McKibben, but I read up on his 2019 book 'Falter: Has the Human Game Begun to Play Itself Out?'; I'll post a review as a separate post later. I appreciate your description of what the scene was like back in the 90s or so, that's really insightful. It was also interesting to read about nanotech; I never knew these concerns were historically so coupled.

But having read McKibben's book, I still can't find others on my side of the debate. McKibben is indeed the first person I know of who both recognizes AGI danger and does not believe in a tech fix, or at least does not consider it a good outcome. However, I would expect him to cite others on his side of the debate. Instead, in the sections on AGI, he cites people like Bostrom and Omohundro, who are not postponists in any way. So my current guess is that a 'postponement side' of this debate is simply absent, and that McKibben just happened to know Kurzweil, who got him personally concerned about AGI risk. If that's not true and there are more voices out there exploring AGI postponement options, I'd still be happy to hear about them. Also, if you could find links to old discussions, I'm interested!

Should we postpone AGI until we reach safety?

Thanks for your insights, Adam. If every AGI researcher is in some sense in favor of halting AGI research, I'd like to get more confirmation of that. What are their arguments? Would those arguments also apply to non-AGI researchers?

I can imagine that the combination of Daniel's points 1 and 2 stops AGI researchers from speaking out on this. But for non-AGI researchers, why not explore something that looks difficult but may have existential benefits?

Should we postpone AGI until we reach safety?

I agree, and thanks for bringing some nuance into the debate. I think that would be a useful path to explore.

Should we postpone AGI until we reach safety?

I'm imagining an international treaty, national laws, and enforcement by the police. That's a serious proposal.

Should we postpone AGI until we reach safety?

I think a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has.

Should we postpone AGI until we reach safety?

I appreciate the effort you took in writing a detailed response. There's one thing you say that I'm particularly interested in, for personal reasons. You say 'I've been in or near this debate since the 1990s'. That suggests there are many people who share my opinion. Who? I would honestly love to know, because frankly it feels lonely. Everyone I've met so far, without a single exception, is either not afraid of AI existential risk at all, or believes in a tech fix and is against regulation. I don't believe in the tech fix, because as an engineer I've seen how much of engineering is trial and error (and science even more so). People have ideas, try them, it goes boom, and then they try something else, until they get there. If we do that with AGI, I think it's sure to go wrong. That's why I think at least some kind of policy intervention is mandatory, not optional. And yes, it will be hard. But no argument I've heard so far has convinced me that it's impossible, or that it's counterproductive.

I think we should first answer the question: would postponement until safety is reached be a good idea if it were implementable? What's your opinion on that one?

Also, I'm serious: who else is on my side of this debate? You would really help me personally by letting me talk to them, if they exist.
