I think you may be right that this is what people think of. It seems pretty incompatible with any open-source-ish vision of AGI. But what surprises me most is that people call supervision by humans dystopian/authoritarian, yet call the same supervision by an ASI (apparently able to see all your data, stop anyone from doing anything, subtly manipulate anyone, etc.) a utopia. What am I missing here?
Personally, by the way, I imagine a regulation regime to look like regulating a few choke points in the hardware supply chain, plus potentially limits on the hardware or data a person can possess. This doesn't require an authoritarian regime at all; it's just regular regulation of the kind we already have in many domains.
In any case, the point was: is something like this going to lead to <=1% x-risk? I think it isn't, and definitely not combined with a democratic/open-source AGI vision.
I strongly agree with Section 1. Even if we had aligned superintelligence, how are we going to make sure no one runs an unaligned superintelligence? A pivotal act? If so, which one? Or does defense trump offense? If so, why? Or are we still going to regulate heavily? If so, wouldn't the same regulation be able to stop superintelligence altogether?
Would love to see an argument landing at 1% p(doom) or lower, even if alignment were easy.
Recordings are now available!
Maybe it'll be "and now call GPT and ask it what Sam Altman thinks is good" instead
Thanks for the compliment. I'm not convinced, though, that this single example, assuming it's correct, generalizes.
Agree that those drafts are very important. I also think technical research will be required to find out which regulation would actually be sufficient (at present, I think we have no idea). I disagree, however, that waiting for a crisis (warning shot) is a good plan. There might not really be one. If there is one, though, I agree that we should at least be ready.
Thank you for writing this reply. It definitely improved my overview of possible ways to look at this issue.
I guess your position can be summarized as "positive offense/defense balance will emerge soon, and aligned AI can block following unaligned AIs entirely if required", is that roughly correct?
I have a few remarks about your ideas (not really a complete response).
> The necessity for enforcing a ban even after AGI development is essentially entirely about failures of technical alignment.
First, in general, I think you're underestimating the human component of alignment. Aligned AI should be aligned to something, namely humans. That means it won't be able to build an industrial base in space until we're ready to let it do so.
Even if we are not harmed by such a base in any way, and even if it would be legal to build it, I expect we may not be ready for it for a long time. It will be dead scary to see something develop that seems more powerful than us, but also deeply alien to us, even if tech companies insist it's 'aligned to our values'. Most people's response will be to rein in its power, not expand it further. Any AI that's aligned to us will need to take those feelings seriously.
Even if experts agreed that increasing the power of the aligned AI is good and necessary, and that expansion into space would be required for that, I think it would take a long time to convince the general public and/or decision makers, if it's possible at all. And in any remotely democratic alignment plan, that's a necessary step.
Second, I think it's uncertain whether a level of AI that's powerful enough to take over the world (and thereby cause existential risk) will also be powerful enough to build a large industrial base in space. If not, your plan might not work.
> The biggest barrier to extreme regulatory measures like a ban is doubt (both reasonable and unreasonable) about the magnitude of misalignment risk.
I disagree. In my experience of engaging with the public debate, doubt is mostly about AI capability, not about misalignment. Most people easily believe AI to be misaligned with them, but they have trouble believing it will be powerful enough to take over the world any time soon. I don't think alignment research will do much here.
First, I don't propose 'no AGI development'. If companies can create safe and beneficial AGIs (burden of proof is on them), I see no reason to stop them. On the contrary, I think it might be great! As I wrote in my post, this could e.g. increase economic growth, cure disease, etc. I'm just saying that I think that existential risk reduction, as opposed to creating economic value, will not (primarily) originate from alignment, but from regulation.
Second, the regulation that I think has the biggest chance of keeping us existentially safe will need to be implemented with or without aligned AGI. Even with aligned AGI (barring a pivotal act), there will be an abundance of unsafe actors who could run the AGI without safety measures (possibly by mistake). That's why the labs themselves propose regulation to keep almost everyone but themselves from building such AGI. The regulation required to do that is almost exactly the same.
Third, I'm really not as negative as you are about what it would take to implement such regulation. I think we'll keep our democracies, our freedom of expression, our planet, everyone we love, and we'll be able to go anywhere we like. Some industries and researchers will not be able to do some things they would have liked to do because of regulation, but that's not at all uncommon. And of course, we won't have AGI as long as it isn't safe. I think that's a good thing.
Thanks Oliver for adding that context, that's helpful.
I don't disagree. But I do think people dismissing the pivotal act should come up with an alternative plan that they believe is more likely to work. Because the problem is still there: "How can we make sure that no one, ever, builds an unaligned superintelligence?" My alternative plan is regulation.