It often seems to be assumed that the best way to prevent AGI ruin is through AGI alignment, but this isn't obvious to me. Do you think we need to use AGI to prevent AGI ruin?

Here's a proposal (there are almost certainly better ones): because of the large amount of compute required to create AGI, governments could enact strict regulation to prevent AGI from being created. Of course, the compute required to create AGI probably goes down every year, but this buys a lot of time, during which one might be able to enact more careful AI regulation or pull off a state-sponsored, AGI-powered pivotal act project.

It seems very unlikely that any one AI organization will be years ahead of everyone else on the road to AGI, so one of the main policy challenges is to make sure that every organization that could deploy an AGI and cause ruin somehow decides not to do so. Getting all of these organizations to hold off on deploying AGI seems easier to pull off than preventing AGI via government regulation, though perhaps not by much, and the benefit of not having to solve alignment seems very large.

The key downside I see in this path is that it strictly cuts off the option of using an AGI to perform the pivotal act, because government regulation would prevent that AGI from being built. And an AGI that is merely prevented, rather than aligned, means we are still on the precipice: a later AGI ruin might arise that is harder to prevent, or another x-risk could strike first. But it's not clear to me which path gives a higher likelihood of success.