Ultimately I think this leads to the necessity of very strong global monitoring, including breaking all encryption, to prevent hostile AGI behavior.
This may be the case, but I think there are other possible solutions, and I propose some early ideas of what they might look like in: https://www.lesswrong.com/posts/5nfHFRC4RZ6S2zQyb/risks-from-gpt-4-byproduct-of-recursively-optimizing-ais