Do you treat the coordination problem as a "non-alignment" problem? For example, creating an international AI treaty to pause or regulate AI development seems infeasible under current geopolitical and market conditions — is that the kind of problem you mean?
Ah yeah, I figured it out. I believe work on international collaboration is part of your solution — it's needed both to pause frontier AI development and to steer ASI development.