Yes, but it's not all about the way things are right now. It's about the way things could be, and how we can get there. I think we can agree that, even though capabilities researchers are not doing good, they do care about doing something good, or at least something that can be rationalized as "good" and perceived as good by others. Which means that, if the culture shifts so that those activities are no longer seen as good, and the rationalizations are seen for what they are, they may well change their activities. Or at least the next generation of researchers, who haven't yet locked into a particular worldview and career path, may not continue them.
Michael Kratsios has said recently, “We totally reject all efforts by international bodies to assert centralized control and global governance of AI.” What if the US government doesn't budge on this commitment? This is a plan B: shift the culture among academics so that frontier capabilities research in the private sector is widely frowned upon and the best people want to avoid the well-earned stigma associated with it. Sublimate the competition for capabilities into a competition for righteousness.
Thanks for your response. I was hoping for some way to make a difference through in-person interactions for a couple of reasons:
But you're right, online dialogue matters and shouldn't be dismissed.
I don't have any special political connections, so it might seem delusional for me to aim for very ambitious political goals like the ones proposed in the post. I figure, though, that "AI is going to kill us all, but here's something we can do about it that could really make a difference" is a message that can inspire one to take action, whereas "AI is going to kill us all, and there's not much we can do about it" is not.
And then you would do what with those resources? If the pause is just a proxy-goal for whatever those resources would be used for, why not make that thing the explicit goal instead of the pause?