Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

    Thanks to Kate Woolverton and Richard Ngo for useful conversations, comments, and feedback.

    EA and AI safety have invested a lot of resources into building our ability to get coordination and cooperation between big AI labs. So far, however, despite that investment, it doesn’t seem to me like we’ve had that many big coordination “wins” yet. I don’t mean to imply that our efforts have failed, however—the obvious alternative hypothesis is simply that we don’t really have that much to coordinate on right now, other than the very nebulous goal of improving our general coordination/cooperation capabilities.

    In my opinion, however, our lack of clear wins is actually a pretty big problem—and not just because I think there are useful things that we can plausibly coordinate on right now, but also because I expect our lack of clear wins now to limit our ability to get the sort of cooperation we need in the future.

    In the theory of political capital, it is a fairly well-established fact that “Everybody Loves a Winner.” That is: the more you succeed at leveraging your influence to get things done, the more influence you get in return. This phenomenon is most thoroughly studied in the context of the ability of U.S. presidents to get their agendas through Congress—contrary to a naive model that might predict that legislative success uses up a president’s influence, what is actually found is the opposite: legislative success engenders future legislative success, greater presidential approval, and long-term gains for the president’s party.

    I think many people who think about the mechanics of leveraging influence don’t really understand this phenomenon and conceptualize their influence as a finite resource to be saved up over time so it can all be spent down when it matters most. But I think that is just not how it works: if people see you successfully leveraging influence to change things, you become seen as a person who has influence, has the ability to change things, can get things done, etc. in a way that gives you more influence in the future, not less.

    Of course, you do have to actually succeed to make this work—if you try to spend your influence to make something happen and fail, you get the opposite effect. This suggests the obvious strategy, however, of starting with small but nevertheless clear coordination wins and working our way up towards larger ones—which is exactly the strategy that I think we should be pursuing.[1]


    1. In that vein, in a follow-up post, I will propose a particular clear, concrete coordination task that I think might be achievable soon given the current landscape, would generate a clear win, and that I think would be highly useful in and of itself. ↩︎

    [-]Rohin Shah (2y)

    EA and AI safety have invested a lot of resources into building our ability to get coordination and cooperation between big AI labs.

    Wait, really? Can you name some examples? I thought this was mostly being left to the big AI labs. Maybe I should be talking to the people investing these resources.

    The one big coordination win I recall us having was the 2015 Research Priorities document that among other things talked about the threat of superintelligence. The open letter it was an attachment to was signed by over 8000 people, including prominent AI and ML researchers.

    And then there's basically been nothing of equal magnitude since then.

    Is the best way to suggest how to do political and policy strategy, or coordination, to post it publicly on LessWrong? This seems obviously suboptimal, and I'd think that you should probably ask for feedback and look into how to promote cooperation privately first.

    That said, I think everything you said here is correct on an object level, and worth thinking about.

    [-]evhub (2y)

    I'd think that you should probably ask for feedback and look into how to promote cooperation privately first.

    I have done this also.

    Agreed, though I think there's an additional factor to consider: what goes into ensuring that you succeed. I view it in terms of power expansion and power consolidation.

    When you try to get something unusual done, you "stake" some amount of your political capital on this. If you win, you "expand" the horizon of the socially acceptable actions available to you. You start being viewed as someone who can get away with doing things like that, you get an in with more powerful people, people are more tolerant of you engaging in more disruptive action.

    But if you try to immediately go for the next, even bigger move, you'll probably fail. You need buy-in from other powerful actors, some of whom have probably only now become willing to listen to you and entertain your more extreme ideas. You engage in politicking with them, arguing with them, feeding them ideas, establishing your increased influence and stacking the deck in your favor. You consolidate your power.

    Then you stake it to expand your action-space even more, and so on.

    I agree that we need clear wins, but I also think that most people in the AI Safety community agree that we need clear wins. Would you be interested in taking ownership of this, speaking with various people in the community, and writing up a blog post with what you think would characterize a clear action plan, with transparent benchmarks for progress? I think this would be very beneficial, both on the Alignment side and the Governance side.

    In that vein, in a follow-up post, I will propose a particular clear, concrete coordination task that I think might be achievable soon given the current landscape, would generate a clear win, and that I think would be highly useful in and of itself.

    I'm looking forward to your follow-up post.

    I disagree with the conclusion of this post, but still found it a valuable reference for a bunch of arguments I do think are important to model in the space.

    Maybe we should see if, out of the population of those that need to coordinate, we can convince several of them to try to pair up and coordinate with one other member of the same population. It's a small start, but it's a start.

    EA and AI safety have invested a lot of resources into building our ability to get coordination and cooperation between big AI labs.


    Are you having any luck finding cooperation with Russian, Chinese, Iranian and North Korean labs?

    [-]lc (2y)

    Are you having any luck finding innovative Russian, Chinese, Iranian, or North Korean labs?

    Upvoted because I think this comment is a reasonable question, and shouldn't be getting this many downvotes. Your latter comment in the thread wasn't thought-provoking, as it felt like a non-sequitur, though still not really something I'd downvote. I would encourage you to share your model for why a lack of cooperation with labs within three likely-inconsequential-to-AI states and one likely-consequential-to-AI state implies that well-intentioned intellectuals in the West aren't likely to have control over the future of AI.

    After all, a substantial chunk of the most capable AI companies take alignment risks fairly seriously (DeepMind, OpenAI sort-of), and I mostly think AGI will arrive in a decade or two. Given that Chinese companies don't seem interested in building AGI, that they still aren't producing research as high-quality as the West's, and that China's economic growth is slowing, I think it probable the West will play a large role in the creation of AGI.

    It's not a reasonable question because the premise of the OP is that there currently isn't any cooperation no matter the nationality. 

    It also ignores that the Chinese Communist Party does take actions in regard to AI safety, and that practically matters more than any cooperation with North Korean AI labs.

    There's an odd background framing that implies that somehow the Chinese don't care about the public good while Westerners do. The CCP is perfectly willing to engage in heavy regulation of its tech industry provided it believes the regulation will protect the public good. There's much more potential for Chinese actors to depart from economic imperatives when their government believes that following them is a bad idea.

    OP writes that there have been no big cooperation wins, so a fortiori, there have been no big cooperation wins with the countries you mention.