Comments

Regarding AGI R&D strategy and coordination questions: I've not seen one realistic proposal from "leading figures" in the field or from AI safety organisations. Beyond these people and organisations, I've seen even less thinking about it at all. Take the complete collapse of movement on the UN GGE on LAWS, which covers only a sliver of possible AI development and use: that should be the yardstick when thinking about AGI R&D strategy and coordination, and it has mostly failed.

I could find little trace of thinking about these problems on LW.

I tend to think that's because it doesn't happen that successfully outside of LW. 

A reminder that the post articulates that: 

"This argument is based on a pathway toward AGI. That is, while it will focus on the endpoint, where an AGI is created, it is likely that issues around resource distribution and relative power shifts within the international system caused by AI will come well before the development of AGI. The reason for focussing on the end point is the assumption that it would create an event horizon where the state that develops AGI archives runaway power over its rivals economically, culturally and militarily. But many points before this could be equally as valid depending on circumstances within the international system."

That being said, I agree you are entirely right that at some point after AGI the scale of advantage would become overwhelmingly large and possibly undefeatable, but that doesn't negate any of the points raised in the post. States will respond to the scenario, and in doing so will increase non-AGI risks.

"Conquering the planet is a relatively simple problem if you have exponential resources.  You just need an overwhelming number of defense weapons for every nuclear delivery vehicle in the hands of the combination of everyone else, and you need a sufficient number of robotic occupying troops to occupy all the land and I guess suicide drones to eliminate all the conventional armies, all at once.

This capacity is not likely to appear overnight, is it? And bear in mind this could potentially mean having to defeat more than one state at the same time. During the interregnum between achieving AGI and building up the capacity to do this, other states will react in their own self-interest. Yes, there is a possible scenario in which a state pre-builds the capacity to overwhelm every other state once it achieves AGI, but in doing so it will create a security dilemma; other states will respond, and this will also increase non-AGI risks.

"I would assume in scenarios where the advantage is large but temporary I am not sure there is a correct move other than attack.  By letting the rivals catch up you are choosing to let their values possibly control the future.  And their values include a bunch of things you don't like."

Indeed, but the same logic also applies to states worried about being attacked. How do you think they'll respond to this threat? 

The scenario only needs two states in competition with each other to work. The entire Cold War and its associated nuclear risks were driven by a bipolar world order. Therefore, by your own metric of three powers capable of this, the scenario is realistic. By three powers, I am assuming you mean China, the US and the UK? Or were you perhaps thinking of China, the US and the EU? The latter doesn't have nuclear weapons because it doesn't have an army, unless you were including the French nuclear arsenal in your calculation?

"By endgame I mean a single winner could likely conquer the rest by force. And might be compelled to do so." How would a winner conquer the rest by force if my scenario is unrealistic because of mutually assured destruction? 

It would seem as if part of your line of thinking is just reiterating my entire post. If there is disagreement about a single state ruling over everyone via the development of AGI, then other states will respond in kind; it is a classic security dilemma. States would seek to increase their security, even if MAD prevents certain forms of response. My final paragraph summarises this: "States could respond with increased conventional weapons systems or powerful AI systems to compensate for their lack of AGI. Given the potential power of AGI, it would make sense for this to be a first strike capability. This could increase non-AGI risks."

I'm not convinced this line of thinking works from the perspective of the structure of the international system. For example, not once are international security concerns mentioned in this post. 

My post here draws out some fundamental flaws in this thinking:
https://www.lesswrong.com/posts/dKFRinvMAHwvRmnzb/issues-with-uneven-ai-resource-distribution

I'll take the point about misuse not being clear, and I've made a three-word edit to the text to cover your point.

However, I do also state prior to this that: 

"This argument is based on a pathway toward AGI. That is, while it will focus on the endpoint, where an AGI is created, it is likely that issues around resource distribution and relative power shifts within the international system caused by AI will come well before the development of AGI." 


If anything, your post above bolsters my argument. If states do not share resources, they'll be in competition with each other to work toward AGI, and everything before it, creating risks. If they do share resources, they create risks. However, it is logical for disadvantaged states to increase other security risks, by restoring a balance of power, in response to the AGI ones. If state A manages to nail alignment research and develop AGI, my framework is still valid: state B may respond defensively if it doesn't have access to resources.

I've covered that; did you read it?

"The lack of resource distribution has a twofold problem:

  • There is a need for agreement on the distribution of AI resources. However, a wider diffusion of AI could increase the risk of misuse leading to a possible reduction in diffusion."

Your argument is that only certain states should develop AGI, and while that makes sense on one hand, you're not accounting for how others will react to the non-diffusion of AI. I'm not arguing for the wider distribution of AI; rather, I'm pointing out how others will react to being disadvantaged. Non-diffusion is also a dumb idea, since it will just cause increased competition between states and less regulation at the international level, therefore increasing the risks from AGI and AI in general.