It does seem that the Trachtenberg reference basically relies upon individual recollections (which I don't trust) and on the following extract from a 1944 letter by Szilard to Vannevar Bush (my bold):

Making some allowances for the further development of the atomic bomb in the next few years which we may take for granted will be made, this weapon will be so powerful that there can be no peace if it is simultaneously in the possession of any two powers unless these two powers are bound by an indissoluble political union. It would therefore be imperative rigidly to control all deposits, if necessary by force, and it will hardly be possible to get political action along that line unless high efficiency atomic bombs have actually been used in this war and the fact of their destructive power has deeply penetrated the mind of the public.

While one could make the argument there that he is arguing for a pre-emptive strike, it is sufficiently ambiguous (controlling by force could also mean conventional forces, and "used" could imply a demonstration rather than a deployment on a city) that I would prefer to delete the reference to Szilard in this article. Another reason is that I've seen many more instances where this view was attributed to Russell and von Neumann, whereas this is the only case where it has been attributed to Szilard.

I was surprised by that as well, but I took it from an article by Jules Lobel, Professor of Law at the University of Pittsburgh Law School, based on a book he wrote:

Influential intellectuals such as Bertrand Russell and the famous physicist Leo Szilard supported preventive war arguments, as did noted mathematicians such as John Williams, a leading figure at the RAND Corporation, and John von Neumann, the founder of game theory.[129] Von Neumann was a particularly strong advocate, remarking in 1950 that “[i]f you say why not bomb them tomorrow, I say why not today? If you say today at 5 o’clock, I say why not one o’clock?”[130]

For that claim he in turn cites Marc Trachtenberg's History and Strategy, which I do not have access to.

Thanks for the clarification. If that's the plausible scenario for Aligned AGI, then I was drawing a sharper line between Aligned and Unaligned than was warranted. I will edit some part of the text on my website to reflect that.

Thanks for your comment. This is something I should have stated a bit more explicitly.

When I mentioned "single state (or part thereof)", the "part thereof" was referring to these groups, or to groups in other countries that have yet to be formed.

I think the chance of government intervention is quite high in the slow take-off scenario. It's quite likely that any group successfully working on AGI will slowly but noticeably start to accumulate a lot of resources. If that cannot be concealed, it will start to attract a lot of attention. I think it is unlikely that the government and state bureaucracy would be content to let such resources accumulate untouched; see, for example, the current shifting attitude towards Big Tech in Brussels and Washington.

In a fast take-off scenario, I think we can frame things more provocatively: the group that develops AGI either becomes the government, or the government takes control while it still can. I'm not sure what the relative probabilities are here, but in both circumstances you end up with something that will act like a state and be treated as a state by other states, which is why I model such groups as states in my analysis. For example, even if OpenAI and DeepMind are friendly to each other, and that persists over decades, I can easily imagine the Chinese state trying to develop an alternative that might not be friendly to those two groups, especially if the Chinese government perceives them as promoting a different model of government.

Thanks for your comment.

If someone wants to estimate the overall existential risk attached to AGI, then it seems fitting that they would estimate the existential risk attached to the scenarios where we have 1) only unaligned AGI, 2) only aligned AGI, or 3) both. The scenario you portray is a subset of 1). I find it plausible. But most relevant discussion on this forum is devoted to 1), so I wanted to think about 2). If some non-zero probability is attached to 2), that should be a useful exercise.

I thought it was clear I was referring to Aligned AGI in the intro and the section heading. And of course, exploring a scenario doesn't mean I think it is the only scenario that could materialise.  

Thanks! There seems to be an openness towards error correction here, which is admirable and unfortunately uncommon.

I've started browsing and posting here a bit so I should introduce myself.

I've been writing online for around five months and put some draft chapters of a book on my website. The objective is to think about how to immunise a society from decline, which basically means trying to find the right balance between creativity and cohesion (not that they are inversely related—it’s quite possible to have neither). Because I can’t buy into any worldview out there today, I’ve tried to systematise my thoughts into a philosophy I call Metasophism. It’s a work in progress, and most of what I read and write links into that in some way.

Prediction mechanisms are commonly discussed here, and I've partly integrated them, but I need to think more about them; this site should help with that, I think.

How did I end up here? A commenter on an early post of mine mentioned LW, which I didn’t then frequent even though I was familiar with some of the writers here. That caused me to check it out, and the epistemic culture caused me to stick around.

When it costs $20 to transport a kg to low-Earth orbit, we might find a way to mine palladium that can be sold for $34,115 per kg on Earth, or gold that can be sold for $60,882 per kg.

It would be interesting to see some kind of analysis of the effect asteroid mining could have on the prices of these commodities. For example, the global annual supply of palladium is just over 200 tonnes, so if asteroid mining could match that, the price could fall quite dramatically.
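As a rough illustration, here is a back-of-the-envelope sketch in Python using the figures quoted above. The demand elasticity and the assumption that return transport costs roughly match the launch cost are illustrative guesses on my part, not sourced numbers.

```python
# Back-of-the-envelope economics of asteroid-mined palladium,
# using the figures quoted above. The elasticity is an
# illustrative assumption, not a sourced number.

LAUNCH_COST_PER_KG = 20            # $/kg to low-Earth orbit (quoted above)
PALLADIUM_PRICE_PER_KG = 34_115    # $/kg on Earth (quoted above)
ANNUAL_SUPPLY_TONNES = 200         # approximate global annual palladium supply

# Gross margin per kg, assuming return transport costs about the same as
# launch (a big simplification: ignores mining, refining, capital costs).
margin_per_kg = PALLADIUM_PRICE_PER_KG - LAUNCH_COST_PER_KG
print(f"Gross margin: ${margin_per_kg:,.0f}/kg")

# Value of matching the entire annual supply at today's price.
revenue = ANNUAL_SUPPLY_TONNES * 1_000 * PALLADIUM_PRICE_PER_KG
print(f"Revenue at current prices: ${revenue / 1e9:.1f}B/year")

# Crude constant-elasticity estimate of the price after doubling supply:
#   new_price = old_price * (new_qty / old_qty) ** (1 / elasticity),
# with elasticity < 0 for ordinary demand. -1.5 is an assumed value.
ELASTICITY = -1.5
new_price = PALLADIUM_PRICE_PER_KG * 2 ** (1 / ELASTICITY)
print(f"Price after doubling supply (assumed elasticity {ELASTICITY}): "
      f"${new_price:,.0f}/kg")
```

Even under these toy assumptions, doubling supply only cuts the price by about a third, so the "dramatic fall" depends heavily on how elastic demand actually is.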

The support provided in the book is purely anecdotal (along the lines of what I discussed above), and the book doesn't really engage with any other models. The alternative explanations I discuss, such as re-religiofication due to material conditions, are not mentioned in the book, which is written in a somewhat impressionistic manner.

Thanks for elaborating.

I agree with the point about utilities: for utility-like services (more specifically, those with overwhelming network effects and economies of scale), it should be illegal to deny access unless the person being denied service is doing something illegal.
