All of hum3's Comments + Replies

Thank you for that reference. I hadn't seen a quantification of the Bitcoin computer capacity which was interesting and high.

A bit gloomy, as the only outcomes are global catastrophe or delayed catastrophe.

I thought we had 40 years, but with Elon Musk talking about 7-8 years for an AGI, and with the recent 4-hour training run reaching [world chess supremacy][chess], I am not so sure. So I think we need to buy some time. Even if you can't destroy the semiconductor fabs, you could still increase taxes. This could be marketed as helping to pay for society's dislocation while we undergo job losses.

I also think that there are only several years until dangerous AI. See my presentation about it: However, I think that war will only increase extinction risks, and even AI risks, as it will accelerate the arms race and stop ethical thinking. Also, a strike on Silicon Valley would kill the best minds in AI safety, while some obscure Chinese labs would continue to exist.

I really like your map, as it starts to give me a framework for dealing with the whole issue. The percentages of success are depressingly low.

Under "0 Preliminary measures" you don't give your personal estimate of the % chance of success. You also include destruction of AI labs. Wouldn't destruction/taxation of semiconductor fabs be an easier target (Wikipedia has a list)? I think they are also so expensive that they are harder to hide.

PS: spelling errors: desireble -> desirable in the bottom-right yellow legend, and Prelimnary -> Preliminary.

Thanks for pointing out the errors. I don't think that anyone is actually plotting nuclear strikes on AI labs. However, Putin or Kim could think it is their only chance to preserve power in the AI age. Anyway, it can't be regarded as a success: it is either a global catastrophe or a short delay. But I will think about how to add an estimate for a successful AI ban.