It seems like I see this sort of thing (people being evasive or euphemistic when discussing AI risk) a LOT. Even the AI2027 authors, who describe a blow-by-blow extinction scenario that ends with AI killing 99% of humans with a bioweapon and then cleaning up the survivors with drones, frequently refer to it in sanitized language like "it could move against humans" [1] or "end up taking over" [2]. Guys, you literally wrote a month-by-month extinction scenario, with receipts! No need to protect our feelings here!
I'm not a regular member of this community, nor a resident of the Bay Area, so apply extreme skepticism to my observations and suggestions; they're submitted with significant humility.
I don't think there's any practical way to relocate a cultural hub on purpose. It might move on its own, over time, but that will be an incremental process, so to some degree I think this discussion is moot. Even if a few huge players announced an agreed-upon "Second Hub," I don't think many people would or could just pick up and go there.
Nevertheless, various factors (COVID, better online collaboration tools, economic pressures that make the Bay Area uniquely difficult) do seem to be loosening the Bay Area's hold and making that kind of gradual, organic shift more plausible.