Is insider trading allowed on Manifold?
As a reality check, "any company that funds research into AGI" here would mean all the big tech companies (MAGMA). Many more people use those products than personally know AGI developers. It is a much easier ask to switch to a different browser, search engine, or operating system, install an ad blocker, etc., than to ask for social ostracism. Those companies' revenues collapsing would end the AI race overnight, whereas AGI developers being left with a social circle consisting only of techno-optimists would not.
Ok, let's say we get most of the 8 billion people in the world to 'come to an accurate understanding of the risks associated with AI', such as the high likelihood that ASI would cause human extinction.
Then, what should those people actually do with that knowledge?
Boycotting any company that funds research into AGI would be (at least in this scenario) both more effective and, for the vast majority of people, more tractable than "ostracizing" people whose social environment is already largely dominated by... other AI developers and like-minded SV techno-optimists.
I think it's the combination of a temporal axis and a (for lack of a better term) physical-violence axis.
I don't think the point of hunger strikes is to achieve immediate material goals, but publicity/symbolic ones.
It is vanishingly unlikely that all the other major AI companies would agree to do so without the US government telling them to; such a statement would be helpful, but only as a way of communicating their position, not because of the commitment itself. Why not instead ask them to ask the government to stop everyone (perhaps conditional on China agreeing to stop everyone in China)?
This seems to be exactly the point of the demand? It is a demand that would be cheap (perhaps even of negative cost) for DeepMind to accept, because the other AI companies wouldn't agree to it, and it would also be a major publicity win for the Pause AI crowd. Even though I count myself skeptical of the hunger strikes, I think this is a very smart move.
automated AI safety research, biosecurity, cybersecurity (including AI control), possibly traditional transhumanism (brain-computer interfaces, intelligence augmentation, whole brain emulation)
My point was that in the first stages of AI-induced job loss, it might not be clear to everyone (whether due to genuine epistemic uncertainty or due to partisan bias) whether the job loss was caused by AI or by their previously preferred political grievance. This was just an aside, though, and not important to my broader point.
? Protectionism (whether against AI, immigration, or trade) is often justified by concerns about job loss.
Relevant.