The effect is hard, if not impossible, to determine, but the Netherlands has one of the lowest unemployment rates in Europe.
https://en.m.wikipedia.org/wiki/List_of_sovereign_states_in_Europe_by_unemployment_rate
Guido has already protested repeatedly, and even been arrested multiple times, in front of OA. I don't know his exact reasons for choosing Anthropic now, but spreading the protests across the different actors makes sense to me.
People also asked the same kind of 'why not ...' question when he and others repeatedly protested OA. In the end, whatever reasons there may be to go somewhere else, you can only be in one place.
There is now one hunger striker in front of Anthropic and two in front of Google DeepMind.
https://x.com/DSheremet_/status/1964749851490406546
Could you give the source(s) of these anonymous surveys of engineers with insider knowledge about the arrival of AGI? I would be interested in seeing them.
Good observations. The more general problem is modeling. Models break, and 'hope for the best, expect the worst' generally works better than any model. What matters is how screwed you are when your model fails, not how close to reality the model is. In the case of AI, the models break at a really important place. The same was true of the models that preceded economic crises. One can go through life without models, as long as one prepares for the worst, but not the other way around.
I fail to see how that's an argument. It doesn't seem to me to be a reason not to cull now, only perhaps a reason not to advocate for it, and even that I would disagree with. Can you explain?
This is great.
Since you already anticipate the dangerous takeoff that is coming, and we are unsure whether we will notice it and be able to act in time: why not cull now?
I get that part of the point is slowing down the takeoff, and that culling now does not have that effect.
But what if March 2027 is too late? What if getting proto-AGIs to do AI R&D requires only minor additional training or unhobbling?
I'd trust a plan that relies on massively slowing down AI now far more than one that relies on a slowdown still arriving in time later.
I can attest that, for me, talking about AI dangers in an ashamed way has rarely, if ever, prompted a positive response. I've noticed, and been told, that it gives off 'intellectual smartass' vibes rather than 'concerned person' vibes.
A lot of this seems to be pointing to 'love'.
What if you're wrong?