What do you think is realistic if alignment turns out to be possible? Would the large corporations make a loving machine, or a machine aligned with money and with themselves?
Did you use EFA to conclude that EFA is the worst of the common bad arguments?
How would this work with European airlines, or with airlines from countries where credit card payments are much less common?
What if you're wrong?
The effect is hard, if not impossible, to determine, but the Netherlands has one of the lowest unemployment rates in Europe.
https://en.m.wikipedia.org/wiki/List_of_sovereign_states_in_Europe_by_unemployment_rate
Guido has already protested repeatedly in front of OA and has even been arrested there multiple times. I don't know his exact reasons for choosing Anthropic now, but spreading the protests across the different actors makes sense to me.
People asked the same kind of 'why not ...' question when he and others repeatedly protested OA. In the end, whatever reasons there may be to go somewhere else, you can only be in one place.
There is now one hunger striker in front of Anthropic and two in front of Google DeepMind.
https://x.com/DSheremet_/status/1964749851490406546
Could you give the source(s) of these anonymous surveys of engineers with insider knowledge about the arrival of AGI? I would be interested in seeing them.
Good observations. The more general problem is modeling itself. Models break, and 'hope for the best, expect the worst' generally works better than any model: what matters is how screwed you are when your model fails, not how close to reality the model is. In the case of AI, the models break at a really, really important place. The same was true of the economic models in use before past crises. One can go through life without modeling but preparing for the worst, but not the other way around.
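A minimal sketch of that point, with entirely made-up numbers (the probabilities, payoffs, and function names below are hypothetical illustrations, not anything from the thread): a strategy tuned to a model that is accurate in the typical case can still be dominated by a worst-case-prepared strategy once a rare catastrophic failure is possible.

```python
import random

random.seed(0)

# Hypothetical world: a rare catastrophe the model underweights.
TRUE_CATASTROPHE_P = 0.02   # assumed per-period probability, for illustration
ROUNDS = 10_000

def model_trusting(catastrophe: bool) -> float:
    # Optimized for the typical case the model describes well:
    # high payoff normally, ruinous loss when the model breaks.
    return -1000.0 if catastrophe else 10.0

def worst_case_prepared(catastrophe: bool) -> float:
    # Pays an insurance cost every period, but caps the downside.
    return -5.0 if catastrophe else 7.0

def average_payoff(strategy) -> float:
    total = 0.0
    for _ in range(ROUNDS):
        catastrophe = random.random() < TRUE_CATASTROPHE_P
        total += strategy(catastrophe)
    return total / ROUNDS

print("model-trusting avg payoff:    ", average_payoff(model_trusting))
print("worst-case-prepared avg payoff:", average_payoff(worst_case_prepared))
# With these (made-up) numbers the typical-case-accurate strategy
# averages about 0.98*10 - 0.02*1000 = -10.2 per period, while the
# prepared one averages about 0.98*7 - 0.02*5 = 6.76.
```

The comparison is decided entirely by the tail term, so making the model more accurate in the typical case does not change the ranking.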
Utopians are on their way to ending life on earth because they don't understand that iterated x-risk leads to x.
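Spelling out the compounding step behind 'iterated x-risk leads to x' (the fixed per-step probability p and the independence assumption are simplifications for illustration):

```latex
% Survival probability through n steps, each carrying an
% independent extinction probability p:
\[
  \Pr(\text{survive } n \text{ steps}) \;=\; (1-p)^{n}
  \;\xrightarrow{\;n \to \infty\;}\; 0
\]
% Even a modest per-step risk compounds quickly, e.g. p = 0.01:
%   (0.99)^{100} \approx 0.37, \qquad (0.99)^{500} \approx 0.007.
```

So any process that keeps accepting a nonzero per-step chance of extinction drives the survival probability toward zero, no matter how small each step's risk is.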