(Cross-posted from EA Forum): I think you could have strengthened your argument here further by talking about how even in Dario's op-ed opposing the ban on state-level regulation of AI, he specifically says that regulation should be "narrowly focused on transparency and not overly prescriptive or burdensome". That seems to indicate opposition to virtually any regulations that would actually directly require doing anything at all to make models themselves safer. It's demanding that regulations be more minimal than even the watered-down version of SB 1047 that Anthropic publicly claimed to support.
Cross-posted from the EA forum, and sorry if anyone has already mentioned this, BUT:
When the models hit a given task length on the x-axis of the graph, is that meant to represent the point where models can do all tasks of that length that a normal knowledge worker could perform on a computer? The vast majority of knowledge worker tasks of that length? At least one task of that length? Some particular important subset of tasks of that length?
What would "ulterior motives" be here? Do you think Thorstad is consciously lying? That seems really weird to me.
One way to understand this is that Dario was simply lying when he said he thinks AGI is close and carries non-negligible X-risk, and that he actually thinks we don't need regulation yet because it is either far away or the risk is negligible. There have always been people who have claimed that labs simply hype X-risk concerns as a weird kind of marketing strategy. I am somewhat dubious of this claim, but Anthropic's behaviour here would be well-explained by it being true.
People will sometimes invest if they think the expected return is high, even if they also think there is a non-trivial chance that the investment will go to zero. During the FTX collapse many people claimed that this was a common attitude amongst venture capitalists, although maybe Google and Amazon are more risk-averse?
It's pretty telling that you think there's no chance that anyone who doesn't like your arguments is acting in good faith. I say that as someone who actually agrees that we should (probably, pop. ethics is hard!) reject total utilitarianism on the grounds that bringing someone into existence is just obviously less important than preventing a death, and that this means longtermists are calling for important resources to be misallocated. (That is true of any false view about how EA resources should be spent, though!) But I find your general tone of 'people have reasons to be biased against me, so therefore nobody can possibly disagree with me in good faith or non-fanatically' extraordinarily off-putting, and think its most likely effect is to cause a backfire where people in the middle move towards the simple total utilitarian view.
Empires are more like the opposite of nationalism than an example of it, even if the metropoles of empires tend to be nationalist. Nationalism is the view that particular "peoples", defined ethnically or just by citizenship, should be sovereign and proud of it; empire is the idea that one country can rule over many peoples. This is kind of a nitpick, as having a stable, coherent national identity maybe did help the industrial revolution start in Britain; I don't know this history well enough to say. But in any case, the British Empire was hardly obviously net positive: it did huge damage to India in the 18th century, for example (amongst many awful human rights abuses), when India was very developed by 18th-century standards. And it's not clear it was necessary for the industrial revolution to happen. Raw materials could have been bought rather than stolen, for example, and Smith thought slavery was less efficient than free labour.