Slowing down AI progress is not itself valuable.
Slowing down AI progress is valuable if and only if you can slow down AI progress without slowing down AI safety progress.
I have faith in the ability of the US government to put lots of red tape and roadblocks in the way of AI development, require that huge armies of lawyers and compliance officers be deployed by any entity trying to develop AI, etc.
I have no faith whatsoever that this process will not also slow down AI safety progress, and some strong suspicion that it might disproportionately slow down AI safety progress.
Overall I think that, when I look at the current conduct of Anthropic, and the current conduct of the US government, I do not find myself saying 'oh, if only we had given the US government more control over AI companies!'
To me, such costly and invasive regulations seem a priori unlikely, given the nature of the issue and our epistemic position on it. Can you expand on that?
I appreciate the post.
Regarding your questions, this might be a more sophisticated defense of technological determinism: https://80000hours.org/podcast/episodes/allan-dafoe-unstoppable-technology-human-agency-agi/
I don't remember much of what's there, but I remember disagreeing with various apparently important points (I think I had notes on this but lost them).
(Aside from this coming from someone working at an AGI lab, which raises my eyebrows on priors.)
Our culture is dominated by an ideology of progress, called progressivism, which conflates purportedly inevitable progressions along trendlines (especially ones that amount to an increase in taxable activity, or increased economic mobilization) with the solution of problems and people's lives getting better. Because it's an ideology, progressives worship progress rather than having honest propositional views on it amenable to evidence; enough evidence might cause a personal crisis, but they don't have the virtues of lightness or specificity about it.
In practice progressivism is a job creation scheme for elites. The jobs created have to be constituted to manage problems, rather than solve them; solving problems destroys jobs. Oddly, if you wanted to slow down the rate at which we solve our proximate problems using AI, or the rate at which AI capacities increase, the best option available short of revolution or radical dissidence en masse might be to make "AI Progress" an important positive policy goal and try to persuade elites that it's important to get those metrics up.
Institutional recommendations are shaped by implicit constraints like "don't reduce headcount" and "don't invalidate your department's premise," internalized as limits on what's thinkable: https://benjaminrosshoffman.com/parkinsons-law-ideology-statistics/#diagnosis
Calling for X tends to produce jobs doing the opposite of X: https://benjaminrosshoffman.com/openai-makes-humanity-less-safe/
Neoclassical (progressive) economics tries to quantify "total value," and then maximize it, which in practice means maximizing transaction volume: https://substack.com/@benhoffman700141/note/c-237461608
A much deeper dive into the same thing: https://benjaminrosshoffman.com/the-domestic-product/
To what extent do you think this manifests in the progress studies movement, in the sense of the cluster(s) coordinated around https://rootsofprogress.org/ or https://worksinprogress.co/ ?
Those seem like attempts to extend the useful life of the current regime by trying to organize around doing more of the things that originally won it legitimacy, rather than to productively criticize or supersede it. Sometimes you should patch up an old thing rather than buy a new one; sometimes that is false economy, because the cost of upkeep is higher than the amortized cost of replacement; and sometimes you’re driving around in an explosive death trap, or breathing mold every day that is making you sick, when you should really just get a safe new car or a house built from scratch.
I would put Tyler Cowen in the same category, accepting things like GDP as the best politically available target to organize around, but trying to persuade people to do good rather than bad things to raise the GDP.
Are we sure that those are examples of progressivism, rather than movements that merely use the same word to describe themselves?
Progressivism seems like a third-degree simulacral procession from the Whig Theory of History, which is at least a theory. The Whig Theory relates to progress as a contingent claim to be asserted about aggregates, which is meaningful because it lawfully decomposes into constituent facts we care about. Progressivism seems to inherit the emotional loading of the term as a given and then situate intentions relative to it.
Progress Studies and Works in Progress seem to be relying on “progress” as something whose value & reality is uncontested even if its specific meaning is unclear and the rollout is uneven.
I agree that we can stop or delay the development of some technologies. However, AI does not appear to be one of them, in my view, for several reasons:
(i) The x-risk has little to no political salience, there is nothing like a "scientific consensus"[1] yet, and you can't yet point to a concrete instance of the problem that will convince the uninitiated.[2]
(ii) Every policy that stops superintelligence from being developed will likely come with massive, concrete economic costs.
(iii) Fast global coordination is needed.
No example in your list has all of these.

[1] You can point to the CAIS statement or similar things, but this is not in the same category as e.g. the consensus around climate change.

[2] For example, you don't need to understand nuclear physics to understand "big fireball bad", but you need more theory to be scared by METR's reward hacking results.
As for (ii), I would distinguish between massive economic costs in the form of (a) missed opportunities and (b) damage to the existing economy. There are many such cases of (a); fewer, but still many, of (b). Nobody is proposing taking away today's LLMs.
Since (ii) and (iii) have many examples mentioned in the post, it seems (i) is the crux.
Each part of (i) sounds like a self-fulfilling prophecy. Ultimately, it amounts to the proposition that AI Safety arguments have not yet won decisively, and therefore they cannot win. Building scientific consensus is an ongoing process of convincing people that the dangers are real and that action is possible, and posts like Katja's are part of building it.
As for political salience and public awareness of the risks, they are already pretty high and rising fast. Many people are taking actions to raise awareness and salience, and to a large extent it's taking care of itself as capabilities improve and the dangers become obvious (see e.g. Mythos/Glasswing).
“We can’t prevent progress” say the people for some reason enthusiastically advocating that we just risk dying by AI rather than even consider contravening this law.
I have several problems with this, beyond those unsubtly hinted at above.
First, it seems to be willfully conflating “increasing technological understanding and/or tools” with “things getting better”. The word ‘progress’ generally means ‘things getting better’, but here, in a debate about whether it is good for society to acquire and spread some specific information and tools, we are being asked to label all increases in information and tools as ‘progress’, which rather presumes a particular conclusion.
(Yes, the sub-debate here is more narrowly about whether averting technology is feasible, not whether it is good, but the bid here to have us implicitly grant that the infeasible thing is also reprehensible and backward to want (i.e. anti-”progress”) seems unfriendly.)
If we separate the conflated concepts—i.e. distinguish ‘increasing technological information and tools’ from ‘things getting better’—the statement doesn’t seem remotely true for either of them.
Take ‘things getting better’ first: preventing things from getting better is a capability humans have arguably had at least as far back as the Sea Peoples of Bronze Age collapse fame. (If we do go ahead and make machines that destroy humanity, we will also have prevented ‘progress’ in the normal sense.)
But now let’s consider preventing “increasing technological information and tools”, which seems like the more relevant contention. I’m honestly a bit unsure what the position is here. Do people think, for instance, that the FDA doesn’t slow down the pharmaceutical industry? Do they think that the pharmaceutical industry is too small and insulated from financial incentives for its slowing down to be evidence about AI?
Perhaps we just don’t usually think of the pharmaceutical industry as ‘slowed down’ because we are used to that as the way it operates? Or perhaps this doesn’t count because the point isn’t to slow it down, it’s just to have it proceed at the rate it can do so safely for people, with the slowness as an unfortunate side-effect. In which case, fine—that would also do for AI!
In case this example is for some reason wanting, here are more examples of technologies slowed down to something more like a halt, from a previous post (more detail here also):
Aside from the seeming disconnect with empirical evidence, I’m confused by the theoretical model here. Do people think the rate of technological development can’t be affected by funding, or by the costs of inputs, or by regulation? Or do they think these factors would affect technology, but that this will never in practice happen because the relevant decisionmakers will never have the will?
Do they also think technology cannot be sped up? If so, how is that different?
Do they just mean you can’t fully grind it to a halt, preventing all progress? That may be so, but in that case, slowing it down a lot would generally suffice!