I'd say this is a partial misunderstanding, because the difference between final and intermediate consumption is about intention, rather than the type of goods.
Or to be more concrete, this is where I get off the train.
Would the economy collapse if all humans put their spending cash towards ambitious projects like rocket ships, instead of movies and fast cars? No, of course not! Right?
It depends entirely on whether those endeavors were originally expected to be profitable. If you were spending your own money, with no thought of financial returns, then it would be fine. If, on the other hand, all the major companies on the stock market announced today that they were devoting all of their funds to rocket ships, the result could fairly be called an economic collapse, as people (banks, bondholders, etc.) recalibrated their balance sheets to the updated profitability expectations.
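To make the recalibration concrete, here is a toy sketch (the numbers, horizon, and discount rate are all my own illustrative assumptions, not anything from the discussion): a firm's market value as the present value of expected profits, and what happens when expected profits are repriced to zero after funds are diverted to unprofitable rocket ships.

```python
# Toy discounted-cash-flow repricing (illustrative numbers only).
# A firm's market value is roughly the present value of expected profits.

def present_value(annual_profit: float, discount_rate: float, years: int) -> float:
    """Sum of discounted expected profits over a finite horizon."""
    return sum(annual_profit / (1 + discount_rate) ** t for t in range(1, years + 1))

before = present_value(annual_profit=100.0, discount_rate=0.08, years=30)

# The firm announces it will divert all funds to rocket ships with no
# expected financial return: expected profits get marked down to zero.
after = present_value(annual_profit=0.0, discount_rate=0.08, years=30)

print(f"value before: {before:.0f}, after: {after:.0f}")
# Banks and bondholders holding claims against the old valuation must
# write down their balance sheets by the difference -- the 'collapse'.
```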
If AI is directing that spending rather than people, on the other hand, the relevant distinction would not be between alignment and misalignment, but something more akin to 'analignment': AIs with spending preferences completely disconnected from those of their human owners. Otherwise, their financial results would simply flow through to the financial condition of their owners.
The reason intention is relevant to models that might at first appear entirely mechanistic has to do with emergent properties. On one hand this is just an accounting question, but you would also hope that in your model, GDP at time t bears some relationship to GDP at time t+1 (or whatever alternative measure of economic activity you prefer). Ultimately, any model of reality has to start at some level of analysis. That choice is subjective, and I would potentially be open to a case that the AI is a more suitable level of analysis than the individual human; but if you are making that case, then I would like to see the case for the independence of AI spending decisions. If that turns out to be a difficult argument to make, it's a sign that conventional economics may remain the most efficient/convenient/productive modelling approach.
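A minimal sketch of the two points tangled together here, under assumptions entirely of my own choosing: GDP is computed by netting out intermediate consumption, and a model earns its keep only if GDP at t carries information about GDP at t+1 (here via a toy reinvestment rule).

```python
# Toy two-period model (my own illustrative assumptions, not a claim
# about anyone's actual model).

# GDP counts final consumption and investment; intermediate purchases
# (inputs resold inside the production chain) are netted out.
final_consumption = 80.0
investment = 20.0
intermediate_purchases = 50.0  # excluded from GDP to avoid double counting

gdp_t = final_consumption + investment  # 100.0, intermediates excluded

# The emergent-property point: we want GDP at t to tell us something
# about GDP at t+1. A toy link: investment adds productive capacity.
return_on_investment = 0.10
gdp_t_plus_1 = gdp_t + investment * return_on_investment

print(gdp_t, gdp_t_plus_1)  # 100.0 102.0
# If AI spending were 'analigned' -- disconnected from any owner's
# consumption or investment intent -- this link would need remodelling.
```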
If a human population gradually grows (say, by birth or immigration), then demand for pretty much every product increases, and production of pretty much every product increases, and pretty much every product becomes less expensive via experience curves / economies of scale / R&D.
Agree?
QUESTION: How is that fact compatible with Say’s Law?
If you write down an answer, then I will take the text of your answer but replace the word “humans” with “AGIs” everywhere, and bam, that’s basically my answer to your question! :) (after some minor additional tweaks.)
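As a hedged sketch of the experience-curve claim in the passage above (the 20% learning rate and starting cost are invented for illustration, not cited): under Wright's law, each doubling of cumulative production cuts unit cost by a fixed fraction, so a larger population of consumers, human or AGI, drives prices down.

```python
import math

# Toy experience curve (Wright's law): each doubling of cumulative
# production cuts unit cost by a fixed fraction. Numbers are
# illustrative assumptions, not citations.

def unit_cost(cumulative_units: float, first_unit_cost: float = 100.0,
              cost_per_doubling: float = 0.80) -> float:
    b = -math.log2(cost_per_doubling)  # elasticity of cost w.r.t. volume
    return first_unit_cost * cumulative_units ** (-b)

for n in [1, 2, 4, 8, 1024]:
    print(n, round(unit_cost(n), 1))
# 1 100.0 / 2 80.0 / 4 64.0 / 8 51.2 / 1024 10.7
# More consumers -> more cumulative production -> lower prices,
# whether the consumers are humans or AGIs.
```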
Okay. Humans are capable of final consumption (i.e., spending under a reward function that does not involve making more money later).
I'd be interested to see how an AI would do that, because it is the crux of a lot of downstream processes.
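One hedged way to make that crux precise in reward-function terms (my framing, not anything from the thread): a terminal reward over consumption itself versus a purely instrumental reward over future wealth.

```python
# Two toy reward functions (my own illustrative framing).

def final_consumer_reward(consumed: float, wealth_after: float) -> float:
    # Terminal value: the consumption itself is the point.
    # wealth_after is ignored -- no 'making more money later' term.
    return consumed

def pure_investor_reward(consumed: float, wealth_after: float) -> float:
    # Purely instrumental: only future wealth matters; consumption
    # is just a forgone investment.
    return wealth_after

wealth, spend = 100.0, 30.0
print(final_consumer_reward(spend, wealth - spend))  # 30.0
print(pure_investor_reward(spend, wealth - spend))   # 70.0
# An agent with the first reward happily spends down wealth on rocket
# ships or movies; one with the second never terminates the chain of
# intermediate spending. The open question is which reward an AI has.
```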
I see a very different crux in these debates. Most people are not interested in the absolute odds, but rather in how to make the world safer against this scenario: the odds ratios under different interventions. And a key intervention type would be the application of the mathematician's mindset.
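To pin down what "odds ratios under different interventions" means arithmetically (the numbers below are invented purely for illustration): convert a probability to odds, apply the intervention's odds ratio, convert back.

```python
# Odds-ratio arithmetic for comparing interventions (numbers invented
# purely for illustration).

def to_odds(p: float) -> float:
    return p / (1 - p)

def to_prob(odds: float) -> float:
    return odds / (1 + odds)

baseline_p = 0.10              # some absolute odds estimate
intervention_odds_ratio = 0.5  # intervention halves the odds

new_p = to_prob(to_odds(baseline_p) * intervention_odds_ratio)
print(round(new_p, 3))  # 0.053

# The point: the interesting quantity is the 0.5, not the 0.10 --
# how much an intervention (like the mathematician's mindset) shifts
# the odds, not what the absolute odds are.
```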
The linked post cites a ChatGPT conversation claiming that the number of bugs per 1,000 lines of code has declined by orders of magnitude, a decline that (if you read the transcript) is attributed precisely to the use of modern provable frameworks.
It is worth quoting this conclusion in full.
So this reads to me like a rejection of the hacker mindset in favor of a systems-engineering approach: breaking things is useful only to the extent that you formalize the root cause, and your systems are legible enough to integrate those lessons.
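As a hedged sketch of what "formalize the root cause" can look like in practice (the example and all names are my own, not from the linked post): a bug found by fuzzing becomes a permanent, checkable invariant rather than a one-off patch.

```python
# Hacker mindset finds the bug; systems mindset encodes its root cause
# as an invariant that every future change is checked against.
# (Hypothetical example, entirely my own.)

def parse_port(s: str) -> int:
    """Parse a TCP port, rejecting the out-of-range values that a
    fuzzer once slipped past a naive int() call."""
    value = int(s)
    # Root cause, formalized: valid ports are 1..65535, full stop.
    if not 1 <= value <= 65535:
        raise ValueError(f"port out of range: {value}")
    return value

# The formalized lesson doubles as a regression check:
try:
    parse_port("70000")
    raise AssertionError("out-of-range port was accepted")
except ValueError:
    pass  # the invariant held
assert parse_port("8080") == 8080
```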