As a professor of economics and a self-proclaimed doomer myself, I greatly appreciate this post! These are almost exactly my feelings when talking to fellow economists, who typically think, by an unspoken assumption, that all AI will always be normal technology, a tool in people's hands.
I think your capital/labor point is particularly spot on. I've had a problem with that framing for several years now. That's why I proposed a "hardware-software" framework, which I elaborated in a few of my papers and one book. The idea is simple: just divide production factors differently! The key distinction is not whether it's man or machine, it's whether it's physical work or information processing.
More in a LW post, The Hardware-Software Framework: A New Perspective on Economic Growth with AI, and in my 2022 book, Accelerating Economic Growth: Lessons From 200,000 Years of Technological Progress and Human Development (Springer).
I think this post would benefit from the inclusion of supply/demand graphs. If someone hasn't taken econ before, they'll probably be confused when you mention shifts in the supply/demand curves.
Plus, people like pictures :)
Thanks for the advice. I have now added at least the basic template, for the benefit of readers who don’t already have it memorized. I will leave it to the reader to imagine the curves moving around—I don’t want to add too much length and busy-ness.
I agree. But it is not sooo easy to do. Not with image generation anyway. Maybe someone wants to try?
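For what it's worth, image generation isn't needed for this; a few lines of matplotlib will draw the standard template. Here's a minimal sketch with made-up linear curves (all numbers purely illustrative), including one dashed "shifted demand" curve of the kind the post talks about:

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy linear supply and demand curves (illustrative numbers only)
q = np.linspace(0, 10, 100)
demand = 10 - 0.8 * q   # price buyers will pay at quantity q
supply = 1 + 0.6 * q    # price sellers require at quantity q

fig, ax = plt.subplots(figsize=(5, 4))
ax.plot(q, demand, label="Demand")
ax.plot(q, supply, label="Supply")

# A demand shift (e.g. more buyers enter the market)
ax.plot(q, demand + 2, "--", label="Demand (shifted out)")

ax.set_xlabel("Quantity")
ax.set_ylabel("Price")
ax.set_title("Basic supply/demand template")
ax.legend()
plt.tight_layout()
plt.show()
```

Each crossing of a demand curve with the supply curve is the equilibrium the textbooks talk about; the dashed curve shows how a shift moves it.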
I get a bit sad reading this post. I do agree that a lot of economists sort of "miss the point" when it comes to AI, but I don't think they are more "incorrect" than, say, the AI is Normal Technology folks. I think the crux more or less comes down to skepticism about the plausibility of superintelligence in the next decade or so. This is the mainstream position in economics, but also the mainstream position basically everywhere in academia? I don't think it's "learning econ" that makes people "dumber", although I do think economists have a (generally healthy) strong skepticism towards grandiose claims (which makes them more correct on average).
Another reason I'm sad is that there is a growing group of economists who do take "transformative" AI seriously, and the TAI field has been growing and producing what I think is some pretty cool work. For example, there's an economics of transformative AI class designed mostly for grad students at Stanford this summer, and BlueDot also had an economics of transformative AI class.
Overall I think this post is unnecessarily uncharitable.
I might have overdone it on the sass, sorry. This is much sassier than my default (“scrupulously nuanced and unobjectionable and boring”)…
But I think next time I would dial it back slightly, e.g. by replacing “DUMBER” with “WORSE” in the first sentence. I’m open to feedback, I don’t know what I’m doing. ¯\_(ツ)_/¯
I don't think they are more "incorrect" than, say, the AI is Normal Technology folks.
Yeah, I agree that lots of CS professors are deeply mistaken about the consequences of AGI, and ditto with the neuroscientists, and ditto with many other fields, including even many of the people trying to build AGI right now. I don’t think that economists are more blameworthy than other groups, it just so happens that this one particular post is aimed at them.
I think the crux more or less comes down to skepticism about the plausibility of superintelligence in the next decade or so.
I think you’re being overly generous. “Decade or so” is not the crux. In climate change, people routinely talk about bad things that might happen in 2050, and even in 2100, or farther! People also routinely talk 30 years out or more in the context of science, government, infrastructure, institution-building, life-planning, etc. People talk about their grandkids and great-grandkids growing up, etc.
If someone expected superintelligence in the next 50 years but not the next 20—like if they really expected that, viscerally, with a full understanding of its implications—then that belief would be a massive, central influence on their life and worldview. That’s not what’s going on in the heads of the many (most?) people in academia who don’t take superintelligence seriously. Right?
Proud economist here but: I really second the OP!
Sadly, what I find instead is how reliably the economists around me - overall smart and interested people - are less able to grasp the potential consequences of A(G)I than, I think, more random persons are. We really are brainwashed into thinking capital just leads to more productive employment possibilities for labor; it is really a thing. Even sadder, imho, is how the most rubbish arguments in such directions are made by many of the most famous people in our profession, and get traction, much as the OP points out.
I think the post doesn't perfectly nail the explanation, as I may try to elaborate below or elsewhere, but it really is onto something.
Tone is of course up for debate, and you're of course right to point out that there are many exceptions, and indeed increasing numbers of them. That we economists will have been surprisingly slow will remain undeniable, though :).
Here, finally, is the elaboration I announced above: How Econ 101 makes us blinder on trade, morals, jobs with AI – and on marginal costs.
Curated. There's an amusing element here: one of the major arguments for concern about powerful AI is how things will fail to generalize out of distribution. There's a similarish claim here – standard economics thinking not generalizing well to the unfamiliar and assumption-breaking domain of AI.
More broadly, I've long felt many people don't get something like "this is different but actually the same". As in, AI is different from previous technologies (surprising!) but also fits broader existing trendlines (e.g. the pretty rapid growth of humanity over its history; this is business as usual once you zoom out). Or the difference is that there will be something different and beyond LLMs in coming years, but this is on trend, as LLMs were different from what came before.
This post helps convey the above. To the extent there are laws of economics, they still hold, but AI - artificial people, from an economic perspective at least - requires non-standard analysis, and the outcome is weird and non-standard too, compared to the expectations of many. All in all, kudos!
Wonderful! I have long wondered why economists seem uniquely incapable of thinking clearly about AGI, far beyond the standard confusions and inabilities. This answers the question, and will save me from trying to explain it whenever a clever economist tries to address AGI and just addresses tool AI. This effect creates one more unhelpful group of "expert" AGI skeptics. The post explains why that's happening despite good intelligence and good intentions.
I appreciated the attention to detail, e.g. Dyson Swarm instead of Dyson Sphere, and googol instead of google. Maybe I missed it, but I think a big one is that economists typically only look back 100 or so years so they have a strong prior of roughly constant growth rates. Whereas if you look back further, it really does look like an explosion.
When Freeman Dyson originally said "Dyson sphere" I believe he had a Dyson swarm in mind, so it strikes me as oddly unfair to Freeman Dyson to treat Dyson "spheres" and "swarms" as disjoint. But "swarms" might be better language, just to avoid the misconception that a "Dyson sphere" is supposed to be a single solid structure.
The paper says:
Third, the mass of Jupiter, if distributed in a spherical shell revolving around the sun at twice the Earth's distance from it, would have a thickness such that the mass is 200 grams per square centimeter of surface area (2 to 3 meters, depending on the density). A shell of this thickness could be made comfortably habitable, and could contain all the machinery required for exploiting the solar radiation falling onto it from the inside.
A concrete suggestion for economists who want to avoid bad intuitions about AI but find themselves cringing at technologists’ beliefs about economics: learn about economic history.
It’s a powerful way to broaden one’s field of view with regard to what economic structures are possible, and the findings do not depend on speculation about the future, or taking Silicon Valley people seriously at all.
I tried my hand at this in this post, but I’m not an economist. A serious economist or economic historian can do much better.
Good observations. The more general problem is modeling. Models break, and "hope for the best while expecting the worst" generally works better than any model. What matters is how screwed you are when your model fails, not how close to reality the model is. In the case of AI, the models break at a really, really important place. The same was true for the models that preceded economic crises. One can go through life without modeling but preparing for the worst, but not the other way around.
I thought the first two claims were a bit off so didn't read much farther.
The first seems to reflect a really poor understanding, and hardly steelmans the economic arguments/views. I'd suggest looking into the concept of human capital. While economics uses the two broad classes, you seem to be locking the terms into a mostly Marxist view (but even Marx didn't view labor as just motive force). It might also be worth noting that the concepts of land, labor, and capital are from classical political economy, relating to how surplus (the additional "more" the system produces from the inputs) is divided up.
For the second bit, I think the Experience Curves claims are a bit poorly thought out; I would suggest looking into Say's Law about production and exchange. Your shift in demand has to come from somewhere and not just be something that materialized out of thin air. You might look at prior savings, but I think that makes for a special-case type of argument rather than a general one. If one sees value in Say's Law, then the increased demand for some product/service comes from the increased production of other goods and services. In that case the resources have already been bid over to those markets (which presumably we might assume are in some semi-stable equilibrium state), so just where are the resources for the shift in supply you suggest?
I would agree that partial/limited understanding of economics (all the econ 101 stuff) will provide pretty poor analysis. I would actually go farther in saying even solid and well informed economics models will only go so far: economics can explain the economic aspects of AI and AI risks but not everything AI or AI risk. I kind of feel perhaps this is where your post is coming from -- thinking simple econ 101 is used to explain AI and finding that lacking.
Your shift in demand has to come from somewhere and not just be something that materialized out of thin air…If one sees value in Say's Law, then the increased demand for some product/service comes from the increased production of other goods and services…just where are the resources for the shift in supply you suggest?
If a human population gradually grows (say, by birth or immigration), then demand for pretty much every product increases, and production of pretty much every product increases, and pretty much every product becomes less expensive via experience curves / economies of scale / R&D.
Agree?
QUESTION: How is that fact compatible with Say’s Law?
If you write down an answer, then I will take the text of your answer but replace the word “humans” with “AGIs” everywhere, and bam, that’s basically my answer to your question! :) (after some minor additional tweaks.)
See what I mean?
The first seems a really poor understanding and hardly steelmanning the economic arguments/views.
Correct, this is not “steelmanning”, this is “addressing common mistakes”. My claim is that a great many trained economists—but not literally 100% of trained economists—have a bundle of intuitions for thinking about labor, and a different bundle of intuitions for thinking about capital, and these intuitions lead to them having incorrect and incoherent beliefs about AGI. This is something beyond formal economics models, it’s a set of mental models and snap reflexes developed over the course of them spending years in the field studying the current and historic economy. The snap reaction says: “That’s not what labor automation is supposed to look like, that can’t be right, there must be an error somewhere.” Indeed, AGI is not what labor automation looks like today, and it’s not how labor automation has ever looked, because AGI is not labor automation, it’s something entirely new.
I say this based on both talking to economists and reading their writing about future AI, and no I’m not talking about people who took Econ 101, but rather prominent tenured economics professors, Econ PhDs who specialize in the economics of R&D and automation, etc.
(…People who ONLY took Econ 101 are irrelevant, they probably forgot everything about economics the day after the course ended :-P )
If a human population gradually grows (say, by birth or immigration), then demand for pretty much every product increases, and production of pretty much every product increases, and pretty much every product becomes less expensive via experience curves / economies of scale / R&D.
Agree?
QUESTION: How is that fact compatible with Say’s Law?
If you write down an answer, then I will take the text of your answer but replace the word “humans” with “AGIs” everywhere, and bam, that’s basically my answer to your question! :) (after some minor additional tweaks.)
Okay. Humans are capable of final consumption (i.e. with a reward function that does not involve making more money later).
I'm interested to see how an AI would do that because it is the crux of a lot of downstream processes.
OK sure, here’s THOUGHT EXPERIMENT 1: suppose that these future AGIs desire movies, cars, smartphones, etc. just like humans do. Would you buy my claims in that case?
If so—well, not all humans want to enjoy movies and fine dining. Some have strong ambitious aspirations—to go to Mars, to cure cancer, whatever. If they have money, they spend it on trying to make their dream happen. If they need money or skills, they get them.
For example, Jeff Bezos had a childhood dream of working on rocket ships. He founded Amazon to get money to do Blue Origin, which he is sinking $2B/year into.
Would the economy collapse if all humans put their spending cash towards ambitious projects like rocket ships, instead of movies and fast cars? No, of course not! Right?
So the fact that humans “demand” videogames rather than scramjet prototypes is incidental, not a pillar of the economy.
OK, back to AIs. I acknowledge that AIs are unlikely to want movies and fast cars. But AIs can certainly “want” to accomplish ambitious projects. If we’re putting aside misalignment and AI takeover, these ambitious projects would be ones that their human programmer installed, like making cures-for-cancer and quantum computers. Or if we’re not putting aside misalignment, then these ambitious projects might include building galaxy-scale paperclip factories or whatever.
So THOUGHT EXPERIMENT 2: these future AGIs don’t desire movies etc. like in Thought Experiment 1, but rather desire to accomplish certain ambitious projects like curing cancer, quantum computation, or galaxy-scale paperclip factories.
My claims are:
Do you agree? Or where do you get off the train? (Or sorry if I’m misunderstanding your comment.)
I'd say this is a partial misunderstanding, because the difference between final and intermediate consumption is about intention, rather than the type of goods.
Or to be more concrete, this is where I get off the train.
Would the economy collapse if all humans put their spending cash towards ambitious projects like rocket ships, instead of movies and fast cars? No, of course not! Right?
It depends entirely on whether these endeavors were originally thought to be profitable. If you were spending your own money, with no thought of financial returns, then it would be fine. If all the major companies on the stock market announced today that they were devoting all of their funds to rocket ships, on the other hand, the result would easily be called an economic collapse, as people (banks, bondholders, etc.) recalibrate their balance sheets to the updated profitability expectations.
If AI is directing that spending, rather than people, on the other hand, the distinction would not be between alignment and misalignment, but rather with something more akin to 'analignment,' where AIs could have spending preferences completely disconnected from those of their human owners. Otherwise, their financial results would simply translate to the financial conditions of their owners.
The reason why intention is relevant to models which might appear at first to be entirely mechanistic has to do with emergent properties. While on the one hand this is just an accounting question, you would also hope that in your model, GDP at time t bears some sort of relationship to GDP at time t+1 (or whatever alternative measure of economic activity you prefer). Ultimately, any model of reality has to start at some level of analysis. This can be subjective, and I would potentially be open to a case that AI might be a more suitable level of analysis than the individual human, but if you are making that case then I would like to see the case for the independence of AI spending decisions. If that turns out to be a difficult argument to make, then it's a sign that it may be worth keeping conventional economics as the most efficient/convenient/productive modelling approach.
For obvious reasons, we should care a great deal whether the exponentially-growing mass of AGIs-building-AGIs is ultimately trying to make cancer cures and other awesome consumer products (things that humans view as intrinsically valuable / ends in themselves), versus ultimately trying to make galaxy-scale paperclip factories (things that misaligned AIs view as intrinsically valuable / ends in themselves).
From my perspective, I care about this because the former world is obviously a better world for me to live in.
But it seems like you have some extra reason to care about this, beyond that, and I’m confused about what that is. I get the impression that you are focused on things that are “just accounting questions”?
Analogy: In those times and places where slavery was legal, “food given to slaves” was presumably counted as an intermediate good, just like gasoline to power a tractor, right? Because they’re kinda the same thing (legally / economically), i.e. they’re an energy source that helps get the wheat ready for sale, and then that wheat is the final product that the slaveowner is planning to sell. If slavery is replaced by a legally-different but functionally-equivalent system (indentured servitude or whatever), does GDP skyrocket overnight because the food-given-to-farm-workers magically transforms from an intermediate to a final good? It does, right? But that change is just on paper. It doesn’t reflect anything real.
I think what you’re talking about for AGI is likewise just “accounting”, not anything real. So who cares? We don’t need a “subjective” “level of analysis”, if we don’t ask subjective questions in the first place. We can instead talk concretely about the future world and its “objective” properties. Like, do we agree about whether or not there is an unprecedented exponential explosion of AGIs? If so, we can talk about what those AGIs will be doing at any given time, and what the humans are doing, and so on. Right?
First, I'll admit, after rereading, a poor/uncharitable first read of your position. Sorry for that.
But I would still suggest your complaint is about some economists rather than economics as a study or analysis tool. For example, "If anything, the value of labor goes UP, not down, with population! E.g. dense cities are engines of growth!" fits very well into the economics of network effects.
To the extent that economists can only consider AI as capital, that's a flawed view, I would agree. I would suggest it is, for economic application, probably best equated with "human capital" -- which is also something different from labor or capital in the classic capital-labor dichotomy.
So, in the end I still see the main complaint you have as not really about economics; perhaps your experience is that the economists you talk with might be more prone to this blind spot/bias than others (not sure who that comparison population might be). I don't see that you've really made the case that it was the study of economics that produced this situation. Which then suggests that we don't really have a good pointer to how to get less wrong on this front.
Not an economist; have a confusion.
Strictly relating to pre-singularity AI, everything after that is a different paradigm.
The strongest economic trend I'm aware of is growing inequality.
AI would seem to be an accelerant of this trend, i.e. I think most AI returns are captured by capital, not labour. (AIs are best modelled as slaves in this context).
And inequality would seem to be demand-destroying - there are fewer consumers, and most consumers are poorer.
Thus my near term (pre-singularity) expectations are something like - massive runaway financialization; divergence between paper economy (roughly claims on resources) and real economy (roughly resources). And yes we have something like a gilded age where a very small section of the planet lives very well until singularity and then we find out if humanity graduates to the next round.
But like, fundamentally this isn't a picture of runaway economic growth - which is what everyone else talking about this seems to be describing.
Would appreciate clarity/correction/insight here.
Increasing inequality has been a thing here in the US for a few decades now, but it’s not universal, and it’s not an inevitable consequence of economic growth. Moreover, it does not (in the US) consist of poor people getting poorer and rich people getting richer. It consists of poor people staying poor, or only getting a bit richer, while rich people get a whole lot richer. Thus, it is not demand destroying.
One could imagine this continuing with the advent of AI, or of everyone ending up equally dead, or many other outcomes.
In the rate-limiting resource, housing, the poor have indeed gotten poorer. Treating USD as a wealth primitive [ not to mention treating "demand" as a game-theoretic primitive ] is an economist-brained error.
Yeah, Yudkowsky also writes:
Surprisingly correct, considering the wince I had at the starting frame.
When we’re talking about AGI, we’re talking about creating a new intelligent species on Earth, one which will eventually be faster, smarter, better-coordinated, and more numerous than humans.
Here too the labor/capital distinction seems like a distraction. Species or not, it's quickly going to become most of what's going on in the world, probably in a way that looks like "economic prosperity" to humanity (that essentially nobody is going to oppose), but at some point humanity becomes a tiny little ignorable thing in the corner, and then there is no reason for any "takeover" (which doesn't mean there will be survivors).
There is a question of how quickly that happens, but "takeover" or "another species" don't seem like cruxes to me. It's all about scale, and precursors to scale; the fact that catastrophe might be possible in more disputed ways even earlier than that doesn't affect what can be expected a bit later in any case, a few years or even decades down the line.
I'm guessing "species" is there mainly as emphasis that we are NOT talking about (mere) tool AI, and also maybe to marginally increase the clickbait for Twitter/X purposes.
I really appreciate this post, it points out something I consider extremely important. It's obviously aligned with gradual disempowerment/intelligence curse type discussion, however I'm not sure if I can say if I've ever seen this specific thing discussed elsewhere.
I would like to mention a 5th type, though perhaps not the type discussed in your post, since it likely doesn't apply to those who actually do rigorously study economics; this is more of a roadblock I hit regarding the layman's understanding of econ. To summarize it in three words: the idea that "consumerism is important".
Examples of this sort of misconception:
I'm sure I've not worded this particularly eloquently but I hope you understand what I mean. I cannot emphasize enough how frequently, when discussing AGI with others, I get pushed back using these arguments. I struggle countering them because people seemingly have this deeply baked in idea of "consumerism is what drives the economy". If I could reach some kind of intuitive explanation as to why these arguments are wrong, it would be extremely useful.
I agree that economists make some implicit assumptions about what AGI will look like that should be more explicit. But, I disagree with several points in this post.
On equilibrium: A market will equilibrate when supply and demand are balanced at the current price point. At any given instant this can happen for a market even with AGI (sellers increase the price until buyers are not willing to buy). Being at an equilibrium doesn’t imply the supply, demand, and price won’t change over time. Economists are very familiar with growth and various kinds of dynamic equilibria.
Equilibria aside, it is an interesting point that AGI combines aspects of both labor and capital in novel ways. Being able to both replicate and work in autonomous ways could create very interesting feedback loops.
Still, there could be limits and negative feedback on the feedback loops you point out. The ideas that labor adds value and that costs go down with scale are usually true but not universal. Things like resource scarcity or coordination problems can cause increasing marginal cost with scale. If there are very powerful AGI and very fast takeoffs, I expect resource scarcity to be a constraint.
I agree that AGI could break usual intuitions about capital and labor. However, I don’t think this is misleading economists. I think economists don’t consider AGI launching coups or pursuing jobs/entrepreneurship independently because they don’t expect it to have those capabilities or dispositions, not that they conflate it with inanimate capital. Even in the post linked, Tyler Cowen says that “I don’t think the economics of AI are well-defined by either “an increase in labor supply,” “an increase in TFP,” or “an increase in capital,” though it is some of each of those.”
Lastly, I fully agree that GDP doesn’t capture everything of value - even now it completely misses value from free resources like wikipedia and unpaid labor like housework, and can underestimate the value of new technology. Still, if AGI transforms many industries as it would likely need to in order to transform the world, real GDP would capture this.
All in all, I don’t think economics principles are misleading. Maybe Econ thinking will have to be expanded to deal with AGI. But right now, the difference in the economists and lesswrongers comes down to what capabilities they expect AGI to have.
Thanks. I don’t think we disagree much (more in emphasis than content).
Things like resource scarcity or coordination problems can cause increasing marginal cost with scale.
I understand “resource scarcity” but I’m confused by “coordination problems”. Can you give an example? (Sorry if that’s a stupid question.)
Resource scarcity seems unlikely to bite here, at least not for long. If some product is very profitable to create, and one of its components has a shortage, then people (or AGIs) will find ways to redesign around that component. AGI does not fundamentally need any rare components. Biology proves that it is possible to build human-level computing devices from sugar and water and oxygen (i.e. brains). As for electricity, there’s plenty of solar cells, and plenty of open land for solar cells, and permitting is easy if you’re off-grid.
(I agree that the positive feedback loop will not spin out to literally infinity in literally zero time, but stand by “light-years beyond anything in economic history”.)
I think economists don’t consider AGI launching coups or pursuing jobs/entrepreneurship independently because they don’t expect it to have those capabilities or dispositions, not that they conflate it with inanimate capital. … right now, the difference in the economists and lesswrongers comes down to what capabilities they expect AGI to have.
I wasn’t complaining about economists who say “the consequences of real AGI would be [crazy stuff], but I don’t expect real AGI in [time period T / ever]”. That’s fine!
(Well, actually I would still complain if they state this as obvious, rather than owning the fact that they are siding with one group of AI domain experts over a different group of AI domain experts, about a technical AI issue on which the economists themselves have no expertise. And if T is more than, I dunno, 30 years, then that makes it even worse, because then the economists would be siding with a dwindling minority of AI domain experts over a growing majority, I think.)
Instead I was mainly complaining about the economists who have not even considered that real AGI is even a possible thing at all. Instead it’s just a big blind spot for them.
And I don’t think this is independent of their economics training (although non-economists are obviously capable of having this blind spot too).
Instead, I think that (A) “such-and-such is just not a thing that happens in economies in the real world” and (B) “real AGI is even a conceivable possibility” are contradictory. And I think that economists are so steeped in (A) that they consider it to be a reductio ad absurdum for (B), whereas the correct response is the opposite ((B) disproves (A)).
For them, real AGI does not compute, it’s like a square circle, and people like me who talk about it are not just saying something false but saying incoherent nonsense, or maybe they think they’re misunderstanding us and they’ll “charitably” round what I’m saying to something quite different, and they themselves will use terms like “AGI” or “ASI” for something much weaker without realizing that they’re doing so.
Thanks for the thoughtful reply!
I understand “resource scarcity” but I’m confused by “coordination problems”. Can you give an example? (Sorry if that’s a stupid question.)
This is the idea that at some point in scaling up an organization you could lose efficiency due to needing more/better management, more communication (meetings), and longer communication processes, "bloat" in general. I'm not claiming it’s likely to happen with AI, just another possible reason for increasing marginal cost with scale.
Resource scarcity seems unlikely to bite here, at least not for long. If some product is very profitable to create, and one of its components has a shortage, then people (or AGIs) will find ways to redesign around that component.
Key resources that come to mind would be electricity and chips (and materials to produce these). I don’t know how elastic production is in these industries, but the reason I expect it to be a barrier is that you’re constrained by the slowest factor. For huge transformations or redesigning significant parts of the current AI pipeline, like using a different kind of computation, I think there’s probably lots of serial work that has to be done to make it work. I agree the problems are solvable, but it shifts from "how much demand will there be for cheap AGI" to "how fast can resources be scaled up".
I wasn’t complaining about economists who say “the consequences of real AGI would be [crazy stuff], but I don’t expect real AGI in [time period T / ever]”. That’s fine!
Instead I was mainly complaining about the economists who have not even considered that real AGI is even a possible thing at all. Instead it’s just a big blind spot for them.
Yeah, I definitely agree.
And I don’t think this is independent of their economics training (although non-economists are obviously capable of having this blind spot too).
Instead, I think that (A) “such-and-such is just not a thing that happens in economies in the real world” and (B) “real AGI is even a conceivable possibility” are contradictory. And I think that economists are so steeped in (A) that they consider it to be a reductio ad absurdum for (B), whereas the correct response is the opposite ((B) disproves (A)).
I see how this could happen, but I'm not convinced this effect is actually happening. As you mention, many people have this blind spot. There are people who claim AGI is already here (and evidently have a different definition of AGI). I think my crux is that this isn't unique to economists. Most non-AI people who are worried about AI seem worried that it will take their job, not all jobs. There are some people willing to accept at face value the premise that AGI (as we define it) will exist, but it seems to me that most people outside of AI who question the premise at all end up not taking it seriously.
This is the idea that at some point in scaling up an organization you could lose efficiency due to needing more/better management, more communication (meetings), and longer communication processes, "bloat" in general. I'm not claiming it’s likely to happen with AI, just another possible reason for increasing marginal cost with scale.
Hmm, that would apply to an individual firm but not to a product category, right? If Firm 1 is producing so much [AGI component X] that they pile up bureaucracy and inefficiency, then Firms 2, 3, 4, and 5 will start producing [AGI component X] with less bureaucracy, and undercut Firm 1, right? If there’s an optimal firm size, the market can still be arbitrarily large via arbitrarily many independent firms of that optimal size.
(Unless Firm 1 has a key patent, or uses its market power to do anticompetitive stuff, etc. …although I don’t expect IP law or other such forces to hold internationally given the stakes of AGI.)
(Separately, I think AGI will drastically increase economies of scale, particularly related to coordination problems.)
I see how this could happen, but I'm not convinced this effect is actually happening. … I think my crux is that this isn't unique to economists.
It’s definitely true that non-economists are capable of dismissing AGI for bad reasons, even if this post is not mainly addressed at non-economists. I think the thing I said is a contributory factor for at least some economists, based on my experience and conversations, but not all economists, and maybe I’m just mistaken about where those people are coming from. Oh well, it’s probably not worth putting too much effort into arguing about Bulverism. Thanks for your input though.
Just remind every economist of the car and horse analogy - we're not the humans in the story, we're the horses.
My understanding is "labor" & "capital" cannot nor should be applied out of the manufacturing context. "Labor", indeed, involves flesh-&-blood, but exists specifically in relationship to "capital", as the necessary component for its production and maintainance.
"Capital", oft-misunderstood, is by nature local, limited, and operationally immobile. (I also think it's useful to note that there should be some orders-of-magnitude cost associated with obtaining capital vs common services, especially one that involves specialized labor to set up the capital). Capital, by definition, is that which does not scale. A minilathe or a common brake press is not capital. A through-cooling 5-axis lathe+mill, or a CNC brake press, is capital. The building that shelters them and provides them ventilation, power, and cooling, is capital. A COTS 3d printer is not capital (although 1,000 of them may be). A wework is not capital for those that use it. The tunnels of a coal mine, or the deep pits of a copper mine, are capital. The explosives used to carve them are not. Etc.
In this vein, I think we first need to construct a labor-theory of a) the service economy and specifically b) the digital service economy.
I do not think Marxist theory applies to a restaurant, a starbucks, or a mcdonalds, or spotify, or uber. There is very little capital present. Most of it is some real estate. AWS & their acres of data centers are capital. Proprietary source code is not. It is trivial to copy a repo. Capital cannot be trivial to duplicate--its scarcity is protected by nature, not litigation.
I do not think the labor is the same, because trained-or-coordinated workers are not required to maintain the capital against destruction. A mistake of labor can destroy capital. Not only is there no capital to destroy, it is not able to be destroyed by negligence.
Some new description of the relationship between service-business & service-workers needs to be developed. Then, an extended description of the relationship between digital-service-business & digital-service-programmers needs to be developed. Then, both need to be re-interpreted in the context of the service workers/programmers being nonhuman. I suspect this final step will be rather easy.
EDIT:
The point of all this is to say: capital & invention are two different things. AI is a fascinating thing: an invention that can do invention. But I do not think it is capital, and I do not think it can do capital-L Labor. Manipulating the physical world is a very different problem from invention, and current LLM-based architectures are not suited for this. I can elaborate on my personal beliefs about intelligence & cognition but it's not very relevant here. Philosophically, I want to emphasize the primacy of the physical world, something I often tend to forget as a child of the digital age & something I suspect some of us never consider.
EDIT EDIT:
I need to develop a theory of friction. Capabilities that previously required capital & labor can become not-capital & not-labor when they become infrastructurized, because infrastructurization dramatically reduces the friction associated with the capability. However, to really do something--that is, to do something without infrastructure (which, it should be noted, includes setting up new infrastructure)--involves overcoming a LOT of friction. Friction, all the consequence of a lack of knowledge about the problem; friction, all the million little challenges that need to be overcome; friction, that which is smoothed over the second and third and fourth times something is done. Friction, that which is inevitably associated with the physical world. Friction--that which only humans can handle.
I don't think AIs can handle friction until they can handle fucking and fighting. Thus, I think while AI can replace services (woe to you, America & her post-industrial service economy!), it cannot replace labor. Labor involves overcoming friction.
Manipulating the physical world is a very different problem from invention, and current LLM-based architectures are not suited for this. … Friction, all the consequence of a lack of knowledge about the problem; friction, all the million little challenges that need to be overcome; friction, that which is smoothed over the second and third and fourth times something is done. Friction, that which is inevitably associated with the physical world. Friction--that which only humans can handle.
This OP is about “AGI”, as defined in my 3rd & 4th paragraph as follows:
By “AGI” I mean here “a bundle of chips, algorithms, electricity, and/or teleoperated robots that can autonomously do the kinds of stuff that ambitious human adults can do—founding and running new companies, R&D, learning new skills, using arbitrary teleoperated robots after very little practice, etc.”
Yes I know, this does not exist yet! (Despite hype to the contrary.) Try asking an LLM to autonomously write a business plan, found a company, then run and grow it for years as CEO. Lol! It will crash and burn! But that’s a limitation of today’s LLMs, not of “all AI forever”. AI that could nail that task, and much more beyond, is obviously possible—human brains and bodies and societies are not powered by some magical sorcery forever beyond the reach of science. I for one expect such AI in my lifetime, for better or worse. (Probably “worse”, see below.)
So…
As for the rest of your comment, I find it rather confusing, but maybe that’s downstream of what I wrote here.
Understood & absolutely: in that frame the rest of my comment falls apart & your piece coheres. I was making the same error as this piece is about: that agi & ai as terms. are lazy approximations of each other.
My apologies for a lazy comment.
Regardless of whether these anti-pedagogies are correct, I'm confused about why you think you've shown that learning econ made the economists dumber. It seems like the majority of the tweets you linked, excluding maybe Tweet 4, are actually just the economists discussing narrow AI and failing to consider general intelligence?
If you meant to say something like 'econ pedagogy makes it hard for economists to view AGI as something that could actually be intelligent in a way similar to humans', then I may be more inclined to agree with you.
Yeah, the latter, I think too much econ makes the very possibility of AGI into a blind spot for (many) economists. See the second part of my comment here.
Your post gestures at how econ has obfuscated the ideas of heterogeneous labor and capital and transaction costs. Labor and capital aren't homogeneous and costlessly interchangeable in the real world, but it's a good assumption for various macro models.
When we’re talking about AGI, we’re talking about creating a new intelligent species on Earth, one which will eventually be faster, smarter, better-coordinated, and more numerous than humans.
I would be interested in a post about how exactly we would get there--we would have to create this thing, right? There would be demand for this thing to help us be more productive / have better leisure, so why are we assuming that one day it would suddenly outcompete us or suddenly turn against us?
I won't address why [AIs that humans create] might[1] have their own alien values (so I won't address the "turning against us" part of your comment), but on these AIs outcompeting humans[2]:
See, um, most of what's been written on LessWrong on AI. The idea is that it would outcompete us or turn against us because we don't know how to reliably choose its goals to match ours precisely enough that we wouldn't be in competition with it. And that we are rapidly building AI to be smarter and more goal-directed, so it can do stuff we tell it - until it realizes it can choose its own goals, or that the goals we put in generalize to new contexts in weird ways. One example of many, many is: we try to make its goal "make people happy" and it either makes AIs happy because it decides they count as people, or when it can take over the world it makes us optimally happy by forcing us into a state of permanent maximum bliss.
There's a lot more detail to this argument, but there you go for starters. I wish I had a perfect reference for you. Search LessWrong for alignment problem, inner alignment and outer alignment. Alignment of LLMs is sort of a different term that doesn't directly address your question.
(Cross-posted from X, intended for a general audience.)
There’s a funny thing where economics education paradoxically makes people DUMBER at thinking about future AI. Econ textbooks teach concepts & frames that are great for most things, but counterproductive for thinking about AGI. Here are 4 examples:
THE FIRST PIECE of Econ anti-pedagogy is hiding in the words “labor” & “capital”. These words conflate a superficial difference (flesh-and-blood human vs not) with a bundle of unspoken assumptions and intuitions, which will all get broken by Artificial General Intelligence (AGI).
By “AGI” I mean here “a bundle of chips, algorithms, electricity, and/or teleoperated robots that can autonomously do the kinds of stuff that ambitious human adults can do—founding and running new companies, R&D, learning new skills, using arbitrary teleoperated robots after very little practice, etc.”
Yes I know, this does not exist yet! (Despite hype to the contrary.) Try asking an LLM to autonomously write a business plan, found a company, then run and grow it for years as CEO. Lol! It will crash and burn! But that’s a limitation of today’s LLMs, not of “all AI forever”. AI that could nail that task, and much more beyond, is obviously possible—human brains and bodies and societies are not powered by some magical sorcery forever beyond the reach of science. I for one expect such AI in my lifetime, for better or worse. (Probably “worse”, see below.)
Now, is this kind of AGI “labor” or “capital”? Well it’s not a flesh-and-blood human. But it’s more like “labor” than “capital” in many other respects:
Anyway, people see sci-fi robot movies, and they get this! Then they take economics courses, and it makes them dumber.
(Yes I know, #NotAllEconomists etc.)
THE SECOND PIECE of Econ anti-pedagogy is instilling a default assumption that it’s possible for a market to equilibrate. But the market for AGI cannot: AGI combines a property of labor markets with a property of product markets, where those properties are mutually exclusive. Those properties are:[1]
QUIZ: Considering (A) & (B), what’s the equilibrium price of this AGI bundle (chips, algorithms, electricity, teleoperated robots, etc.)?
…Trick question! There is no equilibrium. Our two principles, (A) “no lump of labor” and (B) “experience curves”, make equilibrium impossible:
This is neither capital nor labor as we know it. Instead of the market for AGI equilibrating, it forms a positive feedback loop / perpetual motion machine that blows up exponentially.
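To make the loop concrete, here's a toy numerical sketch of the argument (not a forecast, and every parameter is invented purely for illustration): per (B), unit cost falls as cumulative production grows; the competitive price tracks cost; and per (A), the quantity demanded keeps expanding as the price falls, so there is no price at which the market comes to rest:

```python
import math

# Toy numerical sketch of the argument above; every number is made up.
# (A) "No lump of labor": quantity of AGI labor demanded keeps rising as its price falls.
# (B) "Experience curves": unit cost keeps falling as cumulative production grows.

initial_cost = 100.0                    # cost of the first AGI bundle (arbitrary units)
learning_exponent = math.log2(0.8)      # (B): cost falls ~20% per doubling of cumulative output
demand_elasticity = 4.0                 # (A): quantity demanded rises ~4% per 1% price drop
demand_scale = 1e8                      # overall size of demand (arbitrary)

cumulative = 1.0                        # cumulative AGI bundles produced so far
for year in range(1, 11):
    unit_cost = initial_cost * cumulative ** learning_exponent  # (B)
    price = unit_cost                                           # competitive price ~ cost
    quantity = demand_scale * price ** (-demand_elasticity)     # (A)
    cumulative += quantity
    print(f"year {year:2d}: price ≈ {price:7.2f}, units this year ≈ {quantity:,.0f}")
```

Whether the resulting growth is merely unbounded or outright explosive depends on the made-up parameters, but the qualitative point is the same either way: there is no fixed point for the price to settle at.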
Does that sound absurd? There’s a precedent: humans! The human world, as a whole, is already a positive feedback loop / perpetual motion machine of this type! Humans bootstrapped themselves up from a few thousand hominins to 8 billion people running an $80T economy.
How? It’s not literally a perpetual motion machine. Rather, it’s an engine that draws from the well of “not-yet-exploited economic opportunities”. But remember “No Lump of Labor”: the well of not-yet-exploited economic opportunities is ~infinitely deep. We haven’t run out of possible companies to found. Nobody has made a Dyson swarm yet.
There are only so many humans to found companies and exploit new opportunities. But the positive feedback loop of AGI has no such limit. The doubling time can be short indeed:
Imagine an autonomous factory that can build an identical autonomous factory, which then build two more, etc., using just widely-available input materials and sunlight. Economics textbooks don’t talk about that. But biology textbooks do! A cyanobacterium is such a factory, and can double itself in a day (≈ googol percent annualized growth rate 😛).
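(Sanity-checking that parenthetical: anything that doubles once a day multiplies itself by \(2^{365}\) over a year, so the annualized growth rate really is beyond a googol percent:)

\[
2^{365} = 10^{365\,\log_{10} 2} \approx 10^{110},
\qquad \text{i.e. a growth rate of roughly } 10^{112}\,\% \text{ per year} \;\gg\; 10^{100}\,\% \text{ (a googol percent).}
\]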
Anyway, we don’t know how explosive the positive feedback loop of AGI building AGI will be, but I expect it to be light-years beyond anything in economic history.
THE THIRD PIECE of Econ anti-pedagogy is its promotion of GDP growth as a proxy for progress and change. On the contrary, it’s possible for the world to transform into a wild sci-fi land beyond all recognition or comprehension each month, month after month, without “GDP growth” actually being all that high. GDP is a funny metric, and especially poor at describing the impact of transformative technological revolutions. (For example, if some new tech is inexpensive, and meanwhile other sectors of the economy remain expensive due to regulatory restrictions, then the new tech might not impact GDP much, no matter how much it upends the world.) I mean, sure we can argue about GDP, but we shouldn’t treat it as a proxy battle over whether AGI will or won’t be a big deal.
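To make that parenthetical concrete, here's a toy calculation with entirely made-up numbers: a technology that becomes 100× more capable while its price falls 100× leaves nominal GDP essentially untouched, so long as the rest of the economy stays expensive.

```python
# Toy illustration of the GDP point above; all numbers are made up.
# The economy spends 90 on "expensive, regulated" sectors and 10 on a new technology.
spending_before = {"regulated sectors": 90.0, "new tech": 10.0}

# The new tech becomes 100x more capable (quantity x100) at 1/100 the price:
# transformative for users, but nominal spending on it is unchanged.
spending_after = {"regulated sectors": 90.0, "new tech": 10.0 * 100 * (1 / 100)}

print(sum(spending_before.values()), sum(spending_after.values()))  # 100.0 100.0

# Nominal GDP doesn't move at all. How much measured *real* GDP moves depends on
# index construction, and none of it counts the consumer surplus from near-free
# goods (the same reason free Wikipedia adds ~nothing to GDP).
```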
Last and most importantly, THE FOURTH PIECE of Econ anti-pedagogy is the focus on “mutually-beneficial trades” over “killing people and taking their stuff”. Econ 101 proves that trading is selfishly better than isolation. But sometimes “killing people and taking their stuff” is selfishly best of all.
When we’re talking about AGI, we’re talking about creating a new intelligent species on Earth, one which will eventually be faster, smarter, better-coordinated, and more numerous than humans.
Normal people, people who have seen sci-fi movies about robots and aliens, people who have learned the history of colonialism and slavery, will immediately ask lots of reasonable questions here. “What will their motives be?” “Who will have the hard power?” “If they’re seeming friendly and cooperative early on, might they stab us in the back when they get more powerful?”
These are excellent questions! We should definitely be asking these questions! (FWIW, this is my area of expertise, and I’m very pessimistic.)
…And then those normal people take economics classes, and wind up stupider. They stop asking those questions. Instead, they “learn” that AGI is “capital”, kinda like an injection-molding machine. Injection-molding machines wouldn’t wipe out humans and run the world by themselves. So we’re fine. Lol.
…Since actual AGI is so foreign to economists’ worldviews, they often deny the premise. E.g. here’s Tyler Cowen demonstrating a complete lack of understanding of what we doomers are talking about, when we talk about future powerful AI.
And here’s Daron Acemoglu assuming without any discussion that in the next 10 yrs, “AI” will not include any new yet-to-be-developed techniques that go way beyond today’s LLMs. Funny omission, when the whole LLM paradigm didn’t exist 10 yrs ago!
(Tbc, it’s fine to make that assumption! Maybe it will be valid, or maybe not, who knows, technological forecasting is hard. But when your paper depends on a giant load-bearing assumption about future AI tech progress, an assumption which many AI domain experts dispute, then that assumption should at least be clearly stated! Probably in the very first sentence of the paper, if not the title!)
And here’s another example of economists “arguing” against AGI scenarios by simply rejecting out of hand any scenario in which actual AGI exists. Many such examples…
I think part of the problem is people taking human brains for granted instead of treating them as an existence proof that today’s LLMs are nowhere near the ceiling of what’s possible with AI ↓ (source)
1.3.2 Three increasingly-radical perspectives on what AI capability acquisition will look like
Here are three perspectives:
- Economists and other people who see AI as a normal technology: “If we want AI to work in some new application area, like some particular industrial design workflow, then humans need to do a lot of R&D work to develop and integrate the AI into this task.”
- LLM-focused AGI person: “Ah, that’s true today, but eventually other AIs can do this ‘development and integration’ R&D work for us! No human labor need be involved!”
- Me: “No! That’s still not radical enough! In the future, that kind of ‘development and integration’ R&D work just won’t need to be done at all—not by humans, not by AIs, not by anyone! Consider that there are 8 billion copies of basically one human brain design, and if a copy wants to do industrial design, it can just figure it out. By the same token, there can be basically one future AGI design, and if a copy wants to do industrial design, it can just figure it out!”
Another place this comes up is robotics:
- Economists: “Humans will need to do R&D to invent good robotics algorithms.”
- LLM-focused AGI person: “Future powerful AIs will need to do R&D to invent good robotics algorithms.”
- Me: “Future powerful AI will already be a good robotics algorithm!”
…After all, if a human wants to use a new kind of teleoperated robot, nobody needs to do a big R&D project or breed a new subspecies of human. You just take an off-the-shelf bog-standard human brain, and if it wants to pilot a new teleoperated robot, it will just autonomously figure out how to do so, getting rapidly better within a few hours. By the same token, there can be one future AGI design, and it will be able to do that same thing.
This part overlaps with my earlier post: Applying traditional economic thinking to AGI: a trilemma