Someone who is interested in learning and doing good.
My Twitter: https://twitter.com/MatthewJBar
My Substack: https://matthewbarnett.substack.com/
And so I don't really think that existential risk is caused by "unemployment". People are indeed confused about the nature of comparative advantage, and mistakenly assume that a lack of competitiveness will lead to loss of jobs, which will then be bad for them.
People are also confused about the meaning of words like "unemployment" and how and why it can be good or bad. If being unemployed merely means not having a job (i.e., not participating in the labor force), then plenty of people are unemployed by choice, well off, happy, and doing well. These are called retired people.
One way labor force participation can be high is if everyone is starving and needs to work all day in order to survive. Another way labor force participation can be high is if maintaining a job is extremely satisfying and comes with tons of benefits. My point is that you cannot conclude whether a change is "bad" or "good" if all you know is that this statistic will go up or down. To determine whether changes to this variable are bad, you need to understand more about the context in which the variable is changing.
To put this more plainly, the idea that machines will take our jobs generally means one of two things. Either it means that machines will push down overall human wages and make humans less competitive across a variety of tasks. This is directly related to x-risk concerns because it is a direct effect of AIs becoming more numerous and more productive than humans. It makes sense to be concerned about this, but it's imprecise to describe it as "unemployment": the problem is not that people are unemployed, the problem is that people are getting poorer.
Or, the idea that machines will take our jobs means that it will increase our total prosperity, allowing us to spend more time in pleasant leisure and less time in unpleasant work. This would probably be a good thing, and it's important to strongly distinguish it from the idea that wages will fall.
In my view, Baumol's cost disease is poorly named: the name suggests that certain things are getting more expensive, but if "more expensive" means "society (on the whole) cannot afford as much as it used to" then this implication is false. To be clear, it is definitely possible that things like healthcare and education have gotten less affordable for a median consumer because of income inequality, but even if that's true, it has little to do with Baumol's cost disease per se. As Scott Alexander framed it,
The Baumol effect cannot make things genuinely less affordable for society, because society is more productive and can afford more stuff. However, it can make things genuinely less affordable for individuals, if those individuals aren’t sharing in the increased productivity of society.
I don't think that the number of employees per patient in a hospital or the number of employees per student in a university is lower today than it was in the 1980s, even if hospitals and universities have improved in other ways.
I think this is likely wrong, at least for healthcare, but I'd guess for education too. For healthcare, Random Critical Analysis has written about the data, and I encourage you to look at their analysis.
There is also a story of sclerosis and stagnation. Sure, lots of frivolous consumer goods have gotten cheaper but healthcare, housing, childcare, and education, all the important stuff, has exploded in price.
I think the idea that this chart demonstrates sclerosis and stagnation in these industries—at least in the meaningful sense of our economy getting worse at producing or affording these things—is largely a subtle misunderstanding of what the chart actually shows. (To be clear, this is not an idea that you lean on much in this post, but I still think it's important to try to clarify some misconceptions.)
Prices are relative: it only makes sense to discuss the price of X relative to Y, rather than X's absolute price level. Even inflation is a relative measure: it shows the price of a basket of goods and services relative to a unit of currency.
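To put the same point in symbols (my own notation, just to make the claim concrete): the price of X that actually matters is a ratio,

$$\text{relative price of } X = \frac{P_X}{P_Y},$$

where $P_X$ and $P_Y$ are nominal prices. A price index like the CPI is the same kind of ratio, with a fixed basket of goods and services in the numerator and a unit of currency in the denominator.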
With this context in mind, we should reconsider what it means for the items at the top of the chart to have "exploded in price". There are several possible interpretations:
If the economic stagnation narrative were accurate, we would expect:
However, I argue that, when correctly interpreted under the appropriate measures, there's little evidence that healthcare, housing, childcare, and education have become significantly less affordable for an average (not median) consumer. Moreover, I claim that the chart is consistent with this view.
To reconcile my claim with the chart, it's crucial to distinguish between two concepts: average income and average wages. Income encompasses all money received by an individual or household from various sources, including wages, non-wage benefits, government assistance, and capital investments.
Average income is a broader and more appropriate way to measure whether something is becoming less "affordable" in this context, since what we care about is whether our economy has stagnated in the sense of becoming less productive. I personally think a more appropriate way to measure average income is via nominal GDP per capita. If we use this measure, we find that average incomes have risen approximately 125% from 2000-2023, which is substantially more than the rise in average wages over the same time period, as shown on the chart.
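As a rough sanity check of that figure (using approximate values that I'm treating as assumptions here: nominal GDP per capita of roughly $36,000 in 2000 and roughly $81,000 in 2023):

$$\frac{81{,}000}{36{,}000} - 1 = 1.25 = 125\%.$$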
Using average wages for this analysis is problematic because it overlooks additional income sources that people can use to purchase goods and services. This approach also introduces complexities in interpretation, for example because you'd need to account for a declining labor share of GDP. If we focused on wages rather than average income, we would risk misinterpreting the decrease in average wages relative to certain services as a real decline in our ability to afford these things, instead of recognizing it more narrowly as a shift in the price of labor compared to these services.
A closer examination of the chart reveals that only four items have increased in price by more than 125% over the given period: Medical Care Services, College Textbooks, College Tuition and Fees, and Hospital Services. This immediately implies that, according to the chart, childcare and housing have actually become more affordable relative to average incomes. For the remaining items, I argue that they don't accurately represent the overall price levels of healthcare and education. To support this claim, let's break down each of these components:
To avoid having this comment misinterpreted, I need to say: I'm not saying that everything has gotten more affordable in the last 25 years for the median consumer. I'm not making any significant claims about inequality either, or even about wage stagnation. I'm talking about a narrower claim that I think is most relevant to the post: whether the chart demonstrates substantial economic stagnation, in the sense of our economy getting worse at producing certain stuff over time.
What is different this time?
I'm not confident in the full answer to this question, but I can give some informed speculation. AI progress seems to rely principally on two driving forces: scaling up hardware, and improving the software and algorithms.
On the hardware scaling side, there's very little that an AI lab can patent. The hardware itself may be patentable: for example, NVIDIA enjoys a patent on the H100. However, the mere idea of scaling hardware and training for longer are abstract ideas that are generally not legally possible to patent. This may help explain why NVIDIA currently has a virtual monopoly on producing AI GPUs, but there is essentially no barrier to entry for simply using NVIDIA's GPUs to train a state-of-the-art LLM.
On the software side, it gets a little more complicated. US courts have generally held that abstract specifications of algorithms are not subject to patents, even though specific implementations of those algorithms are often patentable. As one Federal Circuit Judge has explained,
In short, [software and business-method patents], although frequently dressed up in the argot of invention, simply describe a problem, announce purely functional steps that purport to solve the problem, and recite standard computer operations to perform some of those steps. The principal flaw in these patents is that they do not contain an "inventive concept" that solves practical problems and ensures that the patent is directed to something "significantly more than" the ineligible abstract idea itself. See CLS Bank, 134 S. Ct. at 2355, 2357; Mayo, 132 S. Ct. at 1294. As such, they represent little more than functional descriptions of objectives, rather than inventive solutions. In addition, because they describe the claimed methods in functional terms, they preempt any subsequent specific solutions to the problem at issue. See CLS Bank, 134 S. Ct. at 2354; Mayo, 132 S. Ct. at 1301-02. It is for those reasons that the Supreme Court has characterized such patents as claiming "abstract ideas" and has held that they are not directed to patentable subject matter.
This generally limits the degree to which an AI lab can patent the concepts underlying LLMs, and thereby try to restrict competition via the legal process.
Note, however, that standard economic models of economies of scale generally predict that there should be a high concentration of firms in capital-intensive industries, which seems to be true for AI as a result of massive hardware scaling. This happens even in the absence of regulatory barriers or government-granted monopolies, and it predicts what we observe fairly well: a small number of large companies at the forefront of AI development.
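As a minimal sketch of the standard model (the symbols are mine): if developing a frontier model requires a large fixed cost $F$ and each additional unit of output has constant marginal cost $c$, then average cost is

$$AC(q) = \frac{F}{q} + c,$$

which falls as output $q$ grows. Firms already operating at large scale can therefore undercut small entrants, and the market concentrates even without patents or regulatory barriers.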
Concretely, what does it mean to keep a corporation "in check" and do you think those mechanisms will not be available for AIs?
I still think I was making a different point. For more clarity and some elaboration: I previously argued in a short form post that the expected costs of a violent takeover can exceed the benefits even if the costs are small. The reason is that, at the same time that taking over the entire world becomes easier, the benefits of doing so can also get lower relative to compromise. Quoting from my post,
The central argument here would be premised on a model of rational agency, in which an agent tries to maximize benefits minus costs, subject to constraints. The agent would be faced with a choice: (1) Attempt to take over the world, and steal everyone's stuff, or (2) Work within a system of compromise, trade, and law, and get very rich within that system, in order to e.g. buy lots of paperclips. The question of whether (1) is a better choice than (2) is not simply a question of whether taking over the world is "easy" or whether it could be done by the agent. Instead it is a question of whether the benefits of (1) outweigh the costs, relative to choice (2).
In my comment in this thread, I meant to highlight the costs and constraints on an AI's behavior in order to explain how these relative cost-benefits do not necessarily favor takeover. This is logically distinct from arguing that the cost alone of takeover would be high.
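As a toy formalization of this point (my own sketch, not something from the original short form post), the relevant comparison looks roughly like this:

```python
# Toy model: an agent compares the expected net payoff of attempting takeover
# against the payoff of working within a system of trade and law.
# All numbers below are made up purely for illustration.
def prefers_takeover(p_success: float, gain_takeover: float,
                     cost_conflict: float, gain_trade: float) -> bool:
    """True if the expected net payoff of takeover exceeds the payoff of trade."""
    expected_takeover = p_success * gain_takeover - cost_conflict
    return expected_takeover > gain_trade

# Even if takeover is cheap and very likely to succeed, it can still be
# irrational when the gains from lawful trade are nearly as large.
print(prefers_takeover(p_success=0.9, gain_takeover=100,
                       cost_conflict=5, gain_trade=95))  # False
```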
I think people are and should be concerned about more than just violent or unlawful takeovers. Exhibit A: Persuasion/propaganda.
Unfortunately I think it's simply very difficult to reliably distinguish between genuine good-faith persuasion and propaganda over speculative future scenarios. Your example is on the extreme end of what's possible in my view, and most realistic scenarios will likely instead be somewhere in-between, with substantial moral ambiguity. To avoid making vague or sweeping assertions about this topic, I prefer being clear about the type of takeover that I think is most worrisome. Likewise:
B: For example, suppose the AIs make self-replicating robot factories and bribe some politicians to make said factories' heat pollution legal. Then they self-replicate across the ocean floor and boil the oceans (they are fusion-powered), killing all humans as a side-effect, except for those they bribed who are given special protection.
I would consider this act both violent and unlawful, unless we're assuming that bribery is widely recognized as legal, and that boiling the oceans did not involve any violence (e.g., no one tried to stop the AIs from doing this, and there was no conflict). I certainly feel this is the type of scenario that I intended to argue against in my original comment, or at least it is very close.
I don't think I'm objecting to that premise. A takeover can be both possible and easy without being rational. In my comment, I focused on whether the expected costs to attempting a takeover are greater than the benefits, not whether the AI will be able to execute a takeover with a high probability.
Or, put another way, one can imagine an AI calculating that the benefit to taking over the world is negative one paperclip on net (when factoring in the expected costs and benefits of such an action), and thus decide not to do it.
Separately, I focused on "violent" or "unlawful" takeovers because I think that's straightforwardly what most people mean when they discuss world takeover plots, and I wanted to be more clear about what I'm objecting to by making my language explicit.
To the extent you're worried about a lawful and peaceful AI takeover in which we voluntarily hand control to AIs over time, I concede that my comment does not address this concern.
I'm thinking of this in the context of a post-singularity future, where we wouldn't need to worry about things like conflict or selection processes.
I'm curious why you seem to think we don't need to worry about things like conflict or selection processes post-singularity.
But San Francisco is also pretty unusual, and only a small fraction of the world lives there. The amount of new construction in the United States is not flat over time. It responds to prices, like in most other markets. And in fact, on the whole, the majority of Americans likely have more and higher-quality housing than their grandparents did at the same age, including most poor people. This is significant material progress despite the supply restrictions (which I fully concede are real), and it's similar to, although smaller in size than, what happened with clothing and smartphones.
I'm confused about the clarifications in this post. Generally speaking, I think the terms "alignment", "takeover", and "disempowered" are vague and can mean dramatically different things to different people. My hope when I started reading this post was to see you define these terms precisely and unambiguously. Unfortunately, I am still confused about how you are using these terms, although it could very easily be my fault for not reading carefully enough.
Here is a scenario I want you to imagine, which I think might help clarify where I'm confused:
Suppose we grant AIs legal rights and they become integrated into our society. Humans continue to survive and thrive, but AIs eventually and gradually accumulate the vast majority of the wealth, political power, and social status in society through lawful means. These AIs are sentient, extremely competent, mostly have strange and alien-like goals, and yet are considered "people" by most humans, according to an expansive definition of that word. Importantly, they are equal in the eyes of the law, and have no limitations on their ability to hold office, write new laws, and hold other positions of power. The AIs are agentic, autonomous, plan over long time horizons, and are not enslaved to the humans in any way. Moreover, many humans also upload themselves onto computers and become AIs themselves. These humans expand their own cognition and often choose to drop the "human" label from their personal identity after they are uploaded.
Here are my questions: