I love markets. I earned a Ph.D. in economics from the University of Chicago, wrote a free market microeconomics textbook, and recognize that markets have made most people in the developed world richer than medieval kings. Yet I'm increasingly convinced that these very markets are steering us towards the creation of an uncontrollable artificial superintelligence that would probably kill everyone. In this essay I discuss how AI existential risk exploits the distinctive weaknesses of markets; it's as if a vengeful demon, channeling Karl Marx, had designed a scenario to highlight their flaws. My thesis is that if you believe a smarter-than-human AI would create a significant existential risk, then even if you are normally a staunch advocate of free markets, you should be concerned that those markets are leading us towards a disastrous outcome.

In my microeconomics classes I present a hypothetical scenario to students, drawing inspiration from an example by the great 18th-century economist Adam Smith. Imagine that, while cooking and watching TV, you see a news report about a devastating earthquake in an unfamiliar country, claiming over 100,000 lives, including many children. The images of the catastrophe deeply unsettle you. In your distraction, you accidentally cut off the tip of your pinky finger. When I ask what they would say upon calling their mother in distress and being asked what's wrong, my students admit they would mention the finger injury before the distant tragedy, revealing an innate tendency to value personal afflictions over the suffering of strangers, no matter the scale.

In an ideal world, the economy would operate with each individual dedicated to the greater good of humanity. However, as E.O. Wilson wryly noted about communism, “Wonderful theory, wrong species.” While we may deeply care for our immediate circle—friends and family—the same level of concern rarely extends to distant others. So how have we managed to build an extensive, interconnected global economy? The insight of Adam Smith is key here: "It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest." Our economic system leverages this self-interest; it's structured so that serving others often coincides with personal gain. For instance, a baker who makes tastier or more affordable bread not only boosts his profits but also serves the community. He is guided as if by an “invisible hand” to understand and cater to his customers' desires, sometimes even before they do. 

But markets don't magically align our self-interest with the broader good. There are challenges to market efficiency that create a wedge between profit maximization and the betterment of society. A critical wedge is externalities: costs (or benefits) of an activity that fall on people not directly involved in it. Take, for example, a bakery whose operations generate noise pollution harmful to nearby residents.

Although consumers might generally prefer a market where all businesses mitigate externalities such as noise pollution, their individual purchasing decisions often don't reflect this preference. This is because businesses that don't invest in reducing externalities can offer lower prices. For an individual consumer, the negligible impact of a single purchase on the overall noise pollution levels often leads to a preference for cheaper options. Consequently, there's a paradox where everyone might agree that a market with less noise pollution is preferable, but individual incentives lead to a collective outcome where businesses that ignore such externalities thrive. Yet, this situation is far from hopeless.
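
To make the wedge concrete, here is a minimal numerical sketch of the consumer's dilemma. All prices, counts, and harm figures are invented for illustration; nothing here comes from real data.

```python
# Toy model of the noise-pollution wedge. All numbers are invented
# for illustration, not estimates.

N_CONSUMERS = 1000
PRICE_QUIET = 3.00   # loaf price at a bakery that pays to abate its noise
PRICE_NOISY = 2.80   # loaf price at a bakery that ignores the externality
NOISE_COST_PER_LOAF = 0.50 / N_CONSUMERS  # harm each purchase imposes on each resident

def individual_payoff(buys_noisy: bool, others_buying_noisy: int) -> float:
    """Net payoff to one consumer: savings from their own choice minus
    the noise harm generated by everyone's purchases (including their own)."""
    savings = (PRICE_QUIET - PRICE_NOISY) if buys_noisy else 0.0
    total_noisy = others_buying_noisy + (1 if buys_noisy else 0)
    return savings - total_noisy * NOISE_COST_PER_LOAF

# Whatever everyone else does, buying from the noisy bakery is individually better:
for others in (0, 500, 999):
    gain = individual_payoff(True, others) - individual_payoff(False, others)
    print(f"others defecting = {others}: my gain from defecting = {gain:.4f}")  # 0.1995

# ...yet universal defection leaves each consumer worse off than universal restraint:
print(individual_payoff(True, 999))   # -0.30: everyone defects
print(individual_payoff(False, 0))    #  0.00: no one defects
```

Under these assumptions, defecting is a dominant strategy even though universal defection leaves everyone worse off, which is exactly why individual purchasing decisions fail to express the shared preference for quieter streets.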

Direct negotiations might address this local externality, given the small number of stakeholders and their probable existing relationships. The bakery could offer compensation or complimentary bread to the affected residents, or alternatively, the residents might impose a cost on the bakery as an incentive to resolve the issue. Alternatively, residents bothered by the noise might relocate, making way for others less sensitive to it. While this scenario highlights a market challenge in promoting societal benefit, it often lends itself to resolution through community-level interventions.
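
A hedged numeric sketch of how such a Coasean bargain could work. The profit, harm, and abatement figures below are assumptions chosen to create a bargaining range, not estimates of any real bakery's economics.

```python
# Illustrative Coasean bargain over the local noise externality.
# All figures are assumed for the sake of the example.

bakery_profit_from_night_baking = 1_000   # monthly profit from the noisy activity
residents_total_noise_harm = 600          # combined monthly harm to neighbors
soundproofing_cost = 400                  # one option for abating the noise

# Since abatement (400) costs less than the harm it removes (600), there is a
# bargaining range: residents would pay up to 600 to end the noise, and the
# bakery would accept any payment above 400 to install soundproofing.
low, high = soundproofing_cost, residents_total_noise_harm
assert low < high, "no mutually beneficial deal exists"
print(f"any side payment between {low} and {high} makes everyone better off")
```

If the abatement cost instead exceeded the residents' combined harm, the efficient outcome would flip: the bakery keeps baking and, depending on who holds the relevant rights, compensates the residents.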

When the bakery's operations result in city-wide externalities like air pollution, the challenge for market solutions intensifies. The problem's vast scale renders direct negotiations between the bakery and each impacted resident impractical. Moreover, the idea of residents relocating to escape the pollution is unrealistic, considering the large number affected and the extreme measure of leaving the city.

Unchecked pollution externalities can severely degrade urban environments, challenging the effectiveness of markets. However, even imperfect governments can play a successful role in managing these externalities, as demonstrated by the clean air and water in almost all the cities of wealthier nations. Furthermore, concrete air quality measurements greatly assist in developing and assessing relevant policies. Governments, already accustomed to managing disputes among city residents, are frequently well-equipped to address these issues.  While pollution within a legal jurisdiction presents a significant challenge to market-based solutions, I believe it doesn't negate the overall advantages of market systems.

To add complexity to the challenge to market efficiency, envision a scenario where a company's actions result in global pollution, an externality affecting everyone. Individual negotiations or domestic solutions are ineffective here; global cooperation is essential. However, international decision-making is notoriously less developed and more complex than national governance, compounded by conflicts between countries. This complexity echoes the difficulty of tackling climate change, a major global concern that remains unsolved despite its widely acknowledged severity and despite proposed remedies, such as a carbon tax, that are popular among economists.

Further compounding the challenge to market effectiveness, consider pollution that yields immediate benefits but harbors long-term negative consequences. Politicians, driven by self-interest, often focus on short-term gains, reducing their motivation to tackle such externalities. This is evident in democratic systems where the complexity of issues and the perceived insignificance of individual votes lead to a lack of accountability for long-term outcomes. This phenomenon can be seen in the ongoing crisis of U.S. entitlement programs, where delaying resolution for short-term comfort exacerbates future costs. This contrasts with a homeowner who, knowing his roof will fail in ten years, understands the economic benefit of early repair to enhance the home's resale value even if he plans on selling the house this year. Therefore, anticipating that politicians will steer markets towards long-term sustainability is unrealistic, akin to expecting children to persuade their parents to opt for healthy family diets over the immediate gratification of sweets.
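
The roof example works because housing markets capitalize future costs into today's price. Here is a small sketch under assumed numbers: a 5% discount rate, a fully informed buyer, and preventive repair being cheaper than replacement after failure.

```python
# A buyer who knows the roof will fail discounts the future replacement cost
# into their offer, so even a seller leaving this year internalizes it.
# All figures are assumptions for illustration.

DISCOUNT_RATE = 0.05
YEARS_UNTIL_FAILURE = 10
PREVENTIVE_REPAIR_NOW = 8_000       # fix the problem early
REPLACEMENT_AFTER_FAILURE = 20_000  # cost once the roof actually fails

# Present value of the repair bill the buyer expects to inherit:
price_discount = REPLACEMENT_AFTER_FAILURE / (1 + DISCOUNT_RATE) ** YEARS_UNTIL_FAILURE
print(f"buyer's price reduction if unrepaired: {price_discount:,.0f}")  # ~12,278

# Seller's net position under each choice (relative to a sound-roof sale):
sell_unrepaired = -price_discount          # ~ -12,278
sell_repaired = -PREVENTIVE_REPAIR_NOW     # -8,000
print("early repair pays:", sell_repaired > sell_unrepaired)  # True
```

A politician has no analogous mechanism: there is no "resale price" for a country that automatically docks today's officeholder for costs arriving after the next election.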

Adding a layer of confusion only intensifies our externality dilemma. Suppose there is a division of opinion about whether the pollution is beneficial, uncertainty about when its effects will manifest, and a general consensus that the mechanisms at play are beyond the grasp of the political elite. This is the case with AI existential risk.

In a free market economy, profit serves as the primary motivator for innovation and development. Companies and investors are constantly seeking new technologies and advancements that promise significant returns. AI is one of these frontier technologies, and its potential for profit is enormous. From automating routine tasks to revolutionizing industries like healthcare, finance, and transportation, AI promises to unlock new levels of efficiency and capability. AI offers trillions of dollars of profit and enormous consumer satisfaction; markets incentivize its development like nothing else in history.

Competition propels us towards artificial superintelligence: any AI firm that slows its pace risks being overtaken by others, and workers understand that refusing to engage in capabilities research merely leads to their replacement. Consequently, even if you personally think AI will probably kill everyone, you can morally justify working on capabilities research because you know the marginal benefit to AI safety of your quitting is determined only by how much better you are at your job than the person who would be hired to replace you. We have a true market failure when, even if all the relevant participants recognized they were pushing humanity towards extinction and were all willing to accept a significant reduction in wealth for a non-trivial reduction in risk, the logic of markets and competition could keep them working towards quickly creating an artificial superintelligence.
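
A toy model of the researcher's calculus described above. The risk function and productivity numbers are pure assumptions; the point is only that the safety gain from quitting scales with the gap between you and your replacement, not with your absolute contribution.

```python
# Toy calculus behind "quitting barely helps": the safety gain from one
# researcher leaving is only the capability gap between them and their
# replacement. All quantities are invented for illustration.

def extinction_risk(total_capability: float) -> float:
    """Assumed mapping from industry-wide capabilities progress to risk."""
    return min(1.0, 0.2 + 0.01 * total_capability)

INDUSTRY_CAPABILITY = 50.0   # combined output of everyone else
MY_OUTPUT = 1.00             # my contribution to capabilities progress
REPLACEMENT_OUTPUT = 0.95    # the next hire is nearly as productive

risk_if_i_stay = extinction_risk(INDUSTRY_CAPABILITY + MY_OUTPUT)
risk_if_i_quit = extinction_risk(INDUSTRY_CAPABILITY + REPLACEMENT_OUTPUT)
print(f"risk reduction from quitting: {risk_if_i_stay - risk_if_i_quit:.5f}")
# ~0.00050 -- negligible, even for someone who takes the risk seriously
```

Each worker's quitting is nearly pointless in isolation, which is how a group that unanimously fears the outcome can still collectively race toward it.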

While the pursuit of profit motivates firms to develop increasingly advanced AIs, the potential catastrophic consequences represent an externality that current market structures struggle to address. The existential risk of AI, being global, elusive, and long-term, sharply contrasts with its immediate, tangible benefits. This mismatch challenges governments to intervene effectively.

Confronting the AI existential risk may require revisiting a concept highlighted by Adam Smith: the propensity for collusion in any trade to reduce competition. He wrote: "People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices." This inherent tendency towards collusion among market players could be leveraged in regulating AI. Permitting collusion among existing AI firms could decelerate the race to create a superintelligent computer. While this would likely involve restricting competition, leading to higher profits and reduced innovation for these firms, it could be a lesser evil compared to the alternative risks of unregulated AI development. However, admittedly, this approach carries its own dangers, such as the possibility of these firms becoming too powerful or even using their position to take over the world.
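
One way to see both the appeal and the fragility of sanctioned collusion is to model the race between two labs as a one-shot game. The probabilities and payoffs below are invented for illustration; doom is assumed to wipe out everyone's profit.

```python
# The race between two AI labs as a one-shot game. Probabilities and
# payoffs are invented assumptions; doom zeroes out all profit.

P_DOOM = {("race", "race"): 0.30, ("race", "pause"): 0.20,
          ("pause", "race"): 0.20, ("pause", "pause"): 0.05}
PROFIT = {("race", "race"): 60, ("race", "pause"): 100,   # racer wins big...
          ("pause", "race"): 20, ("pause", "pause"): 60}  # ...pauser falls behind

# Lab A's expected payoff for each (A's move, B's move):
payoffs = {moves: (1 - P_DOOM[moves]) * PROFIT[moves] for moves in P_DOOM}
for moves, value in sorted(payoffs.items()):
    print(moves, round(value, 1))
# ('pause', 'pause') 57.0   ('pause', 'race') 16.0
# ('race', 'pause') 80.0    ('race', 'race') 42.0
```

Racing dominates under these numbers (80 > 57 against a pauser, 42 > 16 against a racer), so the competitive equilibrium is (race, race) at 42 even though both labs prefer (pause, pause) at 57. Sanctioned collusion is an attempt to move to (pause, pause); the catch is that each lab still gains 80 - 57 = 23 by secretly defecting, so the cartel needs enforcement to hold.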


Written with the assistance of GPT-4.

Comments

gjm:

Meta: If you're going to say "Written with the assistance of GPT-4" then I, at least, want to know roughly what GPT-4 assisted with. Did you use it to clean up little mistakes in spelling and grammar? (In that case, I don't actually see why it's necessary to mention GPT-4 at all. "Written with the assistance of the Chambers English Dictionary.") Did you write a first version of the article and use GPT-4 to make the writing style more, well, GPT-4-like? Did you say "Please write me an article about economics and AI risk suitable for posting on Less Wrong"? Or what? The inferences I draw both about the article and about GPT-4 are going to be quite different in those different cases.

Author's reply:

The big thing I used it for was asking it to find sentences it thinks it can improve, and then having it give me the improved sentences. I created this GPT to help with my writing: https://chat.openai.com/g/g-gahVWDJL5-iterative-text-improver

Roko:

You can think of AI x-risk in terms of the Coase theorem: AI work creates an externality, at least in expectation, so we can solve that with Coasean bargaining if

  • there are strong property rights
  • transaction costs are low

The problem with the AI risk externality is that long-term property rights on Earth are very weak: countries typically collapse, and all property that was enforced by the government is lost. AI takeovers are dangerous because they could crush the current governments and just take everything.

I think the transaction costs are probably dominated by information problems (which AIs are actually dangerous?), but costs of negotiation are also something to consider. Still, I think these are the relatively tamer problems; the big one is how to create stronger property rights.

So we mostly need to think about how to make property rights stronger. This can be achieved in various ways:

  • use legislation to place limitations on AI
  • use AI to make current governments more resistant to coups or other forms of influence
  • use AI to outright replace current governments with a new governance system that's much more resistant to coups or other forms of unwanted influence
  • make takeoff more gradual and less sharp (reduce the "curvature of takeoff"), for example by allocating more funding to AI companies earlier in the process (which is counterintuitive - faster can be safer if it reduces the curvature of takeoff)

Author's reply:

I meant the noise pollution example in my essay to illustrate the Coase theorem, but I agree with you that property rights are not strong enough to solve the AI risk problem. I agree that AI will open up new paths for solving all kinds of problems, including giving us solutions that could end up helping with alignment.

Another commenter:

The article makes this claim:

"Competition propels us towards artificial superintelligence: any AI firm that slows its pace risks being overtaken by others, and workers understand that refusing to engage in capabilities research merely leads to their replacement."

I agree that even if a worker values his own survival above all else and believes ASI is both near at hand and bad, he plausibly doesn't make himself better off by quitting his job. But given that the CEO of an AI firm has more control over the allocation of the firm's resources, if he values survival and believes that ASI is near/bad, is his best move really to continue steering resources into capabilities development?

[anonymous]:

Summarizing:

  • Free markets are efficient methods of coordination.

  • Markets Goodhart for profits, which can lead to negative externalities (and corporate compensation structures can lead to Goodharting for quarterly profits; see Boeing for where this can fail).

  • Governments must force externalities to be internalized, and governments must coordinate among each other or you simply have an arms race (ideally via taxes and fees that slightly exceed the damages caused by the externalities, rather than bans or years wasted waiting for approval).

  • You propose allowing AI labs to collude, as an exception to antitrust law, but this is unlikely to work because of the defector problem: your proposal creates a large incentive to defect.

Counterpoint: As a PhD who wrote a textbook, you are likely aware of at least the basics of Cold War history. Nuclear weapons have several significant negative externalities:

a. radioactive waste and contamination of the earth

b. risk of an unauthorized or accidental use

c. risk of escalation leading to the destruction of most of the cities in the developed world


And theoretically governments should have been able to coordinate or collude to build no nuclear weapons; they are clearly a hazard simply to have. "EAs" are worried about existential risks, and nuclear arsenals have until recently been the largest credible x-risk: a nonzero yearly risk of launch or escalation means that, over a long enough timeline, a major nuclear war is inevitable. Currently the three largest arsenals are effectively in the hands of dictators, including the US president, who requires only the consent of one other official, whom the president appointed, in order to launch. In addition, in the coming US election cycle, voters will choose which elderly dictator to give the nuclear launch codes to, at least one of whom appears to be particularly unstable.

Conclusion: if governments can't coordinate to reduce nuclear arsenal sizes below 'assured destruction' levels, it is difficult to see how meaningful coordination to remove AI risks could happen. This puts governments in the same situation as in the early 1950s, when, despite the immense costs, there was no choice but to proceed with building nuclear arsenals and exorbitantly expensive defense systems, including the largest (in physical size) computers ever built: https://en.wikipedia.org/wiki/AN/FSQ-7_Combat_Direction_Central

However, just as in the 1950s plutonium didn't need to be sold on the civilian market without restrictions, very high-end hardware for AI, especially specialized hardware able to run inference on or train the largest neural networks, probably has to be controlled similarly to plutonium, with only small samples available to private businesses.

Author's reply:

I agree with the analogy in your last paragraph, and this gives hope for governments slowing down AI development, if they have the will.

[anonymous]:

This wouldn't mean any slowdown; acceleration, probably. Just AI development at government labs with unlimited resources and substantial security instead of at private ones.

To use my analogy: if the government didn't restrict plutonium, private companies would still have taken longer to develop fusion-boosted nukes and test them at private ranges with the aim of getting a government contract. A private nuke testing range is going to need a lot of private funding to purchase.

Less innovation, but with recursive self-improvement (RSI) you probably don't need new innovation beyond a variant of current models (because you train AIs to learn from data what makes the most powerful AIs).

O O:

This will only work if we move past GPUs to ASICs or some other specialized hardware made for training specific AIs. GPUs are too useful and widespread in everything else to be controlled that tightly. Even the China ban is being circumvented, with Chinese companies using shell companies in other countries (obvious if you look at sales numbers).