Why doesn't a single power rule the world today?

[I'm taking advantage of the new "LW posts as blog posts" format to post something I'm pretty unsure about. I'm working from my memories of the blog posts, and from a discussion I had with Robin Hanson and Katja Grace in late 2012. Please let me know if any of this is inaccurate!]

One of the key differences of opinion in the Hanson-Yudkowsky AI-Foom Debate is about the idea of a "decisive advantage". If I'm not misrepresenting the parties horribly, the idea is that at some point in a world with AGI, some AGI-enabled party uses their greater intelligence to increase their general power: money, resources, control over others, intelligence and suchlike. That greater power increases their ability to gain power, resulting in a snowball effect that ends with some party having control over the outcomes for all of Earth-originating life.

Robin asks the very reasonable question: if that's how things work, why hasn't it already happened? What stops the largest business from using its decisive power over smaller ones to defeat and absorb them, growing ever larger and more powerful until all other businesses fall to its power? Why do we have multiple nations today, when this model would seem to predict that a single state should ultimately conquer and rule all? I don't remember Robin proposing an answer of his own: a mechanism or theoretical model that would lead us to expect multiple powers. But it seems like a good question, and it's bugged me ever since.

I think I'd need to be much more of a student of history than I am to have any confidence in an answer, so let me share some wild speculation that might at least start discussion:

  • Regulation: Such growth isn't an option for legal businesses at all, because states exist. So powerful a business would challenge the power of the state, and the state is in a position to disallow that. The explicit purpose of monopoly legislation is to stop a business which has become very powerful in one area from leveraging that to become powerful elsewhere.

  • Principal-agent problems: it sure would be easier to keep an empire together if you could reliably appoint generals and rulers who always did what you told them to. Especially if the round-trip time for getting them a message is on the order of weeks, and you have to entrust them with the discretion to wield tremendous power in the meantime.

  • Moral norms: Nuclear weapons gave the USA a decisive advantage at the end of WWII. If the USA had been entirely ruthless and bent on power at any cost, it would immediately have used that advantage to cripple all rivals for world superpower and declared its rulership of the world.

I don't expect any of these factors to limit the growth of an AGI. Is there some more general limit to power begetting power that would also affect AGI?


I believe that in real life, it is mostly the principal-agent problem, or coordination problems in general. Groups defeat individuals, but large groups often fall apart under their own weight. When a group becomes powerful, infighting becomes more profitable than fighting for the benefit of the group. An entirely ruthless army is an army that wouldn't mind taking over its own country; and then its generals would optimize for getting into power rather than conquering foreign territories.

If we had robots, capable of thinking like a human, and needing a comparable amount of resources to live and reproduce... and in addition, able to make perfectly loyal copies of themselves... that would already be quite scary, like Scientology on steroids.

People waste a lot of resources competing with each other. Imagine how great it would be to have a group of people whom you could trust 100% to act in your best interest. Having friends is already a great thing, and friends are far from 100% reliable.

One point you neglect that would be especially relevant in the AGI scenario is leakiness of accumulated advantage. When the advantage is tech, the leaks take the pretty concrete form of copying the tech. But there's also a sense that in a globalized world, undeveloped nations will often grow faster, catching up to the more prosperous nations.

Leakiness probably explains why Britain was never strong enough to conquer Europe despite having the Industrial Revolution first.

Another theory is that weaker powers can work together to fend off a stronger power, even if no single weaker power could do it alone. My history professor emphasized this explanation in the context of early modern Europe (c. 1450-1850). This is also one of the justifications for why people are "equal" in a state of nature - another person may be stronger than you, but no single individual is stronger than you plus the allies that you can gather.

A different approach is to see this question as basically asking for the theory of the firm: why is it that some production is organized within a single hierarchically structured firm, while other production involves market transactions between firms? Why not just one all-encompassing firm, or a marketplace of single individuals, or any other balance besides the one that we have? From the Wikipedia page, it looks like there are various proposed explanations which involve transaction costs, asymmetric information, principal-agent problems, and asset specificity. (In the context of countries, I imagine that the analogous question is when to conquer another country and when to just trade with them.)

> another person may be stronger than you, but no single individual is stronger than you plus the allies that you can gather

Why doesn't the stronger individual bring their allies? It should actually be easier, because people should be more willing to join the stronger guy. I think the principal-agent problem shows up here again.

Let me propose a positive, generative theory that generates multiple agents/firms/etc.

Any agent (let's say a nation) starts with a growth rate and a location. As it grows, it gains more power in its location, and power nearby at levels declining exponentially with distance. If a new nation starts at some minimum level of power, not too late and somewhat far from a competing nation, then it can establish itself and will end up having a border with the other nation where their power levels are the same.

Certainly there are growth rates for different nations that would give us fluctuating or expanding borders for new nations, or that would wipe out new nations. Major changes in history could be modeled as different coefficients on the rates of growth or the exponent of decay (or maybe more clearly by different measures of distance, especially with inventions like horses, boats, etc.).

In particular, if the growth rates of all nations are linear, then as long as a nation can come into existence it will be able to expand as it ages from size zero to some stable size. The linear rates could be 5000 and 0.0001 and the slower-growing nation would still be able to persist--just with control of much less space.
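
Here is a minimal sketch of that two-nation case. Apart from the 5000 and 0.0001 growth rates, every number below (home locations, the founding lag, the distance-decay exponent) is an illustrative assumption, and the border check is a naive static comparison of the two power fields rather than a full dynamic simulation:

```python
import math

# Power of nation i at location x and time t:
#   P_i(x, t) = g_i * max(t - t_i, 0) * exp(-lam * |x - x_i|)
# where g_i is a linear growth rate, x_i the home location,
# t_i the founding time, and lam the exponential distance decay.

def power(g, x_home, t_found, lam, x, t):
    """Power projected by one nation at location x and time t."""
    age = max(t - t_found, 0.0)
    return g * age * math.exp(-lam * abs(x - x_home))

# Nation A: founded at t=0 at x=0, fast linear growth.
# Nation B: founded at t=10 at x=10, very slow linear growth.
A = dict(g=5000.0, x_home=0.0, t_found=0.0)
B = dict(g=0.0001, x_home=10.0, t_found=10.0)
lam = 2.0  # distance-decay exponent (assumed)

def border(t, lo=0.0, hi=10.0, steps=10_000):
    """Leftmost point between the two capitals where B's power matches
    or exceeds A's; None means A still dominates the whole interval."""
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        if power(**B, lam=lam, x=x, t=t) >= power(**A, lam=lam, x=x, t=t):
            return x
    return None

for t in [11, 12, 20, 100, 1000, 1_000_000]:
    print(f"t = {t:>9}: border at x = {border(t)}")

# With linear growth on both sides, the founding lag washes out and the
# border converges to x_B/2 + ln(g_A / g_B) / (2 * lam), about 9.43 here:
# the slow-growing nation persists, but controls much less space.
```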

In practice, in history, some nations have become hundreds of times as large as others, but there seems to be a ceiling on how large any single nation gets. There are a few ways this model could account for a different, single-nation scenario.

  1. Non-linear growth rates; in particular, a nation growing exponentially at a rate that matches the distance-decay exponent (no matter what those rates are) would overwhelm any nation growing at a linear rate. Probably it takes even less than that to overwhelm them.

  2. Massive changes to the distance decay. This has been attested historically, at least in small part--European countries expanding into the Americas and Africa with boats, Mongols with horses, even back to the Vikings. This is also analogous to principal-agent problems (and maybe Roman republicanism is another example?). Mathematically, if the exponents are similar it won't matter too much (or at least the 'strongest' state will take 'some time' to overpower the others), but an immediate, large change on the part of the strongest nation would cause it to overwhelm every other nation.

  3. Not having enough space for equilibrium. A new nation can only start in this model if it's far enough away from established nations to escape their influence. We don't see much in the way of new nations starting these days, for example.

This is a toy model that I literally came up with off the top of my head, so I don't mean to claim that it has any really thrilling analogies to reality that I haven't listed above.

I do think it's robustly useful in that if you massively change all the parameters numerically, it should still usually predict that there will be many nations; but if you change the model at a higher, structural level, it could collapse into a singleton.

Why assume AGI doesn't have problems analogous to agency problems? It will have parts of itself that it doesn't understand well, and which might go rogue.

I think that is mainly the point argued in more detail by jedharris. I think it would really be valuable to understand that mechanism in more detail.

Many of us believe that, in human affairs, central planning is dominated by diverse local planning plus markets. Do we really believe that for AGIs, central planning will become dominant? This is surprising.

In general AGIs will have to delegate tasks to sub-agents as they grow, otherwise they run into computational and physical bottlenecks.

Local capabilities of sub-agents raise many issues of coordination that can't just be assumed away. Sub-agents spawned by an AGI must take advantage of local computation, memory and often local data acquisition, otherwise they confer no advantage. In general these local capabilities may cause divergent choices that require negotiation to generate re-convergence between agents. This implies that the assumption of a unified dominant AGI that can scale indefinitely is dubious at best.

Let's look at a specific issue here.

Loyalty is a major issue, directly and indirectly referenced in other comments. Without reliable loyalty, principal-agent problems can easily become crippling. But another term for loyalty is goal alignment. So in effect an AGI has to solve the problem of goal alignment to grow indefinitely by spawning sub-agents.

Corporations solve the problem of alignment internally by inculcating employees with their culture. However, that culture becomes a constraint on their possible responses to challenges, and that can kill them - see the many companies whose culture drove success and then failure.

An AGI with a large population of sub-agents is different in many ways but has no obvious way to escape this failure mode. A change in culture implies changes in goals and behavioral constraints for some sub-agents, quite possibly all. But

  1. this can easily have unintended consequences that the AGI can't figure out since the sub-agents collectively have far more degrees of freedom than the central planner, and

  2. the change in goals and constraints can easily trash sub-agents' existing plans and advantages, again in ways the central planner in general can't anticipate.

To avoid taking the analogy "humans : AGIs" too far, there are a few important differences. Humans cannot be copied. Humans cannot be quickly and reliably reprogrammed. Humans have their own goals besides the goals of the corporation. None of this needs to apply to computer sub-agents.

Also, we have systems where humans are more obedient than usual: cults and armies. But cults need to keep their members uninformed about the larger picture, and armies specialize in fighting (as opposed to e.g. productive economic activities). The AGI society could be like a cult, but without keeping members in the dark, because the sub-agents would genuinely want to serve their master. And it could be economically active, with army levels of discipline.

An article on the supposed inevitability of cancer/death.

I haven't read it fully, but it seems worth figuring out whether similar arguments apply to cultures/machine intelligences or not.

> Moral norms: Nuclear weapons gave the USA a decisive advantage at the end of WWII. If the USA had been entirely ruthless and bent on power at any cost, it would immediately have used that advantage to cripple all rivals for world superpower and declared its rulership of the world.

Dubious. We never had a decisive advantage. At the end of WWII we had a few atomic bombs, yes, but not ICBMs. The missile tech came later. Our delivery method was bombers, and the Soviets were quite capable of shooting those down, unlike the Japanese at the time. Attacking them would have been a terrible risk.

The Soviets had spies and developed their own nuclear weapons much sooner than we anticipated.

In hindsight, there may have been a point early in the Cold War when a nuclear first strike could have worked, but we didn't know that at the time, and we'd have certainly taken losses.

I agree, if the USA had decided to take over the world at the end of WWII, it would have taken absolutely cataclysmic losses. I think it would still have ended up on top of what was left, and the world would have rebuilt, with the USA on top. But not being prepared to make such an awful sacrifice to grasp power probably comes under a different heading than "moral norms".

Another related question comes from biology.

Why aren't we a single organism? That is, why doesn't a single bacterium that is better than all the rest at cooperating and replicating take over the world? Why didn't biology green-goo itself?

There are a few different aspects of this question.

1. Why aren't we in the process of this constantly? That is, why doesn't some asexual bacterium look like it is doing this right now? Sexuality is a tricky question for biology: from a selfish-gene point of view it doesn't make much sense to chuck away half your genes and mix them with another organism's. The best answer we have is that monocultures of bacteria/species are susceptible to parasitism, and it makes sense to ditch half your genes to protect your offspring from that. In the AI case this argues for variation in robotics/software so that you do not get entirely owned by other AIs if they discover a weak point. Variation might lead to unpredictability/principal-agent problems.

2. Would a global monoculture of bacteria be stable if it were to exist (it might have happened with the first reliable replicator)? This seems unlikely for a couple of reasons. The first is cancer: replicators or machinery that have degraded and are malfunctioning can wipe out the whole system if there are no boundaries or protective mechanisms. The bigger the system, and the longer it exists, the more likely you are to get cancer or something like it. On astronomical time scales you have to worry about very weird things quantum tunneling around the place. You might also want boundaries between systems to protect against auto-immune disorders, assuming you have some clean-up mechanism for old/destroyed machinery, which itself also gets old and might malfunction in weird ways, attacking the rest of the system.

> Is there some more general limit to power begetting power that would also affect AGI?

The only one which immediately comes to mind is inflexibility. Often companies shrink or fail entirely because they're suddenly subject to competition armed with a new idea. Why do the new ideas end up implemented by smaller competitors? The positive feedback of "larger companies have more people who can think of new ideas" is dominated by the negative feedbacks of "even the largest company is tiny compared to its complement" and "companies develop monocultures where everyone thinks the same way" and "companies tend to internally suppress new ideas which would devalue the company's existing assets".

Any AGI that doesn't already have half the world's brainpower would be subject to the first limitation (which may mean any AGI who hasn't taken over half the world or just any AGI less than an hour old, depending on how much "foom" the real world turns out to allow, I admit), and an AGI that propagates by self-copying might even be more affected than humans by the second limitation. Whether an AGI was subject to the third limitation or not would depend on its aspirations, I suppose; anything with a "conquer the world" endgame would hardly let itself get trapped in the "just stretch out current revenues as long as possible" mindset of Blockbuster-vs-Netflix...

I think at least three factors hamper the emergence of a single global power:

  1. As others have commented, the coordination problem is a big factor.

  2. A subset of the coordination problem, I think, is that most humans are not linear in their utility: sure, making more money might be attractive in some ranges, but only peculiar individuals chase money or power for their own sake. Maybe you can have a maniac CEO who is willing to stay awake twenty hours a day working on the development of her business, but a lot of people will be content to just work enough and receive enough.

  3. The time-frame for the emergence might be so long as to be unobservable. After all, going from tribes to city-states to regional powers to nations to global corporations has taken millennia. Already global corporations are trying to gain the upper hand against states with special laws, so it might very well be the case that in a few decades the world is going to be dominated by a few big conglomerates.

You can pitch 1 and 3 against each other and see that, by nature, humans don't cooperate spontaneously very well; as technology marches forward, though, and the means to connect more and more people spread, you see the emergence of bigger and bigger powers. Facebook nowadays has more people connected than any nation in history.

It is worth considering that in the context of power, regulations are merely a form of incentive. Compare to the pre-commitment strategies we often discuss.

In the early history of the firm we saw examples of businesses exceeding state power all the time: the British East India Company in India, American shipping companies in Nicaragua, United Fruit in Guatemala. In current affairs, companies like Nestle and Coca-Cola are accused of violently enforcing their interests in Africa. I am confident that, given attention, we either would find or will soon find misbehavior of a similar nature from Chinese firms. And this is only a comparison against states; local governments get run roughshod over routinely, exactly as we would expect from comparing their available resources.

The intuition is that all possible agents are competing in the same field, which is reality. The extent to which they understand the field makes it an information game of sorts. I think this means the fundamental limits on AGI are the same as any other variety of agent: the cost in time and resources of integrating new information. We just expect both of these costs to be very low relative to other agents, and therefore the threat is high.


Alternative reason: there are many axes of winning.

You can be the funniest person in the world but still be "beaten" on the axis of prettiness by being less attractive. You can have the faster app, but the worse interface. You can be the better driver, driving a shitty car.

People are winning all the time, but only on some axes and not others. Winning on all axes is hard; winning on all relevant axes is easier, but still hard. Humans are in a constant race to differentiate themselves from the competition on different axes.

(This is also my reason for disagreeing with the race to the bottom: I don't think it happens, because there are too many axes of contest.)

In the case of countries, the main problem seems to be that as you grow, the population becomes more culturally heterogeneous. People on average disagree more with whatever federal policies are chosen, giving them a reason to split off into smaller countries. Also, coordination costs increase with size.

http://www.eco.uc3m.es/docencia/MicroReadingGroup/Ruben.pdf